edgedetect
Detect and draw edges. The filter uses the Canny Edge Detection
algorithm.
The filter accepts the following options:
low
high
Set low and high threshold values used by the Canny thresholding
algorithm.
The high threshold selects the "strong" edge pixels, which are then
connected through 8-connectivity with the "weak" edge pixels
selected by the low threshold.
low and high threshold values must be chosen in the range [0,1],
and low should be less than or equal to high.
Default value for low is "20/255", and default value for high is
"50/255".
mode
Define the drawing mode.
wires
Draw white/gray wires on black background.
colormix
Mix the colors to create a paint/cartoon effect.
canny
Apply Canny edge detector on all selected planes.
Default value is wires.
planes
Select planes for filtering. By default all available planes are
filtered.
Examples
o Standard edge detection with custom values for the hysteresis
thresholding:
edgedetect=low=0.1:high=0.4
o Painting effect without thresholding:
edgedetect=mode=colormix:high=0
eq
Set brightness, contrast, saturation and approximate gamma adjustment.
The filter accepts the following options:
contrast
Set the contrast expression. The value must be a float value in
range "-2.0" to 2.0. The default value is "1".
brightness
Set the brightness expression. The value must be a float value in
range "-1.0" to 1.0. The default value is "0".
saturation
Set the saturation expression. The value must be a float in range
0.0 to 3.0. The default value is "1".
gamma
Set the gamma expression. The value must be a float in range 0.1 to
10.0. The default value is "1".
gamma_r
Set the gamma expression for red. The value must be a float in
range 0.1 to 10.0. The default value is "1".
gamma_g
Set the gamma expression for green. The value must be a float in
range 0.1 to 10.0. The default value is "1".
gamma_b
Set the gamma expression for blue. The value must be a float in
range 0.1 to 10.0. The default value is "1".
gamma_weight
Set the gamma weight expression. It can be used to reduce the
effect of a high gamma value on bright image areas, e.g. keep them
from getting overamplified and just plain white. The value must be
a float in range 0.0 to 1.0. A value of 0.0 turns the gamma
correction all the way down while 1.0 leaves it at its full
strength. Default is "1".
eval
Set when the expressions for brightness, contrast, saturation and
gamma are evaluated.
It accepts the following values:
init
only evaluate expressions once during the filter initialization
or when a command is processed
frame
evaluate expressions for each incoming frame
Default value is init.
The expressions accept the following parameters:
n frame count of the input frame starting from 0
pos byte position of the corresponding packet in the input file, NAN if
unspecified
r frame rate of the input video, NAN if the input frame rate is
unknown
t timestamp expressed in seconds, NAN if the input timestamp is
unknown
Commands
The filter supports the following commands:
contrast
Set the contrast expression.
brightness
Set the brightness expression.
saturation
Set the saturation expression.
gamma
Set the gamma expression.
gamma_r
Set the gamma_r expression.
gamma_g
Set gamma_g expression.
gamma_b
Set gamma_b expression.
gamma_weight
Set gamma_weight expression.
The command accepts the same syntax as the corresponding option.
If the specified expression is not valid, it is kept at its current
value.
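For example, an illustrative static adjustment (values chosen only
for demonstration):
eq=contrast=1.2:brightness=0.06:saturation=1.3
Animating an option requires per-frame evaluation; a sketch that
slowly oscillates brightness using the t parameter:
eq=eval=frame:brightness='0.1*sin(2*PI*t)'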
erosion
Apply erosion effect to the video.
This filter replaces the pixel by the local (3x3) minimum.
It accepts the following options:
threshold0
threshold1
threshold2
threshold3
Limit the maximum change for each plane. Default is 65535. If 0,
the plane will remain unchanged.
coordinates
Flag which specifies the pixel to refer to. Default is 255 i.e. all
eight pixels are used.
Flags to local 3x3 coordinates map like this:
1 2 3
4   5
6 7 8
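For example, an illustrative command that erodes only the first
plane and limits the per-pixel change to 20 (thresholds of 0 leave
the other planes unchanged):
erosion=threshold0=20:threshold1=0:threshold2=0:threshold3=0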
extractplanes
Extract color channel components from input video stream into separate
grayscale video streams.
The filter accepts the following option:
planes
Set plane(s) to extract.
Available values for planes are:
y
u
v
a
r
g
b
Choosing planes not available in the input will result in an error.
That means you cannot select "r", "g", "b" planes together with
"y", "u", "v" planes at the same time.
Examples
o Extract luma, u and v color channel components from the input
video frame into 3 grayscale outputs:
ffmpeg -i video.avi -filter_complex 'extractplanes=y+u+v[y][u][v]' -map '[y]' y.avi -map '[u]' u.avi -map '[v]' v.avi
elbg
Apply a posterize effect using the ELBG (Enhanced LBG) algorithm.
For each input image, the filter will compute the optimal mapping from
the input to the output given the codebook length, that is the number
of distinct output colors.
This filter accepts the following options.
codebook_length, l
Set codebook length. The value must be a positive integer, and
represents the number of distinct output colors. Default value is
256.
nb_steps, n
Set the maximum number of iterations to apply for computing the
optimal mapping. The higher the value the better the result and the
higher the computation time. Default value is 1.
seed, s
Set a random seed, must be an integer between 0 and UINT32_MAX. If
not specified, or if explicitly set to -1, the filter will try to
use a good random seed on a best effort basis.
pal8
Set pal8 output pixel format. This option does not work with
codebook length greater than 256.
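For example, an illustrative posterize to 16 distinct colors,
spending a few more iterations on the mapping:
elbg=l=16:n=20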
entropy
Measure graylevel entropy in histogram of color channels of video
frames.
It accepts the following parameters:
mode
Can be either normal or diff. Default is normal.
In diff mode the filter measures the entropy of histogram delta
values, i.e. the absolute differences between neighbouring
histogram values.
fade
Apply a fade-in/out effect to the input video.
It accepts the following parameters:
type, t
The effect type can be either "in" for a fade-in, or "out" for a
fade-out effect. Default is "in".
start_frame, s
Specify the number of the frame to start applying the fade effect
at. Default is 0.
nb_frames, n
The number of frames that the fade effect lasts. At the end of the
fade-in effect, the output video will have the same intensity as
the input video. At the end of the fade-out transition, the output
video will be filled with the selected color. Default is 25.
alpha
If set to 1, fade only alpha channel, if one exists on the input.
Default value is 0.
start_time, st
Specify the timestamp (in seconds) of the frame to start to apply
the fade effect. If both start_frame and start_time are specified,
the fade will start at whichever comes last. Default is 0.
duration, d
The number of seconds for which the fade effect has to last. At the
end of the fade-in effect the output video will have the same
intensity as the input video, at the end of the fade-out transition
the output video will be filled with the selected color. If both
duration and nb_frames are specified, duration is used. Default is
0 (nb_frames is used by default).
color, c
Specify the color of the fade. Default is "black".
Examples
o Fade in the first 30 frames of video:
fade=in:0:30
The command above is equivalent to:
fade=t=in:s=0:n=30
o Fade out the last 45 frames of a 200-frame video:
fade=out:155:45
fade=type=out:start_frame=155:nb_frames=45
o Fade in the first 25 frames and fade out the last 25 frames of a
1000-frame video:
fade=in:0:25, fade=out:975:25
o Make the first 5 frames yellow, then fade in from frame 5-24:
fade=in:5:20:color=yellow
o Fade in alpha over first 25 frames of video:
fade=in:0:25:alpha=1
o Make the first 5.5 seconds black, then fade in for 0.5 seconds:
fade=t=in:st=5.5:d=0.5
fftfilt
Apply arbitrary expressions to samples in frequency domain.
The filter accepts the following options:
dc_Y
Adjust the dc value (gain) of the luma plane of the image. The
filter accepts an integer value in range 0 to 1000. The default
value is set to 0.
dc_U
Adjust the dc value (gain) of the 1st chroma plane of the image.
The filter accepts an integer value in range 0 to 1000. The default
value is set to 0.
dc_V
Adjust the dc value (gain) of the 2nd chroma plane of the image.
The filter accepts an integer value in range 0 to 1000. The default
value is set to 0.
weight_Y
Set the frequency domain weight expression for the luma plane.
weight_U
Set the frequency domain weight expression for the 1st chroma
plane.
weight_V
Set the frequency domain weight expression for the 2nd chroma
plane.
eval
Set when the expressions are evaluated.
It accepts the following values:
init
Only evaluate expressions once during the filter
initialization.
frame
Evaluate expressions for each incoming frame.
Default value is init.
The filter accepts the following variables:
X
Y The coordinates of the current sample.
W
H The width and height of the image.
N The number of the input frame, starting from 0.
Examples
o High-pass:
fftfilt=dc_Y=128:weight_Y='squish(1-(Y+X)/100)'
o Low-pass:
fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)'
o Sharpen:
fftfilt=dc_Y=0:weight_Y='1+squish(1-(Y+X)/100)'
o Blur:
fftfilt=dc_Y=0:weight_Y='exp(-4 * ((Y+X)/(W+H)))'
fftdnoiz
Denoise frames using 3D FFT (frequency domain filtering).
The filter accepts the following options:
sigma
Set the noise sigma constant. This sets denoising strength.
Default value is 1. Allowed range is from 0 to 30. Using very high
sigma with low overlap may give blocking artifacts.
amount
Set amount of denoising. By default all detected noise is reduced.
Default value is 1. Allowed range is from 0 to 1.
block
Set the size of block. Default is 4, can be 3, 4, 5 or 6. The
actual size of the block in pixels is 2 to the power of block, so
the default block size in pixels is 2^4 = 16.
overlap
Set block overlap. Default is 0.5. Allowed range is from 0.2 to
0.8.
prev
Set number of previous frames to use for denoising. By default is
set to 0.
next
Set number of next frames to use for denoising. By default is
set to 0.
planes
Set which planes will be filtered. By default all available planes
except alpha are filtered.
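For example, an illustrative spatio-temporal denoise using one
previous and one next frame, with a higher overlap to reduce
blocking:
fftdnoiz=sigma=8:overlap=0.7:prev=1:next=1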
field
Extract a single field from an interlaced image using stride arithmetic
to avoid wasting CPU time. The output frames are marked as non-
interlaced.
The filter accepts the following options:
type
Specify whether to extract the top (if the value is 0 or "top") or
the bottom field (if the value is 1 or "bottom").
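For example, to keep only the top field of each frame:
field=type=top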
fieldhint
Create new frames by copying the top and bottom fields from surrounding
frames supplied as numbers by the hint file.
hint
Set file containing hints: absolute/relative frame numbers.
There must be one line for each frame in a clip. Each line must
contain two numbers separated by a comma, optionally followed by
"-" or "+". Numbers supplied on each line of the file cannot be
outside [N-1, N+1], where N is the current frame number, for
"absolute" mode, or outside the [-1, 1] range for "relative" mode.
The first number tells from which frame to pick up the top field
and the second number tells from which frame to pick up the bottom
field.
If a line is optionally followed by "+", the output frame will be
marked as interlaced; if followed by "-", it will be marked as
progressive; otherwise it will be marked the same as the input
frame. Lines starting with "#" or ";" are skipped.
mode
Can be "absolute" or "relative". Default is "absolute".
Example of first several lines of "hint" file for "relative" mode:
0,0 - # first frame
1,0 - # second frame, use third frame's top field and second frame's bottom field
1,0 - # third frame, use fourth frame's top field and third frame's bottom field
1,0 -
0,0 -
0,0 -
1,0 -
1,0 -
1,0 -
0,0 -
0,0 -
1,0 -
1,0 -
1,0 -
0,0 -
fieldmatch
Field matching filter for inverse telecine. It is meant to reconstruct
the progressive frames from a telecined stream. The filter does not
drop duplicated frames, so to achieve a complete inverse telecine
"fieldmatch" needs to be followed by a decimation filter such as
decimate in the filtergraph.
The separation of the field matching and the decimation is notably
motivated by the possibility of inserting a de-interlacing filter
fallback between the two. If the source has mixed telecined and real
interlaced content, "fieldmatch" will not be able to match fields for
the interlaced parts. But these remaining combed frames will be marked
as interlaced, and thus can be de-interlaced by a later filter such as
yadif before decimation.
In addition to the various configuration options, "fieldmatch" can take
an optional second stream, activated through the ppsrc option. If
enabled, the frames reconstruction will be based on the fields and
frames from this second stream. This allows the first input to be pre-
processed in order to help the various algorithms of the filter, while
keeping the output lossless (assuming the fields are matched properly).
Typically, a field-aware denoiser, or brightness/contrast adjustments
can help.
Note that this filter uses the same algorithms as TIVTC/TFM (AviSynth
project) and VIVTC/VFM (VapourSynth project). The latter is a
lighter clone of TFM, on which "fieldmatch" is based. While the
semantics and usage are very close, some behaviour and option names
can differ.
The decimate filter currently only works for constant frame rate input.
If your input has mixed telecined (30fps) and progressive content with
a lower framerate like 24fps use the following filterchain to produce
the necessary cfr stream:
"dejudder,fps=30000/1001,fieldmatch,decimate".
The filter accepts the following options:
order
Specify the assumed field order of the input stream. Available
values are:
auto
Auto detect parity (use FFmpeg's internal parity value).
bff Assume bottom field first.
tff Assume top field first.
Note that it is sometimes recommended not to trust the parity
announced by the stream.
Default value is auto.
mode
Set the matching mode or strategy to use. pc mode is the safest in
the sense that it won't risk creating jerkiness due to duplicate
frames when possible, but if there are bad edits or blended fields
it will end up outputting combed frames when a good match might
actually exist. On the other hand, pcn_ub mode is the most risky in
terms of creating jerkiness, but will almost always find a good
frame if there is one. The other values are all somewhere in
between pc and pcn_ub in terms of risking jerkiness and creating
duplicate frames versus finding good matches in sections with bad
edits, orphaned fields, blended fields, etc.
More details about p/c/n/u/b are available in p/c/n/u/b meaning
section.
Available values are:
pc 2-way matching (p/c)
pc_n
2-way matching, and trying 3rd match if still combed (p/c + n)
pc_u
2-way matching, and trying 3rd match (same order) if still
combed (p/c + u)
pc_n_ub
2-way matching, trying 3rd match if still combed, and trying
4th/5th matches if still combed (p/c + n + u/b)
pcn 3-way matching (p/c/n)
pcn_ub
3-way matching, and trying 4th/5th matches if all 3 of the
original matches are detected as combed (p/c/n + u/b)
The parentheses at the end indicate the matches that would be used
for that mode assuming order=tff (and field on auto or top).
In terms of speed pc mode is by far the fastest and pcn_ub is the
slowest.
Default value is pc_n.
ppsrc
Mark the main input stream as a pre-processed input, and enable the
secondary input stream as the clean source to pick the fields from.
See the filter introduction for more details. It is similar to the
clip2 feature from VFM/TFM.
Default value is 0 (disabled).
field
Set the field to match from. It is recommended to set this to the
same value as order unless you experience matching failures with
that setting. In certain circumstances changing the field that is
used to match from can have a large impact on matching performance.
Available values are:
auto
Automatic (same value as order).
bottom
Match from the bottom field.
top Match from the top field.
Default value is auto.
mchroma
Set whether or not chroma is included during the match comparisons.
In most cases it is recommended to leave this enabled. You should
set this to 0 only if your clip has bad chroma problems such as
heavy rainbowing or other artifacts. Setting this to 0 could also
be used to speed things up at the cost of some accuracy.
Default value is 1.
y0
y1 These define an exclusion band which excludes the lines between y0
and y1 from being included in the field matching decision. An
exclusion band can be used to ignore subtitles, a logo, or other
things that may interfere with the matching. y0 sets the starting
scan line and y1 sets the ending line; all lines in between y0 and
y1 (including y0 and y1) will be ignored. Setting y0 and y1 to the
same value will disable the feature. y0 and y1 default to 0.
scthresh
Set the scene change detection threshold as a percentage of maximum
change on the luma plane. Good values are in the "[8.0, 14.0]"
range. Scene change detection is only relevant in case
combmatch=sc. The range for scthresh is "[0.0, 100.0]".
Default value is 12.0.
combmatch
When combmatch is not none, "fieldmatch" will take into account the
combed scores of matches when deciding what match to use as the
final match. Available values are:
none
No final matching based on combed scores.
sc Combed scores are only used when a scene change is detected.
full
Use combed scores all the time.
Default is sc.
combdbg
Force "fieldmatch" to calculate the combed metrics for certain
matches and print them. This setting is known as micout in TFM/VFM
vocabulary. Available values are:
none
No forced calculation.
pcn Force p/c/n calculations.
pcnub
Force p/c/n/u/b calculations.
Default value is none.
cthresh
This is the area combing threshold used for combed frame detection.
This essentially controls how "strong" or "visible" combing must be
to be detected. Larger values mean combing must be more visible
and smaller values mean combing can be less visible or strong and
still be detected. Valid settings are from "-1" (every pixel will
be detected as combed) to 255 (no pixel will be detected as
combed). This is basically a pixel difference value. A good range
is "[8, 12]".
Default value is 9.
chroma
Sets whether or not chroma is considered in the combed frame
decision. Only disable this if your source has chroma problems
(rainbowing, etc.) that are causing problems for the combed frame
detection with chroma enabled. Actually, using chroma=0 is usually
more reliable, except for the case where there is chroma-only
combing in the source.
Default value is 0.
blockx
blocky
Respectively set the x-axis and y-axis size of the window used
during combed frame detection. This has to do with the size of the
area in which combpel pixels are required to be detected as combed
for a frame to be declared combed. See the combpel parameter
description for more info. Possible values are any number that is
a power of 2 starting at 4 and going up to 512.
Default value is 16.
combpel
The number of combed pixels inside any of the blocky by blockx size
blocks on the frame for the frame to be detected as combed. While
cthresh controls how "visible" the combing must be, this setting
controls "how much" combing there must be in any localized area (a
window defined by the blockx and blocky settings) on the frame.
Minimum value is 0 and maximum is "blocky x blockx" (at which point
no frames will ever be detected as combed). This setting is known
as MI in TFM/VFM vocabulary.
Default value is 80.
p/c/n/u/b meaning
p/c/n
We assume the following telecined stream:
Top fields: 1 2 2 3 4
Bottom fields: 1 2 3 4 4
The numbers correspond to the progressive frame the fields relate to.
Here, the first two frames are progressive, the 3rd and 4th are combed,
and so on.
When "fieldmatch" is configured to run a matching from bottom
(field=bottom) this is how this input stream gets transformed:
Input stream:
T 1 2 2 3 4
B 1 2 3 4 4 <-- matching reference
Matches: c c n n c
Output stream:
T 1 2 3 4 4
B 1 2 3 4 4
As a result of the field matching, we can see that some frames get
duplicated. To perform a complete inverse telecine, you need to rely
on a decimation filter after this operation. See for instance the
decimate filter.
The same operation now matching from top fields (field=top) looks like
this:
Input stream:
T 1 2 2 3 4 <-- matching reference
B 1 2 3 4 4
Matches: c c p p c
Output stream:
T 1 2 2 3 4
B 1 2 2 3 4
In these examples, we can see what p, c and n mean; basically, they
refer to the frame and field of the opposite parity:
o p matches the field of the opposite parity in the previous frame
o c matches the field of the opposite parity in the current frame
o n matches the field of the opposite parity in the next frame
u/b
The u and b matching are a bit special in the sense that they match
from the opposite parity flag. In the following examples, we assume
that we are currently matching the 2nd frame (Top:2, bottom:2).
According to the match, an 'x' is placed above and below each
matched field.
With bottom matching (field=bottom):
Match:         c         p         n         b         u
               x       x             x       x         x
Top          1 2 2     1 2 2     1 2 2     1 2 2     1 2 2
Bottom       1 2 3     1 2 3     1 2 3     1 2 3     1 2 3
               x         x         x       x             x
Output frames:
               2         1         2         2         2
               2         2         2         1         3
With top matching (field=top):
Match:         c         p         n         b         u
               x         x         x       x             x
Top          1 2 2     1 2 2     1 2 2     1 2 2     1 2 2
Bottom       1 2 3     1 2 3     1 2 3     1 2 3     1 2 3
               x       x             x       x         x
Output frames:
               2         2         2         1         2
               2         1         3         2         2
Examples
o Simple IVTC of a top field first telecined stream:
fieldmatch=order=tff:combmatch=none, decimate
o Advanced IVTC, with fallback on yadif for still combed frames:
fieldmatch=order=tff:combmatch=full, yadif=deint=interlaced, decimate
fieldorder
Transform the field order of the input video.
It accepts the following parameters:
order
The output field order. Valid values are tff for top field first or
bff for bottom field first.
The default value is tff.
The transformation is done by shifting the picture content up or down
by one line, and filling the remaining line with appropriate picture
content. This method is consistent with most broadcast field order
converters.
If the input video is not flagged as being interlaced, or it is already
flagged as being of the required output field order, then this filter
does not alter the incoming video.
It is very useful when converting to or from PAL DV material, which is
bottom field first.
For example:
ffmpeg -i in.vob -vf "fieldorder=bff" out.dv
fifo, afifo
Buffer input images and send them when they are requested.
It is mainly useful when auto-inserted by the libavfilter framework.
It does not take parameters.
fillborders
Fill borders of the input video, without changing video stream
dimensions. Sometimes video can have garbage at the four edges and
you may not want to crop the video to keep its size a multiple of
some number.
This filter accepts the following options:
left
Number of pixels to fill from left border.
right
Number of pixels to fill from right border.
top Number of pixels to fill from top border.
bottom
Number of pixels to fill from bottom border.
mode
Set fill mode.
It accepts the following values:
smear
fill pixels using outermost pixels
mirror
fill pixels using mirroring
fixed
fill pixels with constant value
Default is smear.
color
Set color for pixels in fixed mode. Default is black.
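For example, an illustrative command that replaces a 10-pixel
garbage border with mirrored picture content:
fillborders=left=10:right=10:top=10:bottom=10:mode=mirror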
find_rect
Find a rectangular object
It accepts the following options:
object
Filepath of the object image, needs to be in gray8.
threshold
Detection threshold, default is 0.5.
mipmaps
Number of mipmaps, default is 3.
xmin, ymin, xmax, ymax
Specifies the rectangle in which to search.
Examples
o Find a rectangular object in a video and cover it with an image
using ffmpeg:
ffmpeg -i file.ts -vf find_rect=newref.pgm,cover_rect=cover.jpg:mode=cover new.mkv
cover_rect
Cover a rectangular object
It accepts the following options:
cover
Filepath of the optional cover image, needs to be in yuv420.
mode
Set covering mode.
It accepts the following values:
cover
cover it by the supplied image
blur
cover it by interpolating the surrounding pixels
Default value is blur.
Examples
o Cover a rectangular object found by find_rect with an image using
ffmpeg:
ffmpeg -i file.ts -vf find_rect=newref.pgm,cover_rect=cover.jpg:mode=cover new.mkv
floodfill
Flood-fill an area that has the same pixel component values with
new values, starting from the pixel at the given coordinates.
It accepts the following options:
x Set pixel x coordinate.
y Set pixel y coordinate.
s0 Set source #0 component value.
s1 Set source #1 component value.
s2 Set source #2 component value.
s3 Set source #3 component value.
d0 Set destination #0 component value.
d1 Set destination #1 component value.
d2 Set destination #2 component value.
d3 Set destination #3 component value.
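For example, a sketch that makes an opaque black region connected
to the top-left corner fully transparent; it assumes RGBA input, so
the components s0-s3 map to R, G, B and A:
format=rgba,floodfill=x=0:y=0:s0=0:s1=0:s2=0:s3=255:d0=0:d1=0:d2=0:d3=0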
format
Convert the input video to one of the specified pixel formats.
Libavfilter will try to pick one that is suitable as input to the next
filter.
It accepts the following parameters:
pix_fmts
A '|'-separated list of pixel format names, such as
"pix_fmts=yuv420p|monow|rgb24".
Examples
o Convert the input video to the yuv420p format
format=pix_fmts=yuv420p
o Convert the input video to any of the formats in the list:
format=pix_fmts=yuv420p|yuv444p|yuv410p
fps
Convert the video to specified constant frame rate by duplicating or
dropping frames as necessary.
It accepts the following parameters:
fps The desired output frame rate. The default is 25.
start_time
Assume the first PTS should be the given value, in seconds. This
allows for padding/trimming at the start of stream. By default, no
assumption is made about the first frame's expected PTS, so no
padding or trimming is done. For example, this could be set to 0
to pad the beginning with duplicates of the first frame if a video
stream starts after the audio stream or to trim any frames with a
negative PTS.
round
Timestamp (PTS) rounding method.
Possible values are:
zero
round towards 0
inf round away from 0
down
round towards -infinity
up round towards +infinity
near
round to nearest
The default is "near".
eof_action
Action performed when reading the last frame.
Possible values are:
round
Use same timestamp rounding method as used for other frames.
pass
Pass through last frame if input duration has not been reached
yet.
The default is "round".
Alternatively, the options can be specified as a flat string:
fps[:start_time[:round]].
See also the setpts filter.
Examples
o A typical usage in order to set the fps to 25:
fps=fps=25
o Set the fps to 24, using the rate abbreviation and rounding
method round to nearest:
fps=fps=film:round=near
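o Pad the beginning of the stream with duplicates of the first
frame, assuming the stream should start at PTS 0 (an illustrative
use of start_time):
fps=fps=25:start_time=0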
framepack
Pack two different video streams into a stereoscopic video, setting
proper metadata on supported codecs. The two views should have the same
size and framerate and processing will stop when the shorter video
ends. Please note that you may conveniently adjust view properties with
the scale and fps filters.
It accepts the following parameters:
format
The desired packing format. Supported values are:
sbs The views are next to each other (default).
tab The views are on top of each other.
lines
The views are packed by line.
columns
The views are packed by column.
frameseq
The views are temporally interleaved.
Some examples:
# Convert left and right views into a frame-sequential video
ffmpeg -i LEFT -i RIGHT -filter_complex framepack=frameseq OUTPUT
# Convert views into a side-by-side video with the same output resolution as the input
ffmpeg -i LEFT -i RIGHT -filter_complex [0:v]scale=w=iw/2[left],[1:v]scale=w=iw/2[right],[left][right]framepack=sbs OUTPUT
framerate
Change the frame rate by interpolating new video output frames from the
source frames.
This filter is not designed to function correctly with interlaced
media. If you wish to change the frame rate of interlaced media then
you are required to deinterlace before this filter and re-interlace
after this filter.
A description of the accepted options follows.
fps Specify the output frames per second. This option can also be
specified as a value alone. The default is 50.
interp_start
Specify the start of a range where the output frame will be created
as a linear interpolation of two frames. The range is [0-255], the
default is 15.
interp_end
Specify the end of a range where the output frame will be created
as a linear interpolation of two frames. The range is [0-255], the
default is 240.
scene
Specify the level at which a scene change is detected as a value
between 0 and 100 to indicate a new scene; a low value reflects a
low probability for the current frame to introduce a new scene,
while a higher value means the current frame is more likely to be
one. The default is 8.2.
flags
Specify flags influencing the filter process.
Available value for flags is:
scene_change_detect, scd
Enable scene change detection using the value of the option
scene. This flag is enabled by default.
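For example, an illustrative conversion of footage to 60 fps with
the default interpolation settings:
framerate=fps=60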
framestep
Select one frame every N frames.
This filter accepts the following option:
step
Select frame after every "step" frames. Allowed values are
positive integers higher than 0. Default value is 1.
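For example, to keep one frame out of every 25:
framestep=step=25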
frei0r
Apply a frei0r effect to the input video.
To enable the compilation of this filter, you need to install the
frei0r header and configure FFmpeg with "--enable-frei0r".
It accepts the following parameters:
filter_name
The name of the frei0r effect to load. If the environment variable
FREI0R_PATH is defined, the frei0r effect is searched for in each
of the directories specified by the colon-separated list in
FREI0R_PATH. Otherwise, the standard frei0r paths are searched, in
this order: HOME/.frei0r-1/lib/, /usr/local/lib/frei0r-1/,
/usr/lib/frei0r-1/.
filter_params
A '|'-separated list of parameters to pass to the frei0r effect.
A frei0r effect parameter can be a boolean (its value is either "y" or
"n"), a double, a color (specified as R/G/B, where R, G, and B are
floating point numbers between 0.0 and 1.0, inclusive) or a color
description as specified in the "Color" section in the ffmpeg-utils
manual, a position (specified as X/Y, where X and Y are floating point
numbers) and/or a string.
The number and types of parameters depend on the loaded effect. If an
effect parameter is not specified, the default value is set.
Examples
o Apply the distort0r effect, setting the first two double
parameters:
frei0r=filter_name=distort0r:filter_params=0.5|0.01
o Apply the colordistance effect, taking a color as the first
parameter:
frei0r=colordistance:0.2/0.3/0.4
frei0r=colordistance:violet
frei0r=colordistance:0x112233
o Apply the perspective effect, specifying the top left and top right
image positions:
frei0r=perspective:0.2/0.2|0.8/0.2
fspp
Apply fast and simple postprocessing. It is a faster version of spp.
It splits (I)DCT into horizontal/vertical passes. Unlike the simple
postprocessing filter, one of them is performed once per block, not
per pixel. This allows for much higher speed.
The filter accepts the following options:
quality
Set quality. This option defines the number of levels for
averaging. It accepts an integer in the range 4-5. Default value is
4.
qp Force a constant quantization parameter. It accepts an integer in
range 0-63. If not set, the filter will use the QP from the video
stream (if available).
strength
Set filter strength. It accepts an integer in range -15 to 32.
Lower values mean more details but also more artifacts, while
higher values make the image smoother but also blurrier. Default
value is 0 (PSNR optimal).
use_bframe_qp
Enable the use of the QP from the B-Frames if set to 1. Using this
option may cause flicker since the B-Frames often have larger QP.
Default is 0 (not enabled).
gblur
Apply Gaussian blur filter.
The filter accepts the following options:
sigma
Set horizontal sigma, standard deviation of Gaussian blur. Default
is 0.5.
steps
Set number of steps for Gaussian approximation. Default is 1.
planes
Set which planes to filter. By default all planes are filtered.
sigmaV
Set vertical sigma, if negative it will be same as "sigma".
Default is "-1".
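For example, an illustrative anisotropic blur that smears more
horizontally than vertically:
gblur=sigma=4:sigmaV=1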
geq
Apply generic equation to each pixel.
The filter accepts the following options:
lum_expr, lum
Set the luminance expression.
cb_expr, cb
Set the chrominance blue expression.
cr_expr, cr
Set the chrominance red expression.
alpha_expr, a
Set the alpha expression.
red_expr, r
Set the red expression.
green_expr, g
Set the green expression.
blue_expr, b
Set the blue expression.
The colorspace is selected according to the specified options. If one
of the lum_expr, cb_expr, or cr_expr options is specified, the filter
will automatically select a YCbCr colorspace. If one of the red_expr,
green_expr, or blue_expr options is specified, it will select an RGB
colorspace.
If one of the chrominance expressions is not defined, it falls back
on the other one. If no alpha expression is specified it will
evaluate to opaque value. If none of the chrominance expressions
are specified, they will evaluate to the luminance expression.
The expressions can use the following variables and functions:
N The sequential number of the filtered frame, starting from 0.
X
Y The coordinates of the current sample.
W
H The width and height of the image.
SW
SH Width and height scale depending on the currently filtered plane.
It is the ratio between the corresponding luma plane number of
pixels and the current plane ones. E.g. for YUV4:2:0 the values are
"1,1" for the luma plane, and "0.5,0.5" for chroma planes.
T Time of the current frame, expressed in seconds.
p(x, y)
Return the value of the pixel at location (x,y) of the current
plane.
lum(x, y)
Return the value of the pixel at location (x,y) of the luminance
plane.
cb(x, y)
Return the value of the pixel at location (x,y) of the blue-
difference chroma plane. Return 0 if there is no such plane.
cr(x, y)
Return the value of the pixel at location (x,y) of the red-
difference chroma plane. Return 0 if there is no such plane.
r(x, y)
g(x, y)
b(x, y)
Return the value of the pixel at location (x,y) of the
red/green/blue component. Return 0 if there is no such component.
alpha(x, y)
Return the value of the pixel at location (x,y) of the alpha plane.
Return 0 if there is no such plane.
For functions, if x and y are outside the area, the value will be
automatically clipped to the closest edge.
Examples
o Flip the image horizontally:
geq=p(W-X\,Y)
o Generate a bidimensional sine wave, with angle "PI/3" and a
wavelength of 100 pixels:
geq=128 + 100*sin(2*(PI/100)*(cos(PI/3)*(X-50*T) + sin(PI/3)*Y)):128:128
o Generate a fancy enigmatic moving light:
nullsrc=s=256x256,geq=random(1)/hypot(X-cos(N*0.07)*W/2-W/2\,Y-sin(N*0.09)*H/2-H/2)^2*1000000*sin(N*0.02):128:128
o Generate a quick emboss effect:
format=gray,geq=lum_expr='(p(X,Y)+(256-p(X-4,Y-4)))/2'
o Modify RGB components depending on pixel position:
geq=r='X/W*r(X,Y)':g='(1-X/W)*g(X,Y)':b='(H-Y)/H*b(X,Y)'
o Create a radial gradient that is the same size as the input (also
see the vignette filter):
geq=lum=255*gauss((X/W-0.5)*3)*gauss((Y/H-0.5)*3)/gauss(0)/gauss(0),format=gray
gradfun
Fix the banding artifacts that are sometimes introduced into nearly
flat regions by truncation to 8-bit color depth. Interpolate the
gradients that should go where the bands are, and dither them.
It is designed for playback only. Do not use it prior to lossy
compression, because compression tends to lose the dither and bring
back the bands.
It accepts the following parameters:
strength
The maximum amount by which the filter will change any one pixel.
This is also the threshold for detecting nearly flat regions.
Acceptable values range from 0.51 to 64; the default value is 1.2.
Out-of-range values will be clipped to the valid range.
radius
The neighborhood to fit the gradient to. A larger radius makes for
smoother gradients, but also prevents the filter from modifying the
pixels near detailed regions. Acceptable values are 8-32; the
default value is 16. Out-of-range values will be clipped to the
valid range.
Alternatively, the options can be specified as a flat string:
strength[:radius]
Examples
o Apply the filter with a 3.5 strength and radius of 8:
gradfun=3.5:8
o Specify radius, omitting the strength (which will fall-back to the
default value):
gradfun=radius=8
graphmonitor, agraphmonitor
Show various filtergraph stats.
With this filter one can debug a complete filtergraph, especially
issues with links filling up with queued frames.
The filter accepts the following options:
size, s
Set video output size. Default is hd720.
opacity, o
Set video opacity. Default is 0.9. Allowed range is from 0 to 1.
mode, m
Set output mode, can be full or compact. In compact mode only
filters with some queued frames display their stats.
flags, f
Set flags which enable which stats are shown in video.
Available values for flags are:
queue
Display number of queued frames in each link.
frame_count_in
Display number of frames taken from filter.
frame_count_out
Display number of frames given out from filter.
pts Display current filtered frame pts.
time
Display current filtered frame time.
timebase
Display time base for filter link.
format
Display used format for filter link.
size
Display video size or number of audio channels in case of audio
used by filter link.
rate
Display video frame rate or sample rate in case of audio used
by filter link.
rate, r
Set upper limit for video rate of output stream. Default value is
25. This guarantees that the output video frame rate will not be
higher than this value.
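For example, an illustrative invocation that watches queue growth
and timestamps in a test graph (flags are combined with '+'):
ffplay -f lavfi "testsrc2=s=1280x720,graphmonitor=mode=compact:flags=queue+pts"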
greyedge
A color constancy variation filter which estimates scene illumination
via grey edge algorithm and corrects the scene colors accordingly.
The filter accepts the following options:
difford
The order of differentiation to be applied on the scene. Must be
chosen in the range [0,2] and default value is 1.
minknorm
The Minkowski parameter to be used for calculating the Minkowski
distance. Must be chosen in the range [0,20] and default value is
1. Set to 0 for getting max value instead of calculating Minkowski
distance.
sigma
The standard deviation of Gaussian blur to be applied on the scene.
Must be chosen in the range [0,1024.0] and default value is 1.
floor(sigma * break_off_sigma(3)) can't be equal to 0 if difford
is greater than 0.
Examples
o Grey Edge:
greyedge=difford=1:minknorm=5:sigma=2
o Max Edge:
greyedge=difford=1:minknorm=0:sigma=2
haldclut
Apply a Hald CLUT to a video stream.
First input is the video stream to process, and second one is the Hald
CLUT. The Hald CLUT input can be a simple picture or a complete video
stream.
The filter accepts the following options:
shortest
Force termination when the shortest input terminates. Default is 0.
repeatlast
Continue applying the last CLUT after the end of the stream. A
value of 0 disables the filter after the last frame of the CLUT is
reached. Default is 1.
"haldclut" also has the same interpolation options as lut3d (both
filters share the same internals).
More information about the Hald CLUT can be found on Eskil
Steenberg's website (Hald CLUT author).
Workflow examples
Hald CLUT video stream
Generate an identity Hald CLUT stream altered with various effects:
ffmpeg -f lavfi -i haldclutsrc=8 -vf "hue=H=2*PI*t:s=sin(2*PI*t)+1, curves=cross_process" -t 10 -c:v ffv1 clut.nut
Note: make sure you use a lossless codec.
Then use it with "haldclut" to apply it on some random stream:
ffmpeg -f lavfi -i mandelbrot -i clut.nut -filter_complex '[0][1] haldclut' -t 20 mandelclut.mkv
The Hald CLUT will be applied to the first 10 seconds (duration of
clut.nut), then the latest picture of that CLUT stream will be applied
to the remaining frames of the "mandelbrot" stream.
Hald CLUT with preview
A Hald CLUT is supposed to be a square image of "Level*Level*Level" by
"Level*Level*Level" pixels. For a given Hald CLUT, FFmpeg will select
the biggest possible square starting at the top left of the picture.
The remaining padding pixels (bottom or right) will be ignored. This
area can be used to add a preview of the Hald CLUT.
Typically, the following generated Hald CLUT will be supported by the
"haldclut" filter:
ffmpeg -f lavfi -i haldclutsrc=8 -vf "
pad=iw+320 [padded_clut];
smptebars=s=320x256, split [a][b];
[padded_clut][a] overlay=W-320:h, curves=color_negative [main];
[main][b] overlay=W-320" -frames:v 1 clut.png
It contains the original and a preview of the effect of the CLUT:
SMPTE color bars are displayed in the top right, and below them the
same color bars processed by the color changes.
Then, the effect of this Hald CLUT can be visualized with:
ffplay input.mkv -vf "movie=clut.png, [in] haldclut"
hflip
Flip the input video horizontally.
For example, to horizontally flip the input video with ffmpeg:
ffmpeg -i in.avi -vf "hflip" out.avi
histeq
This filter applies a global color histogram equalization on a per-
frame basis.
It can be used to correct video that has a compressed range of pixel
intensities. The filter redistributes the pixel intensities to
equalize their distribution across the intensity range. It may be
viewed as an "automatically adjusting contrast filter". This filter is
useful only for correcting degraded or poorly captured source video.
The filter accepts the following options:
strength
Determine the amount of equalization to be applied. As the
strength is reduced, the distribution of pixel intensities more
and more approaches that of the input frame. The value must be a
float number in the range [0,1] and defaults to 0.200.
intensity
Set the maximum intensity that can be generated and scale the output
values appropriately. The strength should be set as desired and
then the intensity can be limited if needed to avoid washing-out.
The value must be a float number in the range [0,1] and defaults to
0.210.
antibanding
Set the antibanding level. If enabled the filter will randomly vary
the luminance of output pixels by a small amount to avoid banding
of the histogram. Possible values are "none", "weak" or "strong".
It defaults to "none".
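For example, an illustrative mild equalization with weak
antibanding:
histeq=strength=0.1:intensity=0.2:antibanding=weak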
histogram
Compute and draw a color distribution histogram for the input video.
The computed histogram is a representation of the color component
distribution in an image.
The standard histogram displays the color component distribution in
an image, drawing a graph for each color component. It shows the
distribution of the Y, U, V, A or R, G, B components, depending on
the input format, in the current frame. Below each graph a color
component scale meter is shown.
The filter accepts the following options:
level_height
Set height of level. Default value is 200. Allowed range is [50,
2048].
scale_height
Set height of color scale. Default value is 12. Allowed range is
[0, 40].
display_mode
Set display mode. It accepts the following values:
stack
Per color component graphs are placed below each other.
parade
Per color component graphs are placed side by side.
overlay
Presents information identical to that in the "parade", except
that the graphs representing color components are superimposed
directly over one another.
Default is "stack".
levels_mode
Set mode. Can be either "linear", or "logarithmic". Default is
"linear".
components
Set what color components to display. Default is 7.
fgopacity
Set foreground opacity. Default is 0.7.
bgopacity
Set background opacity. Default is 0.5.
Examples
o Calculate and draw histogram:
ffplay -i input -vf histogram
hqdn3d
This is a high precision/quality 3d denoise filter. It aims to reduce
image noise, producing smooth images and making still images really
still. It should enhance compressibility.
It accepts the following optional parameters:
luma_spatial
A non-negative floating point number which specifies spatial luma
strength. It defaults to 4.0.
chroma_spatial
A non-negative floating point number which specifies spatial chroma
strength. It defaults to 3.0*luma_spatial/4.0.
luma_tmp
A floating point number which specifies luma temporal strength. It
defaults to 6.0*luma_spatial/4.0.
chroma_tmp
A floating point number which specifies chroma temporal strength.
It defaults to luma_tmp*chroma_spatial/luma_spatial.
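For example, setting only the spatial luma strength and letting the
remaining strengths follow from their defaults as described above:
hqdn3d=luma_spatial=8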
hwdownload
Download hardware frames to system memory.
The input must be in hardware frames, and the output a non-hardware
format. Not all formats will be supported on the output - it may be
necessary to insert an additional format filter immediately following
in the graph to get the output in a supported format.
hwmap
Map hardware frames to system memory or to another device.
This filter has several different modes of operation; which one is used
depends on the input and output formats:
o Hardware frame input, normal frame output
Map the input frames to system memory and pass them to the output.
If the original hardware frame is later required (for example,
after overlaying something else on part of it), the hwmap filter
can be used again in the next mode to retrieve it.
o Normal frame input, hardware frame output
If the input is actually a software-mapped hardware frame, then
unmap it - that is, return the original hardware frame.
Otherwise, a device must be provided. Create new hardware surfaces
on that device for the output, then map them back to the software
format at the input and give those frames to the preceding filter.
This will then act like the hwupload filter, but may be able to
avoid an additional copy when the input is already in a compatible
format.
o Hardware frame input and output
A device must be supplied for the output, either directly or with
the derive_device option. The input and output devices must be of
different types and compatible - the exact meaning of this is
system-dependent, but typically it means that they must refer to
the same underlying hardware context (for example, refer to the
same graphics card).
If the input frames were originally created on the output device,
then unmap to retrieve the original frames.
Otherwise, map the frames to the output device - create new
hardware frames on the output corresponding to the frames on the
input.
The following additional parameters are accepted:
mode
Set the frame mapping mode. Some combination of:
read
The mapped frame should be readable.
write
The mapped frame should be writeable.
overwrite
The mapping will always overwrite the entire frame.
This may improve performance in some cases, as the original
contents of the frame need not be loaded.
direct
The mapping must not involve any copying.
Indirect mappings to copies of frames are created in some cases
where either direct mapping is not possible or it would have
unexpected properties. Setting this flag ensures that the
mapping is direct and will fail if that is not possible.
Defaults to read+write if not specified.
derive_device type
Rather than using the device supplied at initialisation, instead
derive a new device of type type from the device the input frames
exist on.
reverse
In a hardware to hardware mapping, map in reverse - create frames
in the sink and map them back to the source. This may be necessary
in some cases where a mapping in one direction is required but only
the opposite direction is supported by the devices being used.
This option is dangerous - it may break the preceding filter in
undefined ways if there are any additional constraints on that
filter's output. Do not use it without fully understanding the
implications of its use.
hwupload
Upload system memory frames to hardware surfaces.
The device to upload to must be supplied when the filter is
initialised. If using ffmpeg, select the appropriate device with the
-filter_hw_device option.
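An illustrative VAAPI transcode, assuming a device node at
/dev/dri/renderD128 and the h264_vaapi encoder being available:
ffmpeg -init_hw_device vaapi=gpu:/dev/dri/renderD128 -filter_hw_device gpu -i input.mp4 -vf "format=nv12,hwupload" -c:v h264_vaapi output.mp4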
hwupload_cuda
Upload system memory frames to a CUDA device.
It accepts the following optional parameters:
device
The number of the CUDA device to use
hqx
Apply a high-quality magnification filter designed for pixel art. This
filter was originally created by Maxim Stepin.
It accepts the following option:
n Set the scaling dimension: 2 for "hq2x", 3 for "hq3x" and 4 for
"hq4x". Default is 3.
hstack
Stack input videos horizontally.
All streams must be of same pixel format and of same height.
Note that this filter is faster than using the overlay and pad
filters to create the same output.
The filter accepts the following options:
inputs
Set number of input streams. Default is 2.
shortest
If set to 1, force the output to terminate when the shortest input
terminates. Default value is 0.
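For example, to place two videos of the same height side by side
(illustrative file names):
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2" output.mp4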
hue
Modify the hue and/or the saturation of the input.
It accepts the following parameters:
h Specify the hue angle as a number of degrees. It accepts an
expression, and defaults to "0".
s Specify the saturation in the [-10,10] range. It accepts an
expression and defaults to "1".
H Specify the hue angle as a number of radians. It accepts an
expression, and defaults to "0".
b Specify the brightness in the [-10,10] range. It accepts an
expression and defaults to "0".
h and H are mutually exclusive, and can't be specified at the same
time.
The b, h, H and s option values are expressions containing the
following constants:
n frame count of the input frame starting from 0
pts presentation timestamp of the input frame expressed in time base
units
r frame rate of the input video, NAN if the input frame rate is
unknown
t timestamp expressed in seconds, NAN if the input timestamp is
unknown
tb time base of the input video
Examples
o Set the hue to 90 degrees and the saturation to 1.0:
hue=h=90:s=1
o Same command but expressing the hue in radians:
hue=H=PI/2:s=1
o Rotate hue and make the saturation swing between 0 and 2 over a
period of 1 second:
hue="H=2*PI*t: s=sin(2*PI*t)+1"
o Apply a 3 seconds saturation fade-in effect starting at 0:
hue="s=min(t/3\,1)"
The general fade-in expression can be written as:
hue="s=min(0\, max((t-START)/DURATION\, 1))"
o Apply a 3 seconds saturation fade-out effect starting at 5 seconds:
hue="s=max(0\, min(1\, (8-t)/3))"
The general fade-out expression can be written as:
hue="s=max(0\, min(1\, (START+DURATION-t)/DURATION))"
Commands
This filter supports the following commands:
b
s
h
H Modify the hue and/or the saturation and/or brightness of the input
video. The command accepts the same syntax as the corresponding
option.
If the specified expression is not valid, it is kept at its current
value.
hysteresis
Grow first stream into second stream by connecting components. This
makes it possible to build more robust edge masks.
This filter accepts the following options:
planes
Set which planes will be processed as bitmap, unprocessed planes
will be copied from the first stream. The default value is 0xf, so
all planes will be processed.
threshold
Set threshold which is used in filtering. If a pixel component
value is higher than this value, the filter algorithm for
connecting components is activated. Default value is 0.
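For example, a sketch that grows a strict edge mask into a
permissive one, keeping only weak edges connected to strong ones
(the edgedetect thresholds are illustrative):
ffmpeg -i input.mp4 -filter_complex "[0:v]split[a][b];[a]edgedetect=low=0.3:high=0.5[strong];[b]edgedetect=low=0.05:high=0.1[weak];[strong][weak]hysteresis" output.mp4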
idet
Detect video interlacing type.
This filter tries to detect if the input frames are interlaced,
progressive, top or bottom field first. It will also try to detect
fields that are repeated between adjacent frames (a sign of telecine).
Single frame detection considers only immediately adjacent frames when
classifying each frame. Multiple frame detection incorporates the
classification history of previous frames.
The filter will log these metadata values:
single.current_frame
Detected type of current frame using single-frame detection. One
of: ``tff'' (top field first), ``bff'' (bottom field first),
``progressive'', or ``undetermined''
single.tff
Cumulative number of frames detected as top field first using
single-frame detection.
multiple.tff
Cumulative number of frames detected as top field first using
multiple-frame detection.
single.bff
Cumulative number of frames detected as bottom field first using
single-frame detection.
multiple.current_frame
Detected type of current frame using multiple-frame detection. One
of: ``tff'' (top field first), ``bff'' (bottom field first),
``progressive'', or ``undetermined''
multiple.bff
Cumulative number of frames detected as bottom field first using
multiple-frame detection.
single.progressive
Cumulative number of frames detected as progressive using single-
frame detection.
multiple.progressive
Cumulative number of frames detected as progressive using multiple-
frame detection.
single.undetermined
Cumulative number of frames that could not be classified using
single-frame detection.
multiple.undetermined
Cumulative number of frames that could not be classified using
multiple-frame detection.
repeated.current_frame
Which field in the current frame is repeated from the last. One of
``neither'', ``top'', or ``bottom''.
repeated.neither
Cumulative number of frames with no repeated field.
repeated.top
Cumulative number of frames with the top field repeated from the
previous frame's top field.
repeated.bottom
Cumulative number of frames with the bottom field repeated from the
previous frame's bottom field.
The filter accepts the following options:
intl_thres
Set interlacing threshold.
prog_thres
Set progressive threshold.
rep_thres
Threshold for repeated field detection.
half_life
Number of frames after which a given frame's contribution to the
statistics is halved (i.e., it contributes only 0.5 to its
classification). The default of 0 means that all frames seen are
given full weight of 1.0 forever.
analyze_interlaced_flag
When this is not 0, idet will use the specified number of frames to
determine if the interlaced flag is accurate; it will not count
undetermined frames. If the flag is found to be accurate it will be
used without any further computations; if it is found to be
inaccurate it will be cleared without any further computations.
This allows inserting the idet filter as a low-computation method
to clean up the interlaced flag.
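For example, an illustrative probe run that logs the detection
statistics without writing any output file:
ffmpeg -i input.mkv -an -vf idet -frames:v 500 -f null -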
il
Deinterleave or interleave fields.
This filter allows one to process interlaced image fields without
deinterlacing them. Deinterleaving splits the input frame into 2 fields
(so called half pictures). Odd lines are moved to the top half of the
output image, even lines to the bottom half. You can process (filter)
them independently and then re-interleave them.
The filter accepts the following options:
luma_mode, l
chroma_mode, c
alpha_mode, a
Available values for luma_mode, chroma_mode and alpha_mode are:
none
Do nothing.
deinterleave, d
Deinterleave fields, placing one above the other.
interleave, i
Interleave fields. Reverse the effect of deinterleaving.
Default value is "none".
luma_swap, ls
chroma_swap, cs
alpha_swap, as
Swap luma/chroma/alpha fields. Exchange even & odd lines. Default
value is 0.
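For example, a sketch that deinterleaves the fields, filters both
half pictures in one pass, and re-interleaves them:
il=l=d:c=d,gblur=sigma=2,il=l=i:c=i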
inflate
Apply inflate effect to the video.
This filter replaces the pixel by the local (3x3) average by taking
into account only values higher than the pixel.
It accepts the following options:
threshold0
threshold1
threshold2
threshold3
Limit the maximum change for each plane. Default is 65535. If 0,
the plane will remain unchanged.
interlace
Simple interlacing filter from progressive contents. This interleaves
upper (or lower) lines from odd frames with lower (or upper) lines from
even frames, halving the frame rate and preserving image height.
Original        Original             New Frame
Frame 'j'      Frame 'j+1'             (tff)
==========      ===========       ==================
 Line 0  -------------------->    Frame 'j' Line 0
 Line 1          Line 1  ---->    Frame 'j+1' Line 1
 Line 2  --------------------->   Frame 'j' Line 2
 Line 3          Line 3  ---->    Frame 'j+1' Line 3
  ...               ...                   ...
New Frame + 1 will be generated by Frame 'j+2' and Frame 'j+3' and so on
It accepts the following optional parameters:
scan
This determines whether the interlaced frame is taken from the even
(tff - default) or odd (bff) lines of the progressive frame.
lowpass
Vertical lowpass filter to avoid twitter interlacing and reduce
moire patterns.
0, off
Disable vertical lowpass filter
1, linear
Enable linear filter (default)
2, complex
Enable complex filter. This will reduce twitter and moire
slightly less, but better retain detail and the subjective
impression of sharpness.
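For example, an illustrative invocation producing top-field-first
output with the complex lowpass filter:
interlace=scan=tff:lowpass=complex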
kerndeint
Deinterlace input video by applying Donald Graft's adaptive kernel
deinterlacing. It works on interlaced parts of a video to produce
progressive frames.
The description of the accepted parameters follows.
thresh
Set the threshold which affects the filter's tolerance when
determining if a pixel line must be processed. It must be an
integer in the range [0,255] and defaults to 10. A value of 0 will
result in applying the process to every pixel.
map Paint pixels exceeding the threshold value to white if set to 1.
Default is 0.
order
Set the fields order. Swap fields if set to 1, leave fields alone
if 0. Default is 0.
sharp
Enable additional sharpening if set to 1. Default is 0.
twoway
Enable twoway sharpening if set to 1. Default is 0.
Examples
o Apply default values:
kerndeint=thresh=10:map=0:order=0:sharp=0:twoway=0
o Enable additional sharpening:
kerndeint=sharp=1
o Paint processed pixels in white:
kerndeint=map=1
lenscorrection
Correct radial lens distortion
This filter can be used to correct for radial distortion as can result
from the use of wide angle lenses, and thereby re-rectify the image. To
find the right parameters one can use tools available for example as
part of opencv or simply trial-and-error. To use opencv use the
calibration sample (under samples/cpp) from the opencv sources and
extract the k1 and k2 coefficients from the resulting matrix.
Note that effectively the same filter is available in the open-source
tools Krita and Digikam from the KDE project.
In contrast to the vignette filter, which can also be used to
compensate lens errors, this filter corrects the distortion of the
image, whereas vignette corrects the brightness distribution, so you
may want to use both filters together in certain cases, though you will
have to take care of ordering, i.e. whether vignetting should be
applied before or after lens correction.
Options
The filter accepts the following options:
cx Relative x-coordinate of the focal point of the image, and thereby
the center of the distortion. This value has a range [0,1] and is
expressed as fractions of the image width. Default is 0.5.
cy Relative y-coordinate of the focal point of the image, and thereby
the center of the distortion. This value has a range [0,1] and is
expressed as fractions of the image height. Default is 0.5.
k1 Coefficient of the quadratic correction term. This value has a
range [-1,1]. 0 means no correction. Default is 0.
k2 Coefficient of the double quadratic correction term. This value has
a range [-1,1]. 0 means no correction. Default is 0.
The formula that generates the correction is:
r_src = r_tgt * (1 + k1 * (r_tgt / r_0)^2 + k2 * (r_tgt / r_0)^4)
where r_0 is half of the image diagonal and r_src and r_tgt are the
distances from the focal point in the source and target images,
respectively.
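As an illustration (the coefficients below are placeholders; real
values should come from calibration or trial-and-error as described
above), a mild correction centered on the frame could look like:
        lenscorrection=cx=0.5:cy=0.5:k1=-0.2:k2=-0.02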
lensfun
Apply lens correction via the lensfun library.
The "lensfun" filter requires the camera make, camera model, and lens
model to apply the lens correction. The filter will load the lensfun
database and query it to find the corresponding camera and lens entries
in the database. As long as these entries can be found with the given
options, the filter can perform corrections on frames. Note that
incomplete strings will result in the filter choosing the best match
with the given options, and the filter will output the chosen camera
and lens models (logged with level "info"). You must provide the make,
camera model, and lens model as they are required.
The filter accepts the following options:
make
The make of the camera (for example, "Canon"). This option is
required.
model
The model of the camera (for example, "Canon EOS 100D"). This
option is required.
lens_model
The model of the lens (for example, "Canon EF-S 18-55mm f/3.5-5.6
IS STM"). This option is required.
mode
The type of correction to apply. The following values are valid
options:
vignetting
Enables fixing lens vignetting.
geometry
Enables fixing lens geometry. This is the default.
subpixel
Enables fixing chromatic aberrations.
vig_geo
Enables fixing lens vignetting and lens geometry.
vig_subpixel
Enables fixing lens vignetting and chromatic aberrations.
distortion
Enables fixing both lens geometry and chromatic aberrations.
all Enables all possible corrections.
focal_length
The focal length of the image/video (zoom; expected constant for
video). For example, an 18-55mm lens has a focal length range of
[18,55], so a value in that range should be chosen when using that
lens. Default 18.
aperture
The aperture of the image/video (expected constant for video). Note
that aperture is only used for vignetting correction. Default 3.5.
focus_distance
The focus distance of the image/video (expected constant for
video). Note that focus distance is only used for vignetting and
only slightly affects the vignetting correction process. If
unknown, leave it at the default value (which is 1000).
target_geometry
The target geometry of the output image/video. The following values
are valid options:
rectilinear (default)
fisheye
panoramic
equirectangular
fisheye_orthographic
fisheye_stereographic
fisheye_equisolid
fisheye_thoby
reverse
Apply the reverse of image correction (instead of correcting
distortion, apply it).
interpolation
The type of interpolation used when correcting distortion. The
following values are valid options:
nearest
linear (default)
lanczos
Examples
o Apply lens correction with make "Canon", camera model "Canon EOS
100D", and lens model "Canon EF-S 18-55mm f/3.5-5.6 IS STM" with
focal length of "18" and aperture of "8.0".
ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8 -c:v h264 -b:v 8000k output.mov
o Apply the same as before, but only for the first 5 seconds of
video.
ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8:enable='lte(t\,5)' -c:v h264 -b:v 8000k output.mov
libvmaf
Obtain the VMAF (Video Multi-Method Assessment Fusion) score between
two input videos.
The obtained VMAF score is printed through the logging system.
It requires Netflix's vmaf library (libvmaf) as a prerequisite. After
installing the library it can be enabled using: "./configure
--enable-libvmaf --enable-version3". If no model path is specified it
uses the default model: "vmaf_v0.6.1.pkl".
The filter has the following options:
model_path
Set the model path which is to be used for SVM. Default value:
"vmaf_v0.6.1.pkl"
log_path
Set the file path to be used to store logs.
log_fmt
Set the format of the log file (xml or json).
enable_transform
Enables transform for computing vmaf.
phone_model
Invokes the phone model, which will generate VMAF scores higher
than the regular model; the regular model is more suitable for
laptop, TV, etc. viewing conditions.
psnr
Enables computing psnr along with vmaf.
ssim
Enables computing ssim along with vmaf.
ms_ssim
Enables computing ms_ssim along with vmaf.
pool
Set the pool method (mean, min or harmonic mean) to be used for
computing vmaf.
n_threads
Set number of threads to be used when computing vmaf.
n_subsample
Set interval for frame subsampling used when computing vmaf.
enable_conf_interval
Enables confidence interval.
This filter also supports the framesync options.
In the examples below, the input file main.mpg is compared with the
reference file ref.mpg.
ffmpeg -i main.mpg -i ref.mpg -lavfi libvmaf -f null -
Example with options:
ffmpeg -i main.mpg -i ref.mpg -lavfi libvmaf="psnr=1:enable_transform=1" -f null -
limiter
Limits the pixel components values to the specified range [min, max].
The filter accepts the following options:
min Lower bound. Defaults to the lowest allowed value for the input.
max Upper bound. Defaults to the highest allowed value for the input.
planes
Specify which planes will be processed. Defaults to all available.
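For example (a sketch assuming 8-bit input and that the planes value
selects only the first plane), constrain luma to the broadcast-legal
range 16-235:
        limiter=min=16:max=235:planes=1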
loop
Loop video frames.
The filter accepts the following options:
loop
Set the number of loops. Setting this value to -1 will result in
infinite loops. Default is 0.
size
Set maximal size in number of frames. Default is 0.
start
Set first frame of loop. Default is 0.
Examples
o Loop single first frame infinitely:
loop=loop=-1:size=1:start=0
o Loop single first frame 10 times:
loop=loop=10:size=1:start=0
o Loop 10 first frames 5 times:
loop=loop=5:size=10:start=0
lut1d
Apply a 1D LUT to an input video.
The filter accepts the following options:
file
Set the 1D LUT file name.
Currently supported formats:
cube
Iridas
interp
Select interpolation mode.
Available values are:
nearest
Use values from the nearest defined point.
linear
Interpolate values using linear interpolation.
cubic
Interpolate values using cubic interpolation.
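For example (a sketch; the LUT file name is hypothetical):
        lut1d=file=mylut.cube:interp=cubic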
lut3d
Apply a 3D LUT to an input video.
The filter accepts the following options:
file
Set the 3D LUT file name.
Currently supported formats:
3dl AfterEffects
cube
Iridas
dat DaVinci
m3d Pandora
interp
Select interpolation mode.
Available values are:
nearest
Use values from the nearest defined point.
trilinear
Interpolate values using the 8 points defining a cube.
tetrahedral
Interpolate values using a tetrahedron.
This filter also supports the framesync options.
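For example (a sketch; the LUT file name is hypothetical):
        lut3d=file=grade.cube:interp=tetrahedral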
lumakey
Turn certain luma values into transparency.
The filter accepts the following options:
threshold
Set the luma which will be used as base for transparency. Default
value is 0.
tolerance
Set the range of luma values to be keyed out. Default value is 0.
softness
Set the range of softness. Default value is 0. Use this to control
gradual transition from zero to full transparency.
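For example (a sketch; it assumes a build where these options take
values in the [0,1] range), key out near-black pixels with a soft
edge:
        lumakey=threshold=0:tolerance=0.05:softness=0.02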
lut, lutrgb, lutyuv
Compute a look-up table for binding each pixel component input value to
an output value, and apply it to the input video.
lutyuv applies a lookup table to a YUV input video, lutrgb to an RGB
input video.
These filters accept the following parameters:
c0 set first pixel component expression
c1 set second pixel component expression
c2 set third pixel component expression
c3 set fourth pixel component expression, corresponds to the alpha
component
r set red component expression
g set green component expression
b set blue component expression
a set alpha component expression
y set Y/luminance component expression
u set U/Cb component expression
v set V/Cr component expression
Each of them specifies the expression to use for computing the lookup
table for the corresponding pixel component values.
The exact component associated to each of the c* options depends on the
format in input.
The lut filter requires either YUV or RGB pixel formats in input,
lutrgb requires RGB pixel formats in input, and lutyuv requires YUV.
The expressions can contain the following constants and functions:
w
h The input width and height.
val The input value for the pixel component.
clipval
The input value, clipped to the minval-maxval range.
maxval
The maximum value for the pixel component.
minval
The minimum value for the pixel component.
negval
The negated value for the pixel component value, clipped to the
minval-maxval range; it corresponds to the expression
"maxval-clipval+minval".
clip(val)
The computed value in val, clipped to the minval-maxval range.
gammaval(gamma)
The computed gamma correction value of the pixel component value,
clipped to the minval-maxval range. It corresponds to the
expression
"pow((clipval-minval)/(maxval-minval)\,gamma)*(maxval-minval)+minval"
All expressions default to "val".
Examples
o Negate input video:
lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
The above is the same as:
lutrgb="r=negval:g=negval:b=negval"
lutyuv="y=negval:u=negval:v=negval"
o Negate luminance:
lutyuv=y=negval
o Remove chroma components, turning the video into a graytone image:
lutyuv="u=128:v=128"
o Apply a luma burning effect:
lutyuv="y=2*val"
o Remove green and blue components:
lutrgb="g=0:b=0"
o Set a constant alpha channel value on input:
format=rgba,lutrgb=a="maxval-minval/2"
o Correct luminance gamma by a factor of 0.5:
lutyuv=y=gammaval(0.5)
o Discard least significant bits of luma:
lutyuv=y='bitand(val, 128+64+32)'
o Technicolor like effect:
lutyuv=u='(val-maxval/2)*2+maxval/2':v='(val-maxval/2)*2+maxval/2'
lut2, tlut2
The "lut2" filter takes two input streams and outputs one stream.
The "tlut2" (time lut2) filter takes two consecutive frames from one
single stream.
This filter accepts the following parameters:
c0 set first pixel component expression
c1 set second pixel component expression
c2 set third pixel component expression
c3 set fourth pixel component expression, corresponds to the alpha
component
Each of them specifies the expression to use for computing the lookup
table for the corresponding pixel component values.
The exact component associated to each of the c* options depends on the
format in inputs.
The expressions can contain the following constants:
w
h The input width and height.
x The first input value for the pixel component.
y The second input value for the pixel component.
bdx The first input video bit depth.
bdy The second input video bit depth.
All expressions default to "x".
Examples
o Highlight differences between two RGB video streams:
lut2='ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,0,pow(2,bdx)-1)'
o Highlight differences between two YUV video streams:
lut2='ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,pow(2,bdx-1),pow(2,bdx)-1):ifnot(x-y,pow(2,bdx-1),pow(2,bdx)-1)'
o Show max difference between two video streams:
lut2='if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1))):if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1))):if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1)))'
maskedclamp
Clamp the first input stream using the second and third input
streams.
The value of the first stream is clamped between the second input
stream minus "undershoot" and the third input stream plus
"overshoot".
This filter accepts the following options:
undershoot
Default value is 0.
overshoot
Default value is 0.
planes
Set which planes will be processed as a bitmap; unprocessed planes
will be copied from the first stream. The default value is 0xf,
meaning all planes are processed.
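A minimal usage sketch (file names are illustrative), clamping the
first stream between the other two:
        ffmpeg -i main.mp4 -i low.mp4 -i high.mp4 -filter_complex '[0:v][1:v][2:v]maskedclamp' out.mp4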
maskedmerge
Merge the first input stream with the second input stream using per
pixel weights in the third input stream.
A value of 0 in the third stream's pixel component means that the
pixel component from the first stream is returned unchanged, while
the maximum value (e.g. 255 for 8-bit videos) means that the pixel
component from the second stream is returned unchanged. Intermediate
values define the amount of merging between both input streams' pixel
components.
This filter accepts the following options:
planes
Set which planes will be processed as a bitmap; unprocessed planes
will be copied from the first stream. The default value is 0xf,
meaning all planes are processed.
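A minimal usage sketch (file names are illustrative), merging a base
video with a processed variant according to a mask:
        ffmpeg -i base.mp4 -i processed.mp4 -i mask.mp4 -filter_complex '[0:v][1:v][2:v]maskedmerge' out.mp4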
mcdeint
Apply motion-compensation deinterlacing.
It needs one field per frame as input and must thus be used together
with yadif=1/3 or equivalent.
This filter accepts the following options:
mode
Set the deinterlacing mode.
It accepts one of the following values:
fast
medium
slow
use iterative motion estimation
extra_slow
like slow, but use multiple reference frames.
Default value is fast.
parity
Set the picture field parity assumed for the input video. It must
be one of the following values:
0, tff
assume top field first
1, bff
assume bottom field first
Default value is bff.
qp Set per-block quantization parameter (QP) used by the internal
encoder.
Higher values should result in a smoother motion vector field but
less optimal individual vectors. Default value is 1.
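For example (illustrative settings), paired with yadif in
one-frame-per-field mode as required above:
        yadif=1,mcdeint=mode=medium:parity=tff:qp=2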
mergeplanes
Merge color channel components from several video streams.
The filter accepts up to 4 input streams, and merges the selected
input planes into the output video.
This filter accepts the following options:
mapping
Set input to output plane mapping. Default is 0.
The mapping is specified as a bitmap. It should be specified as a
hexadecimal number in the form 0xAa[Bb[Cc[Dd]]]. 'Aa' describes the
mapping for the first plane of the output stream. 'A' sets the
number of the input stream to use (from 0 to 3), and 'a' the plane
number of the corresponding input to use (from 0 to 3). The
remaining mappings are similar: 'Bb' describes the mapping for the
output stream's second plane, 'Cc' for the third plane and 'Dd' for
the fourth plane.
format
Set output pixel format. Default is "yuva444p".
Examples
o Merge three gray video streams of the same width and height into a
single video stream:
[a0][a1][a2]mergeplanes=0x001020:yuv444p
o Merge 1st yuv444p stream and 2nd gray video stream into yuva444p
video stream:
[a0][a1]mergeplanes=0x00010210:yuva444p
o Swap Y and A plane in yuva444p stream:
format=yuva444p,mergeplanes=0x03010200:yuva444p
o Swap U and V plane in yuv420p stream:
format=yuv420p,mergeplanes=0x000201:yuv420p
o Cast a rgb24 clip to yuv444p:
format=rgb24,mergeplanes=0x000102:yuv444p
mestimate
Estimate and export motion vectors using block matching algorithms.
Motion vectors are stored in frame side data to be used by other
filters.
This filter accepts the following options:
method
Specify the motion estimation method. Accepts one of the following
values:
esa Exhaustive search algorithm.
tss Three step search algorithm.
tdls
Two dimensional logarithmic search algorithm.
ntss
New three step search algorithm.
fss Four step search algorithm.
ds Diamond search algorithm.
hexbs
Hexagon-based search algorithm.
epzs
Enhanced predictive zonal search algorithm.
umh Uneven multi-hexagon search algorithm.
Default value is esa.
mb_size
Macroblock size. Default 16.
search_param
Search parameter. Default 7.
midequalizer
Apply Midway Image Equalization effect using two video streams.
Midway Image Equalization adjusts a pair of images to have the same
histogram, while maintaining their dynamics as much as possible. It's
useful for e.g. matching exposures from a pair of stereo cameras.
This filter has two inputs and one output, which must be of the same
pixel format, but may be of different sizes. The output of the filter
is the first input adjusted with the midway histogram of both inputs.
This filter accepts the following option:
planes
Set which planes to process. Default is 15, which is all available
planes.
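A minimal usage sketch (file names are illustrative), matching the
exposures of a stereo pair:
        ffmpeg -i left.mp4 -i right.mp4 -filter_complex '[0:v][1:v]midequalizer' out.mp4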
minterpolate
Convert the video to specified frame rate using motion interpolation.
This filter accepts the following options:
fps Specify the output frame rate. This can be rational e.g.
"60000/1001". Frames are dropped if fps is lower than source fps.
Default 60.
mi_mode
Motion interpolation mode. Following values are accepted:
dup Duplicate previous or next frame for interpolating new ones.
blend
Blend source frames. Interpolated frame is mean of previous and
next frames.
mci Motion compensated interpolation. Following options are
effective when this mode is selected:
mc_mode
Motion compensation mode. Following values are accepted:
obmc
Overlapped block motion compensation.
aobmc
Adaptive overlapped block motion compensation. Window
weighting coefficients are controlled adaptively
according to the reliabilities of the neighboring
motion vectors to reduce oversmoothing.
Default mode is obmc.
me_mode
Motion estimation mode. Following values are accepted:
bidir
Bidirectional motion estimation. Motion vectors are
estimated for each source frame in both forward and
backward directions.
bilat
Bilateral motion estimation. Motion vectors are
estimated directly for interpolated frame.
Default mode is bilat.
me The algorithm to be used for motion estimation. Following
values are accepted:
esa Exhaustive search algorithm.
tss Three step search algorithm.
tdls
Two dimensional logarithmic search algorithm.
ntss
New three step search algorithm.
fss Four step search algorithm.
ds Diamond search algorithm.
hexbs
Hexagon-based search algorithm.
epzs
Enhanced predictive zonal search algorithm.
umh Uneven multi-hexagon search algorithm.
Default algorithm is epzs.
mb_size
Macroblock size. Default 16.
search_param
Motion estimation search parameter. Default 32.
vsbmc
Enable variable-size block motion compensation. Motion
estimation is applied with smaller block sizes at object
boundaries in order to make them less blurred. Default is
0 (disabled).
scd Scene change detection method. A scene change causes motion
vectors to point in random directions, so scene change detection
replaces interpolated frames with duplicate ones. May not be needed
for other modes. Following values are accepted:
none
Disable scene change detection.
fdiff
Frame difference. Corresponding pixel values are compared and
if the difference satisfies scd_threshold, a scene change is
detected.
Default method is fdiff.
scd_threshold
Scene change detection threshold. Default is 5.0.
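For example (illustrative settings), motion-compensated conversion to
60 fps:
        minterpolate=fps=60:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1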
mix
Mix several video input streams into one video stream.
A description of the accepted options follows.
nb_inputs
The number of inputs. If unspecified, it defaults to 2.
weights
Specify the weight of each input video stream as a sequence. Each
weight is separated by a space. If the number of weights is smaller
than the number of inputs, the last specified weight will be used
for all remaining unset weights.
scale
Specify a scale; if set, it will be multiplied with the sum of each
weight multiplied with pixel values to give the final destination
pixel value. By default scale is automatically scaled to the sum of
weights.
duration
Specify how end of stream is determined.
longest
The duration of the longest input. (default)
shortest
The duration of the shortest input.
first
The duration of the first input.
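A minimal usage sketch (file names and weights are illustrative),
averaging three inputs with the first weighted most heavily:
        ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -filter_complex 'mix=nb_inputs=3:weights="3 1 1"' out.mp4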
mpdecimate
Drop frames that do not differ greatly from the previous frame in order
to reduce frame rate.
The main use of this filter is for very-low-bitrate encoding (e.g.
streaming over dialup modem), but it could in theory be used for fixing
movies that were inverse-telecined incorrectly.
A description of the accepted options follows.
max Set the maximum number of consecutive frames which can be dropped
(if positive), or the minimum interval between dropped frames (if
negative). If the value is 0, the frame is dropped disregarding the
number of previous sequentially dropped frames.
Default value is 0.
hi
lo
frac
Set the dropping threshold values.
Values for hi and lo are for 8x8 pixel blocks and represent actual
pixel value differences, so a threshold of 64 corresponds to 1 unit
of difference for each pixel, or the same spread out differently
over the block.
A frame is a candidate for dropping if no 8x8 blocks differ by more
than a threshold of hi, and if no more than frac blocks (1 meaning
the whole image) differ by more than a threshold of lo.
Default value for hi is 64*12, default value for lo is 64*5, and
default value for frac is 0.33.
negate
Negate (invert) the input video.
It accepts the following option:
negate_alpha
With value 1, it negates the alpha component, if present. Default
value is 0.
nlmeans
Denoise frames using Non-Local Means algorithm.
Each pixel is adjusted by looking for other pixels with similar
contexts. This context similarity is defined by comparing their
surrounding patches of size pxp. Patches are searched in an area of rxr
around the pixel.
Note that the research area defines centers for patches, which means
some patches will be made of pixels outside that research area.
The filter accepts the following options.
s Set denoising strength.
p Set patch size.
pc Same as p but for chroma planes.
The default value is 0 and means automatic.
r Set research size.
rc Same as r but for chroma planes.
The default value is 0 and means automatic.
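For example (illustrative values; larger patch and research sizes are
slower):
        nlmeans=s=3:p=7:r=15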
nnedi
Deinterlace video using neural network edge directed interpolation.
This filter accepts the following options:
weights
Mandatory option; without the binary weights file the filter cannot
work. Currently the file can be found here:
https://github.com/dubhater/vapoursynth-nnedi3/blob/master/src/nnedi3_weights.bin
deint
Set which frames to deinterlace. Can be "all" (default) or
"interlaced".
field
Set mode of operation.
Can be one of the following:
af Use frame flags, both fields.
a Use frame flags, single field.
t Use top field only.
b Use bottom field only.
tf Use both fields, top first.
bf Use both fields, bottom first.
planes
Set which planes to process; by default the filter processes all
planes.
nsize
Set size of local neighborhood around each pixel, used by the
predictor neural network.
Can be one of the following:
s8x6
s16x6
s32x6
s48x6
s8x4
s16x4
s32x4
nns Set the number of neurons in predictor neural network. Can be one
of the following:
n16
n32
n64
n128
n256
qual
Controls the number of different neural network predictions that
are blended together to compute the final output value. Can be
"fast" (default) or "slow".
etype
Set which set of weights to use in the predictor. Can be one of
the following:
a weights trained to minimize absolute error
s weights trained to minimize squared error
pscrn
Controls whether or not the prescreener neural network is used to
decide which pixels should be processed by the predictor neural
network and which can be handled by simple cubic interpolation.
The prescreener is trained to know whether cubic interpolation will
be sufficient for a pixel or whether it should be predicted by the
predictor nn. The computational complexity of the prescreener nn
is much less than that of the predictor nn. Since most pixels can
be handled by cubic interpolation, using the prescreener generally
results in much faster processing. The prescreener is pretty
accurate, so the difference between using it and not using it is
almost always unnoticeable.
Can be one of the following:
none
original
new
Default is "new".
fapprox
Set various debugging flags.
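For example (a sketch; the weights path is illustrative and must
point to the binary file mentioned above):
        nnedi=weights=./nnedi3_weights.bin:field=a:nns=n64:qual=slow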
noformat
Force libavfilter not to use any of the specified pixel formats for the
input to the next filter.
It accepts the following parameters:
pix_fmts
A '|'-separated list of pixel format names, such as
"pix_fmts=yuv420p|monow|rgb24".
Examples
o Force libavfilter to use a format different from yuv420p for the
input to the vflip filter:
noformat=pix_fmts=yuv420p,vflip
o Convert the input video to any of the formats not contained in the
list:
noformat=yuv420p|yuv444p|yuv410p
noise
Add noise on video input frame.
The filter accepts the following options:
all_seed
c0_seed
c1_seed
c2_seed
c3_seed
Set noise seed for specific pixel component or all pixel components
in case of all_seed. Default value is 123457.
all_strength, alls
c0_strength, c0s
c1_strength, c1s
c2_strength, c2s
c3_strength, c3s
Set noise strength for a specific pixel component or all pixel
components in case of all_strength. Default value is 0. Allowed
range is [0, 100].
all_flags, allf
c0_flags, c0f
c1_flags, c1f
c2_flags, c2f
c3_flags, c3f
Set pixel component flags or set flags for all components if
all_flags. Available values for component flags are:
a averaged temporal noise (smoother)
p mix random noise with a (semi)regular pattern
t temporal noise (noise pattern changes between frames)
u uniform noise (gaussian otherwise)
Examples
Add temporal and uniform noise to input video:
noise=alls=20:allf=t+u
normalize
Normalize RGB video (aka histogram stretching, contrast stretching).
See: https://en.wikipedia.org/wiki/Normalization_(image_processing)
For each channel of each frame, the filter computes the input range and
maps it linearly to the user-specified output range. The output range
defaults to the full dynamic range from pure black to pure white.
Temporal smoothing can be used on the input range to reduce flickering
(rapid changes in brightness) caused when small dark or bright objects
enter or leave the scene. This is similar to the auto-exposure
(automatic gain control) on a video camera, and, like a video camera,
it may cause a period of over- or under-exposure of the video.
The R,G,B channels can be normalized independently, which may cause
some color shifting, or linked together as a single channel, which
prevents color shifting. Linked normalization preserves hue.
Independent normalization does not, so it can be used to remove some
color casts. Independent and linked normalization can be combined in
any ratio.
The normalize filter accepts the following options:
blackpt
whitept
Colors which define the output range. The minimum input value is
mapped to the blackpt. The maximum input value is mapped to the
whitept. The defaults are black and white respectively. Specifying
white for blackpt and black for whitept will give color-inverted,
normalized video. Shades of grey can be used to reduce the dynamic
range (contrast). Specifying saturated colors here can create some
interesting effects.
smoothing
The number of previous frames to use for temporal smoothing. The
input range of each channel is smoothed using a rolling average
over the current frame and the smoothing previous frames. The
default is 0 (no temporal smoothing).
independence
Controls the ratio of independent (color shifting) channel
normalization to linked (color preserving) normalization. 0.0 is
fully linked, 1.0 is fully independent. Defaults to 1.0 (fully
independent).
strength
Overall strength of the filter. 1.0 is full strength. 0.0 is a
rather expensive no-op. Defaults to 1.0 (full strength).
Examples
Stretch video contrast to use the full dynamic range, with no temporal
smoothing; may flicker depending on the source content:
normalize=blackpt=black:whitept=white:smoothing=0
As above, but with 50 frames of temporal smoothing; flicker should be
reduced, depending on the source content:
normalize=blackpt=black:whitept=white:smoothing=50
As above, but with hue-preserving linked channel normalization:
normalize=blackpt=black:whitept=white:smoothing=50:independence=0
As above, but with half strength:
normalize=blackpt=black:whitept=white:smoothing=50:independence=0:strength=0.5
Map the darkest input color to red, the brightest input color to cyan:
normalize=blackpt=red:whitept=cyan
null
Pass the video source unchanged to the output.
ocr
Optical Character Recognition.
This filter uses Tesseract for optical character recognition. To enable
compilation of this filter, you need to configure FFmpeg with
"--enable-libtesseract".
It accepts the following options:
datapath
Set datapath to tesseract data. Default is to use whatever was set
at installation.
language
Set language, default is "eng".
whitelist
Set character whitelist.
blacklist
Set character blacklist.
The filter exports recognized text as the frame metadata
"lavfi.ocr.text".
ocv
Apply a video transform using libopencv.
To enable this filter, install the libopencv library and headers and
configure FFmpeg with "--enable-libopencv".
It accepts the following parameters:
filter_name
The name of the libopencv filter to apply.
filter_params
The parameters to pass to the libopencv filter. If not specified,
the default values are assumed.
Refer to the official libopencv documentation for more precise
information.
Several libopencv filters are supported; see the following subsections.
dilate
Dilate an image by using a specific structuring element. It
corresponds to the libopencv function "cvDilate".
It accepts the parameters: struct_el|nb_iterations.
struct_el represents a structuring element, and has the syntax:
<cols>x<rows>+<anchor_x>x<anchor_y>/<shape>
cols and rows represent the number of columns and rows of the
structuring element, anchor_x and anchor_y the anchor point, and shape
the shape for the structuring element. shape must be "rect", "cross",
"ellipse", or "custom".
If the value for shape is "custom", it must be followed by a string of
the form "=filename". The file with name filename is assumed to
represent a binary image, with each printable character corresponding
to a bright pixel. When a custom shape is used, cols and rows are
ignored; the number of columns and rows of the read file are used
instead.
The default value for struct_el is "3x3+0x0/rect".
nb_iterations specifies the number of times the transform is applied to
the image, and defaults to 1.
Some examples:
# Use the default values
ocv=dilate
# Dilate using a structuring element with a 5x5 cross, iterating two times
ocv=filter_name=dilate:filter_params=5x5+2x2/cross|2
# Read the shape from the file diamond.shape, iterating two times.
# The file diamond.shape may contain a pattern of characters like this
# *
# ***
# *****
# ***
# *
# The specified columns and rows are ignored
# but the anchor point coordinates are not
ocv=dilate:0x0+2x2/custom=diamond.shape|2
erode
Erode an image by using a specific structuring element. It corresponds
to the libopencv function "cvErode".
It accepts the parameters: struct_el:nb_iterations, with the same
syntax and semantics as the dilate filter.
smooth
Smooth the input video.
The filter takes the following parameters:
type|param1|param2|param3|param4.
type is the type of smooth filter to apply, and must be one of the
following values: "blur", "blur_no_scale", "median", "gaussian", or
"bilateral". The default value is "gaussian".
The meaning of param1, param2, param3, and param4 depends on the
smooth type. param1 and param2 accept positive integer values or 0.
param3 and param4 accept floating point values.
The default value for param1 is 3. The default value for the other
parameters is 0.
These parameters correspond to the parameters assigned to the libopencv
function "cvSmooth".
oscilloscope
2D Video Oscilloscope.
Useful to measure spatial impulse, step responses, chroma delays, etc.
It accepts the following parameters:
x Set scope center x position.
y Set scope center y position.
s Set scope size, relative to frame diagonal.
t Set scope tilt/rotation.
o Set trace opacity.
tx Set trace center x position.
ty Set trace center y position.
tw Set trace width, relative to width of frame.
th Set trace height, relative to height of frame.
c Set which components to trace. By default it traces the first
three components.
g Draw trace grid. Enabled by default.
st Draw some statistics. Enabled by default.
sc Draw scope. Enabled by default.
Examples
o Inspect full first row of video frame.
oscilloscope=x=0.5:y=0:s=1
o Inspect full last row of video frame.
oscilloscope=x=0.5:y=1:s=1
o Inspect full 5th line of video frame of height 1080.
oscilloscope=x=0.5:y=5/1080:s=1
o Inspect full last column of video frame.
oscilloscope=x=1:y=0.5:s=1:t=1
overlay
Overlay one video on top of another.
It takes two inputs and has one output. The first input is the "main"
video on which the second input is overlaid.
A description of the accepted options follows.
x
y Set the expression for the x and y coordinates of the overlaid
video on the main video. Default value is "0" for both expressions.
In case the expression is invalid, it is set to a huge value
(meaning that the overlay will not be displayed within the output
visible area).
eof_action
See framesync.
eval
Set when the expressions for x and y are evaluated.
It accepts the following values:
init
only evaluate expressions once during the filter initialization
or when a command is processed
frame
evaluate expressions for each incoming frame
Default value is frame.
shortest
See framesync.
format
Set the format for the output video.
It accepts the following values:
yuv420
force YUV420 output
yuv422
force YUV422 output
yuv444
force YUV444 output
rgb force packed RGB output
gbrp
force planar RGB output
auto
automatically pick format
Default value is yuv420.
repeatlast
See framesync.
alpha
Set the format of the alpha of the overlaid video; it can be
straight or premultiplied. Default is straight.
The x and y expressions can contain the following parameters.
main_w, W
main_h, H
The main input width and height.
overlay_w, w
overlay_h, h
The overlay input width and height.
x
y The computed values for x and y. They are evaluated for each new
frame.
hsub
vsub
horizontal and vertical chroma subsample values of the output
format. For example for the pixel format "yuv422p" hsub is 2 and
vsub is 1.
n the number of input frame, starting from 0
pos the position in the file of the input frame, NAN if unknown
t The timestamp, expressed in seconds. It's NAN if the input
timestamp is unknown.
This filter also supports the framesync options.
Note that the n, pos, t variables are available only when evaluation is
done per frame, and will evaluate to NAN when eval is set to init.
Be aware that frames are taken from each input video in timestamp
order, hence, if their initial timestamps differ, it is a good idea to
pass the two inputs through a setpts=PTS-STARTPTS filter to have them
begin in the same zero timestamp, as the example for the movie filter
does.
You can chain together more overlays, but you should test the
efficiency of such an approach.
Commands
This filter supports the following commands:
x
y Modify the x and y of the overlay input. The command accepts the
same syntax of the corresponding option.
If the specified expression is not valid, it is kept at its current
value.
Examples
o Draw the overlay at 10 pixels from the bottom right corner of the
main video:
overlay=main_w-overlay_w-10:main_h-overlay_h-10
Using named options the example above becomes:
overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10
o Insert a transparent PNG logo in the bottom left corner of the
input, using the ffmpeg tool with the "-filter_complex" option:
ffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output
o Insert 2 different transparent PNG logos (second logo on bottom
right corner) using the ffmpeg tool:
ffmpeg -i input -i logo1 -i logo2 -filter_complex 'overlay=x=10:y=H-h-10,overlay=x=W-w-10:y=H-h-10' output
o Add a transparent color layer on top of the main video; "WxH" must
specify the size of the main input to the overlay filter:
color=color=red@.3:size=WxH [over]; [in][over] overlay [out]
o Play an original video and a filtered version (here with the
deshake filter) side by side using the ffplay tool:
ffplay input.avi -vf 'split[a][b]; [a]pad=iw*2:ih[src]; [b]deshake[filt]; [src][filt]overlay=w'
The above command is the same as:
ffplay input.avi -vf 'split[b], pad=iw*2[src], [b]deshake, [src]overlay=w'
o Make a sliding overlay appearing from the left to the right top
part of the screen starting at time 2:
overlay=x='if(gte(t,2), -w+(t-2)*20, NAN)':y=0
o Compose output by putting two input videos side to side:
ffmpeg -i left.avi -i right.avi -filter_complex "
nullsrc=size=200x100 [background];
[0:v] setpts=PTS-STARTPTS, scale=100x100 [left];
[1:v] setpts=PTS-STARTPTS, scale=100x100 [right];
[background][left] overlay=shortest=1 [background+left];
[background+left][right] overlay=shortest=1:x=100 [left+right]
"
o Mask 10-20 seconds of a video by applying the delogo filter to a
section
ffmpeg -i test.avi -codec:v:0 wmv2 -ar 11025 -b:v 9000k
-vf '[in]split[split_main][split_delogo];[split_delogo]trim=start=360:end=371,delogo=0:0:640:480[delogoed];[split_main][delogoed]overlay=eof_action=pass[out]'
masked.avi
o Chain several overlays in cascade:
nullsrc=s=200x200 [bg];
testsrc=s=100x100, split=4 [in0][in1][in2][in3];
[in0] lutrgb=r=0, [bg] overlay=0:0 [mid0];
[in1] lutrgb=g=0, [mid0] overlay=100:0 [mid1];
[in2] lutrgb=b=0, [mid1] overlay=0:100 [mid2];
[in3] null, [mid2] overlay=100:100 [out0]
owdenoise
Apply Overcomplete Wavelet denoiser.
The filter accepts the following options:
depth
Set depth.
Larger depth values will denoise lower frequency components more,
but slow down filtering.
Must be an int in the range 8-16, default is 8.
luma_strength, ls
Set luma strength.
Must be a double value in the range 0-1000, default is 1.0.
chroma_strength, cs
Set chroma strength.
Must be a double value in the range 0-1000, default is 1.0.
pad
Add paddings to the input image, and place the original input at the
provided x, y coordinates.
It accepts the following parameters:
width, w
height, h
Specify an expression for the size of the output image with the
paddings added. If the value for width or height is 0, the
corresponding input size is used for the output.
The width expression can reference the value set by the height
expression, and vice versa.
The default value of width and height is 0.
x
y Specify the offsets to place the input image at within the padded
area, with respect to the top/left border of the output image.
The x expression can reference the value set by the y expression,
and vice versa.
The default value of x and y is 0.
If x or y evaluate to a negative number, they'll be changed so the
input image is centered on the padded area.
color
Specify the color of the padded area. For the syntax of this
option, check the "Color" section in the ffmpeg-utils manual.
The default value of color is "black".
eval
Specify when to evaluate width, height, x and y expression.
It accepts the following values:
init
Only evaluate expressions once during the filter initialization
or when a command is processed.
frame
Evaluate expressions for each incoming frame.
Default value is init.
aspect
Pad to an aspect ratio instead of a resolution.
The value for the width, height, x, and y options are expressions
containing the following constants:
in_w
in_h
The input video width and height.
iw
ih These are the same as in_w and in_h.
out_w
out_h
The output width and height (the size of the padded area), as
specified by the width and height expressions.
ow
oh These are the same as out_w and out_h.
x
y The x and y offsets as specified by the x and y expressions, or NAN
if not yet specified.
a same as iw / ih
sar input sample aspect ratio
dar input display aspect ratio, it is the same as (iw / ih) * sar
hsub
vsub
The horizontal and vertical chroma subsample values. For example
for the pixel format "yuv422p" hsub is 2 and vsub is 1.
Examples
o Add paddings with the color "violet" to the input video. The output
video size is 640x480, and the top-left corner of the input video
is placed at column 0, row 40
pad=640:480:0:40:violet
The example above is equivalent to the following command:
pad=width=640:height=480:x=0:y=40:color=violet
o Pad the input to get an output with dimensions increased by 3/2,
and put the input video at the center of the padded area:
pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
o Pad the input to get a squared output with size equal to the
maximum value between the input width and height, and put the input
video at the center of the padded area:
pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
o Pad the input to get a final w/h ratio of 16:9:
pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
o In case of anamorphic video, in order to set the output display
aspect correctly, it is necessary to use sar in the expression,
according to the relation:
(ih * X / ih) * sar = output_dar
X = output_dar / sar
Thus the previous example needs to be modified to:
pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
o Double the output size and put the input video in the bottom-right
corner of the output padded area:
pad="2*iw:2*ih:ow-iw:oh-ih"
palettegen
Generate one palette for a whole video stream.
It accepts the following options:
max_colors
Set the maximum number of colors to quantize in the palette. Note:
the palette will still contain 256 colors; the unused palette
entries will be black.
reserve_transparent
Create a palette of 255 colors maximum and reserve the last one for
transparency. Reserving the transparency color is useful for GIF
optimization. If not set, the maximum number of colors in the
palette will be 256. You probably want to disable this option for a
standalone image. Set by default.
transparency_color
Set the color that will be used as background for transparency.
stats_mode
Set statistics mode.
It accepts the following values:
full
Compute full frame histograms.
diff
Compute histograms only for the part that differs from previous
frame. This might be relevant to give more importance to the
moving part of your input if the background is static.
single
Compute new histogram for each frame.
Default value is full.
The filter also exports the frame metadata "lavfi.color_quant_ratio"
("nb_color_in / nb_color_out") which you can use to evaluate the degree
of color quantization of the palette. This information is also visible
at info logging level.
Examples
o Generate a representative palette of a given video using ffmpeg:
ffmpeg -i input.mkv -vf palettegen palette.png
paletteuse
Use a palette to downsample an input video stream.
The filter takes two inputs: one video stream and a palette. The
palette must be a 256-pixel image.
It accepts the following options:
dither
Select dithering mode. Available algorithms are:
bayer
Ordered 8x8 bayer dithering (deterministic)
heckbert
Dithering as defined by Paul Heckbert in 1982 (simple error
diffusion). Note: this dithering is sometimes considered
"wrong" and is included as a reference.
floyd_steinberg
Floyd and Steinberg dithering (error diffusion)
sierra2
Frankie Sierra dithering v2 (error diffusion)
sierra2_4a
Frankie Sierra dithering v2 "Lite" (error diffusion)
Default is sierra2_4a.
bayer_scale
When bayer dithering is selected, this option defines the scale of
the pattern (how much the crosshatch pattern is visible). A low
value means a more visible pattern for less banding, and a higher
value means a less visible pattern at the cost of more banding.
The option must be an integer value in the range [0,5]. Default is
2.
diff_mode
If set, defines the zone to process:
rectangle
Only the changing rectangle will be reprocessed. This is
similar to GIF cropping/offsetting compression mechanism. This
option can be useful for speed if only a part of the image is
changing, and has use cases such as limiting the scope of the
error diffusion dither to the rectangle that bounds the moving
scene (it leads to more deterministic output if the scene
doesn't change much, and as a result less moving noise and
better GIF compression).
Default is none.
new Take new palette for each output frame.
alpha_threshold
Sets the alpha threshold for transparency. Alpha values above this
threshold will be treated as completely opaque, and values below
this threshold will be treated as completely transparent.
The option must be an integer value in the range [0,255]. Default
is 128.
Examples
o Use a palette (generated for example with palettegen) to encode a
GIF using ffmpeg:
ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif
perspective
Correct perspective of video not recorded perpendicular to the screen.
A description of the accepted parameters follows.
x0
y0
x1
y1
x2
y2
x3
y3 Set coordinates expression for top left, top right, bottom left and
bottom right corners. Default values are "0:0:W:0:0:H:W:H" with
which perspective will remain unchanged. If the "sense" option is
set to "source", then the specified points will be sent to the
corners of the destination. If the "sense" option is set to
"destination", then the corners of the source will be sent to the
specified coordinates.
The expressions can use the following variables:
W
H the width and height of video frame.
in Input frame count.
on Output frame count.
interpolation
Set interpolation for perspective correction.
It accepts the following values:
linear
cubic
Default value is linear.
sense
Set interpretation of coordinate options.
It accepts the following values:
0, source
Send the points in the source specified by the given coordinates
to the corners of the destination.
1, destination
Send the corners of the source to the point in the destination
specified by the given coordinates.
Default value is source.
eval
Set when the expressions for coordinates x0,y0,...x3,y3 are
evaluated.
It accepts the following values:
init
only evaluate expressions once during the filter initialization
or when a command is processed
frame
evaluate expressions for each incoming frame
Default value is init.
phase
Delay interlaced video by one field time so that the field order
changes.
The intended use is to fix PAL movies that have been captured with the
opposite field order to the film-to-video transfer.
A description of the accepted parameters follows.
mode
Set phase mode.
It accepts the following values:
t Capture field order top-first, transfer bottom-first. Filter
will delay the bottom field.
b Capture field order bottom-first, transfer top-first. Filter
will delay the top field.
p Capture and transfer with the same field order. This mode only
exists for the documentation of the other options to refer to,
but if you actually select it, the filter will faithfully do
nothing.
a Capture field order determined automatically by field flags,
transfer opposite. Filter selects among t and b modes on a
frame by frame basis using field flags. If no field information
is available, then this works just like u.
u Capture unknown or varying, transfer opposite. Filter selects
among t and b on a frame by frame basis by analyzing the images
and selecting the alternative that produces best match between
the fields.
T Capture top-first, transfer unknown or varying. Filter selects
among t and p using image analysis.
B Capture bottom-first, transfer unknown or varying. Filter
selects among b and p using image analysis.
A Capture determined by field flags, transfer unknown or varying.
Filter selects among t, b and p using field flags and image
analysis. If no field information is available, then this works
just like U. This is the default mode.
U Both capture and transfer unknown or varying. Filter selects
among t, b and p using image analysis only.
pixdesctest
Pixel format descriptor test filter, mainly useful for internal
testing. The output video should be equal to the input video.
For example:
format=monow, pixdesctest
can be used to test the monowhite pixel format descriptor definition.
pixscope
Display sample values of color channels. Mainly useful for checking
color and levels. Minimum supported resolution is 640x480.
The filter accepts the following options:
x Set scope X position, relative offset on X axis.
y Set scope Y position, relative offset on Y axis.
w Set scope width.
h Set scope height.
o Set window opacity. This window also holds statistics about pixel
area.
wx Set window X position, relative offset on X axis.
wy Set window Y position, relative offset on Y axis.
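For example (illustrative values), inspect a small area above and
left of the frame center:
        pixscope=x=0.3:y=0.3:w=7:h=7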
pp
Enable the specified chain of postprocessing subfilters using
libpostproc. This library should be automatically selected with a GPL
build ("--enable-gpl"). Subfilters must be separated by '/' and can be
disabled by prepending a '-'. Each subfilter and some options have a
short and a long name that can be used interchangeably, i.e. dr/dering
are the same.
The filter accepts the following options:
subfilters
Set postprocessing subfilters string.
All subfilters share common options to determine their scope:
a/autoq
Honor the quality commands for this subfilter.
c/chrom
Do chrominance filtering, too (default).
y/nochrom
Do luminance filtering only (no chrominance).
n/noluma
Do chrominance filtering only (no luminance).
These options can be appended after the subfilter name, separated by a
'|'.
Available subfilters are:
hb/hdeblock[|difference[|flatness]]
Horizontal deblocking filter
difference
Difference factor where higher values mean more deblocking
(default: 32).
flatness
Flatness threshold where lower values mean more deblocking
(default: 39).
vb/vdeblock[|difference[|flatness]]
Vertical deblocking filter
difference
Difference factor where higher values mean more deblocking
(default: 32).
flatness
Flatness threshold where lower values mean more deblocking
(default: 39).
ha/hadeblock[|difference[|flatness]]
Accurate horizontal deblocking filter
difference
Difference factor where higher values mean more deblocking
(default: 32).
flatness
Flatness threshold where lower values mean more deblocking
(default: 39).
va/vadeblock[|difference[|flatness]]
Accurate vertical deblocking filter
difference
Difference factor where higher values mean more deblocking
(default: 32).
flatness
Flatness threshold where lower values mean more deblocking
(default: 39).
The horizontal and vertical deblocking filters share the difference and
flatness values so you cannot set different horizontal and vertical
thresholds.
h1/x1hdeblock
Experimental horizontal deblocking filter
v1/x1vdeblock
Experimental vertical deblocking filter
dr/dering
Deringing filter
tn/tmpnoise[|threshold1[|threshold2[|threshold3]]], temporal noise
reducer
threshold1
larger -> stronger filtering
threshold2
larger -> stronger filtering
threshold3
larger -> stronger filtering
al/autolevels[:f/fullyrange], automatic brightness / contrast
correction
f/fullyrange
Stretch luminance to "0-255".
lb/linblenddeint
Linear blend deinterlacing filter that deinterlaces the given block
by filtering all lines with a "(1 2 1)" filter.
li/linipoldeint
Linear interpolating deinterlacing filter that deinterlaces the
given block by linearly interpolating every second line.
ci/cubicipoldeint
Cubic interpolating deinterlacing filter deinterlaces the given
block by cubically interpolating every second line.
md/mediandeint
Median deinterlacing filter that deinterlaces the given block by
applying a median filter to every second line.
fd/ffmpegdeint
FFmpeg deinterlacing filter that deinterlaces the given block by
filtering every second line with a "(-1 4 2 4 -1)" filter.
l5/lowpass5
Vertically applied FIR lowpass deinterlacing filter that
deinterlaces the given block by filtering all lines with a "(-1 2 6
2 -1)" filter.
fq/forceQuant[|quantizer]
Overrides the quantizer table from the input with the constant
quantizer you specify.
quantizer
Quantizer to use
de/default
Default pp filter combination ("hb|a,vb|a,dr|a")
fa/fast
Fast pp filter combination ("h1|a,v1|a,dr|a")
ac High quality pp filter combination ("ha|a|128|7,va|a,dr|a")
Examples
o Apply horizontal and vertical deblocking, deringing and automatic
brightness/contrast:
pp=hb/vb/dr/al
o Apply default filters without brightness/contrast correction:
pp=de/-al
o Apply default filters and temporal denoiser:
pp=default/tmpnoise|1|2|3
o Apply deblocking on luminance only, and switch vertical deblocking
on or off automatically depending on available CPU time:
pp=hb|y/vb|a
pp7
Apply Postprocessing filter 7. It is a variant of the spp filter,
similar to spp = 6 with 7 point DCT, where only the center sample is
used after IDCT.
The filter accepts the following options:
qp Force a constant quantization parameter. It accepts an integer in
range 0 to 63. If not set, the filter will use the QP from the
video stream (if available).
mode
Set thresholding mode. Available modes are:
hard
Set hard thresholding.
soft
Set soft thresholding (better de-ringing effect, but likely
blurrier).
medium
Set medium thresholding (good results, default).
premultiply
Apply alpha premultiply effect to input video stream using first plane
of second stream as alpha.
Both streams must have the same dimensions and the same pixel format.
The filter accepts the following options:
planes
Set which planes will be processed; unprocessed planes will be
copied. The default value is 0xf, meaning all planes are processed.
inplace
Do not require 2nd input for processing, instead use alpha plane
from input stream.
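A minimal usage sketch (file names are illustrative), taking the
alpha from the first plane of a second stream:
        ffmpeg -i main.mp4 -i alpha.mp4 -filter_complex '[0:v][1:v]premultiply' out.mp4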
prewitt
Apply the Prewitt operator to the input video stream.
The filter accepts the following options:
planes
Set which planes will be processed; unprocessed planes will be
copied. The default value is 0xf, meaning all planes are processed.
scale
Set value which will be multiplied with filtered result.
delta
Set value which will be added to filtered result.
program_opencl
Filter video using an OpenCL program.
source
OpenCL program source file.
kernel
Kernel name in program.
inputs
Number of inputs to the filter. Defaults to 1.
size, s
Size of output frames. Defaults to the same as the first input.
The program source file must contain a kernel function with the given
name, which will be run once for each plane of the output. Each run on
a plane gets enqueued as a separate 2D global NDRange with one work-
item for each pixel to be generated. The global ID offset for each
work-item is therefore the coordinates of a pixel in the destination
image.
The kernel function needs to take the following arguments:
o Destination image, __write_only image2d_t.
This image will become the output; the kernel should write all of
it.
o Frame index, unsigned int.
This is a counter starting from zero and increasing by one for each
frame.
o Source images, __read_only image2d_t.
These are the most recent images on each input. The kernel may
read from them to generate the output, but they can't be written
to.
Example programs:
o Copy the input to the output (output must be the same size as the
input).
__kernel void copy(__write_only image2d_t destination,
                   unsigned int index,
                   __read_only  image2d_t source)
{
    const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE;

    int2 location = (int2)(get_global_id(0), get_global_id(1));

    float4 value = read_imagef(source, sampler, location);

    write_imagef(destination, location, value);
}
o Apply a simple transformation, rotating the input by an amount
increasing with the index counter. Pixel values are linearly
interpolated by the sampler, and the output need not have the same
dimensions as the input.
__kernel void rotate_image(__write_only image2d_t dst,
                           unsigned int index,
                           __read_only  image2d_t src)
{
    const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
                               CLK_FILTER_LINEAR);

    float angle = (float)index / 100.0f;

    float2 dst_dim = convert_float2(get_image_dim(dst));
    float2 src_dim = convert_float2(get_image_dim(src));

    float2 dst_cen = dst_dim / 2.0f;
    float2 src_cen = src_dim / 2.0f;

    int2   dst_loc = (int2)(get_global_id(0), get_global_id(1));

    float2 dst_pos = convert_float2(dst_loc) - dst_cen;
    float2 src_pos = {
        cos(angle) * dst_pos.x - sin(angle) * dst_pos.y,
        sin(angle) * dst_pos.x + cos(angle) * dst_pos.y
    };
    src_pos = src_pos * src_dim / dst_dim;

    float2 src_loc = src_pos + src_cen;

    if (src_loc.x < 0.0f || src_loc.y < 0.0f ||
        src_loc.x > src_dim.x || src_loc.y > src_dim.y)
        write_imagef(dst, dst_loc, 0.5f);
    else
        write_imagef(dst, dst_loc, read_imagef(src, sampler, src_loc));
}
o Blend two inputs together, with the amount of each input used
varying with the index counter.
__kernel void blend_images(__write_only image2d_t dst,
                           unsigned int index,
                           __read_only  image2d_t src1,
                           __read_only  image2d_t src2)
{
    const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
                               CLK_FILTER_LINEAR);

    float blend = (cos((float)index / 50.0f) + 1.0f) / 2.0f;

    int2  dst_loc  = (int2)(get_global_id(0), get_global_id(1));
    int2  src1_loc = dst_loc * get_image_dim(src1) / get_image_dim(dst);
    int2  src2_loc = dst_loc * get_image_dim(src2) / get_image_dim(dst);

    float4 val1 = read_imagef(src1, sampler, src1_loc);
    float4 val2 = read_imagef(src2, sampler, src2_loc);

    write_imagef(dst, dst_loc, val1 * blend + val2 * (1.0f - blend));
}
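A possible invocation (a sketch; the device specification, file names
and the surrounding hwupload/hwdownload conversions are illustrative
and depend on the local OpenCL setup):
        ffmpeg -init_hw_device opencl=ocl -filter_hw_device ocl -i input.mp4 -vf 'format=yuv420p,hwupload,program_opencl=source=rotate.cl:kernel=rotate_image,hwdownload,format=yuv420p' output.mp4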
pseudocolor
Alter frame colors in video with pseudocolors.
This filter accepts the following options:
c0 set pixel first component expression
c1 set pixel second component expression
c2 set pixel third component expression
c3 set pixel fourth component expression, corresponds to the alpha
component
i set component to use as base for altering colors
Each of them specifies the expression to use for computing the lookup
table for the corresponding pixel component values.
The expressions can contain the following constants and functions:
w
h The input width and height.
val The input value for the pixel component.
ymin, umin, vmin, amin
The minimum allowed component value.
ymax, umax, vmax, amax
The maximum allowed component value.
All expressions default to "val".
Examples
o Map overly high luma values to a gradient:
pseudocolor="'if(between(val,ymax,amax),lerp(ymin,ymax,(val-ymax)/(amax-ymax)),-1):if(between(val,ymax,amax),lerp(umax,umin,(val-ymax)/(amax-ymax)),-1):if(between(val,ymax,amax),lerp(vmin,vmax,(val-ymax)/(amax-ymax)),-1):-1'"
psnr
Obtain the average, maximum and minimum PSNR (Peak Signal to Noise
Ratio) between two input videos.
This filter takes two input videos; the first input is considered the
"main" source and is passed unchanged to the output. The second input
is used as a "reference" video for computing the PSNR.
Both video inputs must have the same resolution and pixel format for
this filter to work correctly. Also it assumes that both inputs have
the same number of frames, which are compared one by one.
The obtained average PSNR is printed through the logging system.
The filter stores the accumulated MSE (mean squared error) of each
frame, and at the end of the processing it is averaged across all
frames equally, and the following formula is applied to obtain the
PSNR:
PSNR = 10*log10(MAX^2/MSE)
Where MAX is the average of the maximum values of each component of the
image.
The description of the accepted parameters follows.
stats_file, f
If specified the filter will use the named file to save the PSNR of
each individual frame. When filename equals "-" the data is sent to
standard output.
stats_version
Specifies which version of the stats file format to use. Details of
each format are written below. Default value is 1.
stats_add_max
Determines whether the max value is output to the stats log.
Default value is 0. Requires stats_version >= 2. If this is set
and stats_version < 2, the filter will return an error.
This filter also supports the framesync options.
The file printed if stats_file is selected contains a sequence of
key/value pairs of the form key:value for each compared couple of
frames.
If a stats_version greater than 1 is specified, a header line precedes
the list of per-frame-pair stats, with key value pairs following the
frame format with the following parameters:
psnr_log_version
The version of the log file format. Will match stats_version.
fields
A comma separated list of the per-frame-pair parameters included in
the log.
A description of each shown per-frame-pair parameter follows:
n sequential number of the input frame, starting from 1
mse_avg
Mean Square Error pixel-by-pixel average difference of the compared
frames, averaged over all the image components.
mse_y, mse_u, mse_v, mse_r, mse_g, mse_b, mse_a
Mean Square Error pixel-by-pixel average difference of the compared
frames for the component specified by the suffix.
psnr_y, psnr_u, psnr_v, psnr_r, psnr_g, psnr_b, psnr_a
Peak Signal to Noise ratio of the compared frames for the component
specified by the suffix.
max_avg, max_y, max_u, max_v
Maximum allowed value for each channel, and average over all
channels.
For example:
movie=ref_movie.mpg, setpts=PTS-STARTPTS [main];
[main][ref] psnr="stats_file=stats.log" [out]
In this example the input file being processed is compared with the
reference file ref_movie.mpg. The PSNR of each individual frame is
stored in stats.log.
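A complete command might look as follows (main.mpg and ref.mpg are
placeholder input files; the null muxer discards the output while the
PSNR summary is logged):
ffmpeg -i main.mpg -i ref.mpg -lavfi psnr=stats_file=psnr.log -f null -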
pullup
Pulldown reversal (inverse telecine) filter, capable of handling mixed
hard-telecine, 24000/1001 fps progressive, and 30000/1001 fps
progressive content.
The pullup filter is designed to take advantage of future context in
making its decisions. This filter is stateless in the sense that it
does not lock onto a pattern to follow, but it instead looks forward to
the following fields in order to identify matches and rebuild
progressive frames.
To produce content with an even framerate, insert the fps filter after
pullup: use "fps=24000/1001" if the input frame rate is 29.97 fps, or
"fps=24" for 30 fps and the (rare) telecined 25 fps input.
The filter accepts the following options:
jl
jr
jt
jb These options set the amount of "junk" to ignore at the left,
right, top, and bottom of the image, respectively. Left and right
are in units of 8 pixels, while top and bottom are in units of 2
lines. The default is 8 pixels on each side.
sb Set the strict breaks. Setting this option to 1 will reduce the
chances of the filter generating an occasional mismatched frame,
but it may also cause an excessive number of frames to be dropped
during high motion sequences. Conversely, setting it to -1 will
make the filter match fields more easily. This may help processing
of video where there is slight blurring between the fields, but may
also cause there to be interlaced frames in the output. Default
value is 0.
mp Set the metric plane to use. It accepts the following values:
l Use luma plane.
u Use chroma blue plane.
v Use chroma red plane.
This option may be set to use a chroma plane instead of the default
luma plane for the filter's computations. This may improve accuracy
on very clean source material, but more likely will decrease
accuracy, especially if there is chroma noise (rainbow effect) or
the video is grayscale. The main purpose of setting mp to a chroma
plane is to reduce CPU load and make pullup usable in real time on
slow machines.
For best results (without duplicated frames in the output file) it is
necessary to change the output frame rate. For example, to inverse
telecine NTSC input:
ffmpeg -i input -vf pullup -r 24000/1001 ...
qp
Change video quantization parameters (QP).
The filter accepts the following option:
qp Set expression for quantization parameter.
The expression is evaluated through the eval API and can contain, among
others, the following constants:
known
1 if index is not 129, 0 otherwise.
qp Sequential index starting from -129 to 128.
Examples
o Apply an equation such as:
qp=2+2*sin(PI*qp)
random
Flush video frames from an internal cache of frames in random order.
No frame is discarded. Inspired by the frei0r nervous filter.
The filter accepts the following options:
frames
Set size in number of frames of internal cache, in range from 2 to
512. Default is 30.
seed
Set seed for random number generator, must be an integer included
between 0 and "UINT32_MAX". If not specified, or if explicitly set
to less than 0, the filter will try to use a good random seed on a
best effort basis.
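For example, to shuffle frames within a 60-frame window using a fixed
seed (the values here are purely illustrative):
random=frames=60:seed=42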
readeia608
Read closed captioning (EIA-608) information from the top lines of a
video frame.
This filter adds frame metadata for "lavfi.readeia608.X.cc" and
"lavfi.readeia608.X.line", where "X" is the number of the identified
line with EIA-608 data (starting from 0). A description of each
metadata value follows:
lavfi.readeia608.X.cc
The two bytes stored as EIA-608 data (printed in hexadecimal).
lavfi.readeia608.X.line
The number of the line on which the EIA-608 data was identified and
read.
This filter accepts the following options:
scan_min
Set the line to start scanning for EIA-608 data. Default is 0.
scan_max
Set the line to end scanning for EIA-608 data. Default is 29.
mac Set minimal acceptable amplitude change for sync codes detection.
Default is 0.2. Allowed range is "[0.001 - 1]".
spw Set the ratio of width reserved for sync code detection. Default
is 0.27. Allowed range is "[0.01 - 0.7]".
mhd Set the max peaks height difference for sync code detection.
Default is 0.1. Allowed range is "[0.0 - 0.5]".
mpd Set max peaks period difference for sync code detection. Default
is 0.1. Allowed range is "[0.0 - 0.5]".
msd Set the first two max start code bits differences. Default is
0.02. Allowed range is "[0.0 - 0.5]".
bhd Set the minimum ratio of bits height compared to 3rd start code
bit. Default is 0.75. Allowed range is "[0.01 - 1]".
th_w
Set the white color threshold. Default is 0.35. Allowed range is
"[0.1 - 1]".
th_b
Set the black color threshold. Default is 0.15. Allowed range is
"[0.0 - 0.5]".
chp Enable checking the parity bit. In the event of a parity error, the
filter will output 0x00 for that character. Default is false.
Examples
o Output a CSV with the presentation time and the first two lines of
identified EIA-608 captioning data:
ffprobe -f lavfi -i movie=captioned_video.mov,readeia608 -show_entries frame=pkt_pts_time:frame_tags=lavfi.readeia608.0.cc,lavfi.readeia608.1.cc -of csv
readvitc
Read vertical interval timecode (VITC) information from the top lines
of a video frame.
The filter adds frame metadata key "lavfi.readvitc.tc_str" with the
timecode value, if a valid timecode has been detected. Further metadata
key "lavfi.readvitc.found" is set to 0/1 depending on whether timecode
data has been found or not.
This filter accepts the following options:
scan_max
Set the maximum number of lines to scan for VITC data. If the value
is set to "-1" the full video frame is scanned. Default is 45.
thr_b
Set the luma threshold for black. Accepts float numbers in the
range [0.0,1.0], default value is 0.2. The value must be equal or
less than "thr_w".
thr_w
Set the luma threshold for white. Accepts float numbers in the
range [0.0,1.0], default value is 0.6. The value must be equal or
greater than "thr_b".
Examples
o Detect and draw VITC data onto the video frame; if no valid VITC is
detected, draw "--:--:--:--" as a placeholder:
ffmpeg -i input.avi -filter:v 'readvitc,drawtext=fontfile=FreeMono.ttf:text=%{metadata\\:lavfi.readvitc.tc_str\\:--\\\\\\:--\\\\\\:--\\\\\\:--}:x=(w-tw)/2:y=400-ascent'
remap
Remap pixels, using the second input stream as the Xmap and the third
as the Ymap.
The destination pixel at position (X, Y) is picked from the source
position (x, y), where x = Xmap(X, Y) and y = Ymap(X, Y). If the
mapping values are out of range, a zero value is used for the
destination pixel.
The Xmap and Ymap input video streams must have the same dimensions;
the output video stream will have the Xmap/Ymap dimensions. The Xmap
and Ymap input video streams must be 16-bit depth, single channel.
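A sketch of typical usage follows; xmap.pgm and ymap.pgm are
hypothetical 16-bit single-channel map images with the desired output
dimensions:
ffmpeg -i input.mp4 -i xmap.pgm -i ymap.pgm -lavfi '[0:v][1:v][2:v]remap' output.mp4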
removegrain
The removegrain filter is a spatial denoiser for progressive video.
The filter accepts the following options:
m0 Set mode for the first plane.
m1 Set mode for the second plane.
m2 Set mode for the third plane.
m3 Set mode for the fourth plane.
Range of mode is from 0 to 24. Description of each mode follows:
0 Leave input plane unchanged. Default.
1 Clips the pixel with the minimum and maximum of the 8 neighbour
pixels.
2 Clips the pixel with the second minimum and maximum of the 8
neighbour pixels.
3 Clips the pixel with the third minimum and maximum of the 8
neighbour pixels.
4 Clips the pixel with the fourth minimum and maximum of the 8
neighbour pixels. This is equivalent to a median filter.
5 Line-sensitive clipping giving the minimal change.
6 Line-sensitive clipping, intermediate.
7 Line-sensitive clipping, intermediate.
8 Line-sensitive clipping, intermediate.
9 Line-sensitive clipping on a line where the neighbour pixels are
the closest.
10 Replaces the target pixel with the closest neighbour.
11 [1 2 1] horizontal and vertical kernel blur.
12 Same as mode 11.
13 Bob mode, interpolates top field from the line where the neighbour
pixels are the closest.
14 Bob mode, interpolates bottom field from the line where the
neighbour pixels are the closest.
15 Bob mode, interpolates top field. Same as 13 but with a more
complicated interpolation formula.
16 Bob mode, interpolates bottom field. Same as 14 but with a more
complicated interpolation formula.
17 Clips the pixel with the minimum and maximum of respectively the
maximum and minimum of each pair of opposite neighbour pixels.
18 Line-sensitive clipping using opposite neighbours whose greatest
distance from the current pixel is minimal.
19 Replaces the pixel with the average of its 8 neighbours.
20 Averages the 9 pixels ([1 1 1] horizontal and vertical blur).
21 Clips pixels using the averages of opposite neighbours.
22 Same as mode 21 but simpler and faster.
23 Small edge and halo removal, but reputed useless.
24 Similar to 23.
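For example, to apply a median filter (mode 4) to the first plane
while leaving the remaining planes unchanged (the mode choice is
illustrative):
removegrain=m0=4:m1=0:m2=0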
removelogo
Suppress a TV station logo, using an image file to determine which
pixels comprise the logo. It works by filling in the pixels that
comprise the logo with neighboring pixels.
The filter accepts the following options:
filename, f
Set the filter bitmap file, which can be any image format supported
by libavformat. The width and height of the image file must match
those of the video stream being processed.
Pixels in the provided bitmap image with a value of zero are not
considered part of the logo, non-zero pixels are considered part of the
logo. If you use white (255) for the logo and black (0) for the rest,
you will be safe. For making the filter bitmap, it is recommended to
take a screen capture of a black frame with the logo visible, and then
use a threshold filter followed by the erode filter once or twice.
If needed, little splotches can be fixed manually. Remember that if
logo pixels are not covered, the filter quality will be much reduced.
Marking too many pixels as part of the logo does not hurt as much, but
it will increase the amount of blurring needed to cover over the image
and will destroy more information than necessary, and extra pixels will
slow things down on a large logo.
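A minimal invocation might look as follows, where logo_mask.png is a
hypothetical bitmap matching the video dimensions, with non-zero
pixels marking the logo:
removelogo=filename=logo_mask.png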
repeatfields
This filter uses the repeat_field flag from the Video ES headers and
hard repeats fields based on its value.
reverse
Reverse a video clip.
Warning: This filter requires memory to buffer the entire clip, so
trimming is suggested.
Examples
o Take the first 5 seconds of a clip, and reverse it.
trim=end=5,reverse
roberts
Apply the Roberts cross operator to the input video stream.
The filter accepts the following option:
planes
Set which planes will be processed; unprocessed planes will be
copied. The default value is 0xf, so all planes will be processed.
scale
Set the value that will be multiplied with the filtered result.
delta
Set the value that will be added to the filtered result.
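For example, to process only the first plane and amplify the detected
edges (values chosen purely for illustration):
roberts=planes=1:scale=2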
rotate
Rotate video by an arbitrary angle expressed in radians.
The filter accepts the following options:
angle, a
Set an expression for the angle by which to rotate the input video
clockwise, expressed as a number of radians. A negative value will
result in a counter-clockwise rotation. By default it is set to
"0".
This expression is evaluated for each frame.
out_w, ow
Set the output width expression, default value is "iw". This
expression is evaluated just once during configuration.
out_h, oh
Set the output height expression, default value is "ih". This
expression is evaluated just once during configuration.
bilinear
Enable bilinear interpolation if set to 1, a value of 0 disables
it. Default value is 1.
fillcolor, c
Set the color used to fill the output area not covered by the
rotated image. For the general syntax of this option, check the
"Color" section in the ffmpeg-utils manual. If the special value
"none" is selected then no background is printed (useful for
example if the background is never shown).
Default value is "black".
The expressions for the angle and the output size can contain the
following constants and functions:
n sequential number of the input frame, starting from 0. It is always
NAN before the first frame is filtered.
t time in seconds of the input frame, it is set to 0 when the filter
is configured. It is always NAN before the first frame is filtered.
hsub
vsub
horizontal and vertical chroma subsample values. For example for
the pixel format "yuv422p" hsub is 2 and vsub is 1.
in_w, iw
in_h, ih
the input video width and height
out_w, ow
out_h, oh
the output width and height, that is the size of the padded area as
specified by the width and height expressions
rotw(a)
roth(a)
the minimal width/height required for completely containing the
input video rotated by a radians.
These are only available when computing the out_w and out_h
expressions.
Examples
o Rotate the input by PI/6 radians clockwise:
rotate=PI/6
o Rotate the input by PI/6 radians counter-clockwise:
rotate=-PI/6
o Rotate the input by 45 degrees clockwise:
rotate=45*PI/180
o Apply a constant rotation with period T, starting from an angle of
PI/3:
rotate=PI/3+2*PI*t/T
o Make the input video rotation oscillating with a period of T
seconds and an amplitude of A radians:
rotate=A*sin(2*PI/T*t)
o Rotate the video, output size is chosen so that the whole rotating
input video is always completely contained in the output:
rotate='2*PI*t:ow=hypot(iw,ih):oh=ow'
o Rotate the video, reduce the output size so that no background is
ever shown:
rotate=2*PI*t:ow='min(iw,ih)/sqrt(2)':oh=ow:c=none
Commands
The filter supports the following commands:
a, angle
Set the angle expression. The command accepts the same syntax of
the corresponding option.
If the specified expression is not valid, it is kept at its current
value.
sab
Apply Shape Adaptive Blur.
The filter accepts the following options:
luma_radius, lr
Set luma blur filter strength, must be a value in range 0.1-4.0,
default value is 1.0. A greater value will result in a more blurred
image, and in slower processing.
luma_pre_filter_radius, lpfr
Set luma pre-filter radius, must be a value in the 0.1-2.0 range,
default value is 1.0.
luma_strength, ls
Set luma maximum difference between pixels to still be considered,
must be a value in the 0.1-100.0 range, default value is 1.0.
chroma_radius, cr
Set chroma blur filter strength, must be a value in range -0.9-4.0.
A greater value will result in a more blurred image, and in slower
processing.
chroma_pre_filter_radius, cpfr
Set chroma pre-filter radius, must be a value in the -0.9-2.0
range.
chroma_strength, cs
Set chroma maximum difference between pixels to still be
considered, must be a value in the -0.9-100.0 range.
Each chroma option value, if not explicitly specified, is set to the
corresponding luma option value.
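For example, to apply a stronger shape-adaptive blur to the luma plane
(values chosen purely for illustration, within the documented ranges):
sab=luma_radius=2.0:luma_strength=8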
scale
Scale (resize) the input video, using the libswscale library.
The scale filter forces the output display aspect ratio to be the same
as the input, by changing the output sample aspect ratio.
If the input image format is different from the format requested by the
next filter, the scale filter will convert the input to the requested
format.
Options
The filter accepts the following options, or any of the options
supported by the libswscale scaler.
See the ffmpeg-scaler manual for the complete list of scaler options.
width, w
height, h
Set the output video dimension expression. Default value is the
input dimension.
If the width or w value is 0, the input width is used for the
output. If the height or h value is 0, the input height is used for
the output.
If one and only one of the values is -n with n >= 1, the scale
filter will use a value that maintains the aspect ratio of the
input image, calculated from the other specified dimension. After
that it will, however, make sure that the calculated dimension is
divisible by n and adjust the value if necessary.
If both values are -n with n >= 1, the behavior will be identical
to both values being set to 0 as previously detailed.
See below for the list of accepted constants for use in the
dimension expression.
eval
Specify when to evaluate the width and height expressions. It accepts
the following values:
init
Only evaluate expressions once during the filter initialization
or when a command is processed.
frame
Evaluate expressions for each incoming frame.
Default value is init.
interl
Set the interlacing mode. It accepts the following values:
1 Force interlaced aware scaling.
0 Do not apply interlaced scaling.
-1 Select interlaced aware scaling depending on whether the source
frames are flagged as interlaced or not.
Default value is 0.
flags
Set libswscale scaling flags. See the ffmpeg-scaler manual for the
complete list of values. If not explicitly specified the filter
applies the default flags.
param0, param1
Set libswscale input parameters for scaling algorithms that need
them. See the ffmpeg-scaler manual for the complete documentation.
If not explicitly specified the filter applies empty parameters.
size, s
Set the video size. For the syntax of this option, check the "Video
size" section in the ffmpeg-utils manual.
in_color_matrix
out_color_matrix
Set in/output YCbCr color space type.
This allows the autodetected value to be overridden, as well as
forcing a specific value for the output and encoder.
If not specified, the color space type depends on the pixel format.
Possible values:
auto
Choose automatically.
bt709
Format conforming to International Telecommunication Union
(ITU) Recommendation BT.709.
fcc Set color space conforming to the United States Federal
Communications Commission (FCC) Code of Federal Regulations
(CFR) Title 47 (2003) 73.682 (a).
bt601
Set color space conforming to:
o ITU Radiocommunication Sector (ITU-R) Recommendation BT.601
o ITU-R Rec. BT.470-6 (1998) Systems B, B1, and G
o Society of Motion Picture and Television Engineers (SMPTE)
ST 170:2004
smpte240m
Set color space conforming to SMPTE ST 240:1999.
in_range
out_range
Set in/output YCbCr sample range.
This allows the autodetected value to be overridden, as well as
forcing a specific value for the output and encoder. If not
specified, the range depends on the pixel format. Possible values:
auto/unknown
Choose automatically.
jpeg/full/pc
Set full range (0-255 in case of 8-bit luma).
mpeg/limited/tv
Set "MPEG" range (16-235 in case of 8-bit luma).
force_original_aspect_ratio
Enable decreasing or increasing output video width or height if
necessary to keep the original aspect ratio. Possible values:
disable
Scale the video as specified and disable this feature.
decrease
The output video dimensions will automatically be decreased if
needed.
increase
The output video dimensions will automatically be increased if
needed.
One useful instance of this option is that when you know a specific
device's maximum allowed resolution, you can use this to limit the
output video to that, while retaining the aspect ratio. For
example, device A allows 1280x720 playback, and your video is
1920x800. Using this option (set it to decrease) and specifying
1280x720 to the command line makes the output 1280x533.
Please note that this is different from specifying -1 for w or h;
you still need to specify the output resolution for this option to
work.
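As a sketch of the device-limit case described above (the 1280x720
target is illustrative):
scale=1280:720:force_original_aspect_ratio=decrease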
The values of the w and h options are expressions containing the
following constants:
in_w
in_h
The input width and height
iw
ih These are the same as in_w and in_h.
out_w
out_h
The output (scaled) width and height
ow
oh These are the same as out_w and out_h
a The same as iw / ih
sar input sample aspect ratio
dar The input display aspect ratio. Calculated from "(iw / ih) * sar".
hsub
vsub
horizontal and vertical input chroma subsample values. For example
for the pixel format "yuv422p" hsub is 2 and vsub is 1.
ohsub
ovsub
horizontal and vertical output chroma subsample values. For example,
for the pixel format "yuv422p" ohsub is 2 and ovsub is 1.
Examples
o Scale the input video to a size of 200x100
scale=w=200:h=100
This is equivalent to:
scale=200:100
or:
scale=200x100
o Specify a size abbreviation for the output size:
scale=qcif
which can also be written as:
scale=size=qcif
o Scale the input to 2x:
scale=w=2*iw:h=2*ih
o The above is the same as:
scale=2*in_w:2*in_h
o Scale the input to 2x with forced interlaced scaling:
scale=2*iw:2*ih:interl=1
o Scale the input to half size:
scale=w=iw/2:h=ih/2
o Increase the width, and set the height to the same size:
scale=3/2*iw:ow
o Seek Greek harmony:
scale=iw:1/PHI*iw
scale=ih*PHI:ih
o Increase the height, and set the width to 3/2 of the height:
scale=w=3/2*oh:h=3/5*ih
o Increase the size, making the size a multiple of the chroma
subsample values:
scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
o Increase the width to a maximum of 500 pixels, keeping the same
aspect ratio as the input:
scale=w='min(500\, iw*3/2)':h=-1
o Make pixels square by combining scale and setsar:
scale='trunc(ih*dar):ih',setsar=1/1
o Make pixels square by combining scale and setsar, making sure the
resulting resolution is even (required by some codecs):
scale='trunc(ih*dar/2)*2:trunc(ih/2)*2',setsar=1/1
Commands
This filter supports the following commands:
width, w
height, h
Set the output video dimension expression. The command accepts the
same syntax of the corresponding option.
If the specified expression is not valid, it is kept at its current
value.
scale_npp
Use the NVIDIA Performance Primitives (libnpp) to perform scaling
and/or pixel format conversion on CUDA video frames. Setting the output
width and height works in the same way as for the scale filter.
The following additional options are accepted:
format
The pixel format of the output CUDA frames. If set to the string
"same" (the default), the input format will be kept. Note that
automatic format negotiation and conversion is not yet supported
for hardware frames.
interp_algo
The interpolation algorithm used for resizing. One of the
following:
nn Nearest neighbour.
linear
cubic
cubic2p_bspline
2-parameter cubic (B=1, C=0)
cubic2p_catmullrom
2-parameter cubic (B=0, C=1/2)
cubic2p_b05c03
2-parameter cubic (B=1/2, C=3/10)
super
Supersampling
lanczos
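A possible invocation, assuming an ffmpeg build with CUDA and libnpp
support (the file names and the NVENC encoder choice are
illustrative):
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -vf scale_npp=1280:720:interp_algo=lanczos -c:v h264_nvenc output.mp4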
scale2ref
Scale (resize) the input video, based on a reference video.
See the scale filter for available options, scale2ref supports the same
but uses the reference video instead of the main input as basis.
scale2ref also supports the following additional constants for the w
and h options:
main_w
main_h
The main input video's width and height
main_a
The same as main_w / main_h
main_sar
The main input video's sample aspect ratio
main_dar, mdar
The main input video's display aspect ratio. Calculated from
"(main_w / main_h) * main_sar".
main_hsub
main_vsub
The main input video's horizontal and vertical chroma subsample
values. For example for the pixel format "yuv422p" hsub is 2 and
vsub is 1.
Examples
o Scale a subtitle stream (b) to match the main video (a) in size
before overlaying
'scale2ref[b][a];[a][b]overlay'
selectivecolor
Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of
colors (such as "reds", "yellows", "greens", "cyans", ...). The
adjustment range is defined by the "purity" of the color (that is, how
saturated it already is).
This filter is similar to the Adobe Photoshop Selective Color tool.
The filter accepts the following options:
correction_method
Select color correction method.
Available values are:
absolute
Specified adjustments are applied "as-is" (added/subtracted to
original pixel component value).
relative
Specified adjustments are relative to the original component
value.
Default is "absolute".
reds
Adjustments for red pixels (pixels where the red component is the
maximum)
yellows
Adjustments for yellow pixels (pixels where the blue component is
the minimum)
greens
Adjustments for green pixels (pixels where the green component is
the maximum)
cyans
Adjustments for cyan pixels (pixels where the red component is the
minimum)
blues
Adjustments for blue pixels (pixels where the blue component is the
maximum)
magentas
Adjustments for magenta pixels (pixels where the green component is
the minimum)
whites
Adjustments for white pixels (pixels where all components are
greater than 128)
neutrals
Adjustments for all pixels except pure black and pure white
blacks
Adjustments for black pixels (pixels where all components are
lesser than 128)
psfile
Specify a Photoshop selective color file (".asv") to import the
settings from.
All the adjustment settings (reds, yellows, ...) accept up to 4 space
separated floating point adjustment values in the [-1,1] range,
respectively to adjust the amount of cyan, magenta, yellow and black
for the pixels of its range.
Examples
o Increase cyan by 50% and reduce yellow by 33% in green areas, and
increase magenta by 27% in blue areas:
selectivecolor=greens=.5 0 -.33 0:blues=0 .27
o Use a Photoshop selective color preset:
selectivecolor=psfile=MySelectiveColorPresets/Misty.asv
separatefields
The "separatefields" takes a frame-based video input and splits each
frame into its components fields, producing a new half height clip with
twice the frame rate and twice the frame count.
This filter use field-dominance information in frame to decide which of
each pair of fields to place first in the output. If it gets it wrong
use setfield filter before "separatefields" filter.
setdar, setsar
The "setdar" filter sets the Display Aspect Ratio for the filter output
video.
This is done by changing the specified Sample (aka Pixel) Aspect Ratio,
according to the following equation:
DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR
Keep in mind that the "setdar" filter does not modify the pixel
dimensions of the video frame. Also, the display aspect ratio set by
this filter may be changed by later filters in the filterchain, e.g. in
case of scaling or if another "setdar" or a "setsar" filter is applied.
The "setsar" filter sets the Sample (aka Pixel) Aspect Ratio for the
filter output video.
Note that as a consequence of the application of this filter, the
output display aspect ratio will change according to the equation
above.
Keep in mind that the sample aspect ratio set by the "setsar" filter
may be changed by later filters in the filterchain, e.g. if another
"setsar" or a "setdar" filter is applied.
It accepts the following parameters:
r, ratio, dar ("setdar" only), sar ("setsar" only)
Set the aspect ratio used by the filter.
The parameter can be a floating point number string, an expression,
or a string of the form num:den, where num and den are the
numerator and denominator of the aspect ratio. If the parameter is
not specified, it is assumed the value "0". In case the form
"num:den" is used, the ":" character should be escaped.
max Set the maximum integer value to use for expressing numerator and
denominator when reducing the expressed aspect ratio to a rational.
Default value is 100.
The parameter sar is an expression containing the following constants:
E, PI, PHI
These are approximated values for the mathematical constants e
(Euler's number), pi (Greek pi), and phi (the golden ratio).
w, h
The input width and height.
a These are the same as w / h.
sar The input sample aspect ratio.
dar The input display aspect ratio. It is the same as (w / h) * sar.
hsub, vsub
Horizontal and vertical chroma subsample values. For example, for
the pixel format "yuv422p" hsub is 2 and vsub is 1.
Examples
o To change the display aspect ratio to 16:9, specify one of the
following:
setdar=dar=1.77777
setdar=dar=16/9
o To change the sample aspect ratio to 10:11, specify:
setsar=sar=10/11
o To set a display aspect ratio of 16:9, and specify a maximum
integer value of 1000 in the aspect ratio reduction, use the
command:
setdar=ratio=16/9:max=1000
setfield
Force field for the output video frame.
The "setfield" filter marks the interlace type field for the output
frames. It does not change the input frame, but only sets the
corresponding property, which affects how the frame is treated by
following filters (e.g. "fieldorder" or "yadif").
The filter accepts the following options:
mode
Available values are:
auto
Keep the same field property.
bff Mark the frame as bottom-field-first.
tff Mark the frame as top-field-first.
prog
Mark the frame as progressive.
setparams
Force frame parameter for the output video frame.
The "setparams" filter marks interlace and color range for the output
frames. It does not change the input frame, but only sets the
corresponding property, which affects how the frame is treated by
filters/encoders.
field_mode
Available values are:
auto
Keep the same field property (default).
bff Mark the frame as bottom-field-first.
tff Mark the frame as top-field-first.
prog
Mark the frame as progressive.
range
Available values are:
auto
Keep the same color range property (default).
unspecified, unknown
Mark the frame as unspecified color range.
limited, tv, mpeg
Mark the frame as limited range.
full, pc, jpeg
Mark the frame as full range.
color_primaries
Set the color primaries. Available values are:
auto
Keep the same color primaries property (default).
bt709
unknown
bt470m
bt470bg
smpte170m
smpte240m
film
bt2020
smpte428
smpte431
smpte432
jedec-p22
color_trc
Set the color transfer characteristics. Available values are:
auto
Keep the same color trc property (default).
bt709
unknown
bt470m
bt470bg
smpte170m
smpte240m
linear
log100
log316
iec61966-2-4
bt1361e
iec61966-2-1
bt2020-10
bt2020-12
smpte2084
smpte428
arib-std-b67
colorspace
Set the colorspace. Available values are:
auto
Keep the same colorspace property (default).
gbr
bt709
unknown
fcc
bt470bg
smpte170m
smpte240m
ycgco
bt2020nc
bt2020c
smpte2085
chroma-derived-nc
chroma-derived-c
ictcp
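For example, to mark frames as progressive, limited-range BT.709 (a
typical tag set for HD content; the values are illustrative):
setparams=field_mode=prog:range=tv:color_primaries=bt709:color_trc=bt709:colorspace=bt709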
showinfo
Show a line containing various information for each input video frame.
The input video is not modified.
The shown line contains a sequence of key/value pairs of the form
key:value.
The following values are shown in the output:
n The (sequential) number of the input frame, starting from 0.
pts The Presentation TimeStamp of the input frame, expressed as a
number of time base units. The time base unit depends on the filter
input pad.
pts_time
The Presentation TimeStamp of the input frame, expressed as a
number of seconds.
pos The position of the frame in the input stream, or -1 if this
information is unavailable and/or meaningless (for example in case
of synthetic video).
fmt The pixel format name.
sar The sample aspect ratio of the input frame, expressed in the form
num/den.
s The size of the input frame. For the syntax of this option, check
the "Video size" section in the ffmpeg-utils manual.
i The type of interlaced mode ("P" for "progressive", "T" for top
field first, "B" for bottom field first).
iskey
This is 1 if the frame is a key frame, 0 otherwise.
type
The picture type of the input frame ("I" for an I-frame, "P" for a
P-frame, "B" for a B-frame, or "?" for an unknown type). Also
refer to the documentation of the "AVPictureType" enum and of the
"av_get_picture_type_char" function defined in libavutil/avutil.h.
checksum
The Adler-32 checksum (printed in hexadecimal) of all the planes of
the input frame.
plane_checksum
The Adler-32 checksum (printed in hexadecimal) of each plane of the
input frame, expressed in the form "[c0 c1 c2 c3]".
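For example, to print the per-frame information while discarding the
output (input.mp4 is a placeholder):
ffmpeg -i input.mp4 -vf showinfo -f null -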
showpalette
Displays the 256 colors palette of each frame. This filter is only
relevant for pal8 pixel format frames.
It accepts the following option:
s Set the size of the box used to represent one palette color entry.
Default is 30 (for a "30x30" pixel box).
shuffleframes
Reorder and/or duplicate and/or drop video frames.
It accepts the following parameters:
mapping
Set the destination indexes of input frames. This is a space- or
'|'-separated list of indexes that maps input frames to output
frames. The number of indexes also sets the maximal value that each
index may have. The '-1' index has a special meaning: it drops the
frame.
The first frame has the index 0. The default is to keep the input
unchanged.
Examples
o Swap second and third frame of every three frames of the input:
ffmpeg -i INPUT -vf "shuffleframes=0 2 1" OUTPUT
o Swap 10th and 1st frame of every ten frames of the input:
ffmpeg -i INPUT -vf "shuffleframes=9 1 2 3 4 5 6 7 8 0" OUTPUT
shuffleplanes
Reorder and/or duplicate video planes.
It accepts the following parameters:
map0
The index of the input plane to be used as the first output plane.
map1
The index of the input plane to be used as the second output plane.
map2
The index of the input plane to be used as the third output plane.
map3
The index of the input plane to be used as the fourth output plane.
The first plane has the index 0. The default is to keep the input
unchanged.
Examples
o Swap the second and third planes of the input:
ffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
signalstats
Evaluate various visual metrics that assist in determining issues
associated with the digitization of analog video media.
By default the filter will log these metadata values:
YMIN
Display the minimal Y value contained within the input frame.
Expressed in range of [0-255].
YLOW
Display the Y value at the 10th percentile within the input frame.
Expressed in range of [0-255].
YAVG
Display the average Y value within the input frame. Expressed in
range of [0-255].
YHIGH
Display the Y value at the 90th percentile within the input frame.
Expressed in range of [0-255].
YMAX
Display the maximum Y value contained within the input frame.
Expressed in range of [0-255].
UMIN
Display the minimal U value contained within the input frame.
Expressed in range of [0-255].
ULOW
Display the U value at the 10th percentile within the input frame.
Expressed in range of [0-255].
UAVG
Display the average U value within the input frame. Expressed in
range of [0-255].
UHIGH
Display the U value at the 90th percentile within the input frame.
Expressed in range of [0-255].
UMAX
Display the maximum U value contained within the input frame.
Expressed in range of [0-255].
VMIN
Display the minimal V value contained within the input frame.
Expressed in range of [0-255].
VLOW
Display the V value at the 10th percentile within the input frame.
Expressed in range of [0-255].
VAVG
Display the average V value within the input frame. Expressed in
range of [0-255].
VHIGH
Display the V value at the 90th percentile within the input frame.
Expressed in range of [0-255].
VMAX
Display the maximum V value contained within the input frame.
Expressed in range of [0-255].
SATMIN
Display the minimal saturation value contained within the input
frame. Expressed in range of [0-~181.02].
SATLOW
Display the saturation value at the 10th percentile within the input
frame. Expressed in range of [0-~181.02].
SATAVG
Display the average saturation value within the input frame.
Expressed in range of [0-~181.02].
SATHIGH
Display the saturation value at the 90th percentile within the input
frame. Expressed in range of [0-~181.02].
SATMAX
Display the maximum saturation value contained within the input
frame. Expressed in range of [0-~181.02].
HUEMED
Display the median value for hue within the input frame. Expressed
in range of [0-360].
HUEAVG
Display the average value for hue within the input frame. Expressed
in range of [0-360].
YDIF
Display the average of sample value difference between all values
of the Y plane in the current frame and corresponding values of the
previous input frame. Expressed in range of [0-255].
UDIF
Display the average of sample value difference between all values
of the U plane in the current frame and corresponding values of the
previous input frame. Expressed in range of [0-255].
VDIF
Display the average of sample value difference between all values
of the V plane in the current frame and corresponding values of the
previous input frame. Expressed in range of [0-255].
YBITDEPTH
Display bit depth of Y plane in current frame. Expressed in range
of [0-16].
UBITDEPTH
Display bit depth of U plane in current frame. Expressed in range
of [0-16].
VBITDEPTH
Display bit depth of V plane in current frame. Expressed in range
of [0-16].
The filter accepts the following options:
stat
out stat specifies an additional form of image analysis; out outputs
video with the specified type of pixel highlighted.
Both options accept the following values:
tout
Identify temporal outlier pixels. A temporal outlier is a
pixel unlike the neighboring pixels of the same field. Examples
of temporal outliers include the results of video dropouts,
head clogs, or tape tracking issues.
vrep
Identify vertical line repetition. Vertical line repetition
includes similar rows of pixels within a frame. In born-digital
video vertical line repetition is common, but this pattern is
uncommon in video digitized from an analog source. When it
occurs in video that results from the digitization of an analog
source it can indicate concealment from a dropout compensator.
brng
Identify pixels that fall outside of legal broadcast range.
color, c
Set the highlight color for the out option. The default color is
yellow.
Examples
o Output data of various video metrics:
ffprobe -f lavfi movie=example.mov,signalstats="stat=tout+vrep+brng" -show_frames
o Output specific data about the minimum and maximum values of the Y
plane per frame:
ffprobe -f lavfi movie=example.mov,signalstats -show_entries frame_tags=lavfi.signalstats.YMAX,lavfi.signalstats.YMIN
o Playback video while highlighting pixels that are outside of
broadcast range in red.
ffplay example.mov -vf signalstats="out=brng:color=red"
o Playback video with signalstats metadata drawn over the frame.
ffplay example.mov -vf signalstats=stat=brng+vrep+tout,drawtext=fontfile=FreeSerif.ttf:textfile=signalstat_drawtext.txt
The contents of signalstat_drawtext.txt used in the command are:
time %{pts:hms}
Y (%{metadata:lavfi.signalstats.YMIN}-%{metadata:lavfi.signalstats.YMAX})
U (%{metadata:lavfi.signalstats.UMIN}-%{metadata:lavfi.signalstats.UMAX})
V (%{metadata:lavfi.signalstats.VMIN}-%{metadata:lavfi.signalstats.VMAX})
saturation maximum: %{metadata:lavfi.signalstats.SATMAX}
signature
Calculates the MPEG-7 Video Signature. The filter can handle more than
one input. In this case the matching between the inputs can
additionally be calculated. The filter always passes through the first
input. The signature of each stream can be written into a file.
It accepts the following options:
detectmode
Enable or disable the matching process.
Available values are:
off Disable the calculation of a matching (default).
full
Calculate the matching for the whole video and output whether
the whole video matches or only parts.
fast
Calculate only until a matching is found or the video ends.
Should be faster in some cases.
nb_inputs
Set the number of inputs. The option value must be a non negative
integer. Default value is 1.
filename
Set the path to which the output is written. If there is more than
one input, the path must be a prototype, i.e. must contain %d or
%0nd (where n is a positive integer), that will be replaced with
the input number. If no filename is specified, no output will be
written. This is the default.
format
Choose the output format.
Available values are:
binary
Use the binary representation (default).
xml Use the XML representation.
th_d
Set threshold to detect one word as similar. The option value must
be an integer greater than zero. The default value is 9000.
th_dc
Set threshold to detect all words as similar. The option value must
be an integer greater than zero. The default value is 60000.
th_xh
Set threshold to detect frames as similar. The option value must be
an integer greater than zero. The default value is 116.
th_di
Set the minimum length of a sequence in frames to recognize it as
matching sequence. The option value must be a non negative integer
value. The default value is 0.
th_it
Set the minimum ratio of matching frames to all frames that must be
reached. The option value must be a double value between 0 and 1.
The default value is 0.5.
Examples
o To calculate the signature of an input video and store it in
signature.bin:
ffmpeg -i input.mkv -vf signature=filename=signature.bin -map 0:v -f null -
o To detect whether two videos match and store the signatures in XML
format in signature0.xml and signature1.xml:
ffmpeg -i input1.mkv -i input2.mkv -filter_complex "[0:v][1:v] signature=nb_inputs=2:detectmode=full:format=xml:filename=signature%d.xml" -map :v -f null -
smartblur
Blur the input video without impacting the outlines.
It accepts the following options:
luma_radius, lr
Set the luma radius. The option value must be a float number in the
range [0.1,5.0] that specifies the variance of the Gaussian filter
used to blur the image (slower if larger). Default value is 1.0.
luma_strength, ls
Set the luma strength. The option value must be a float number in
the range [-1.0,1.0] that configures the blurring. A value included
in [0.0,1.0] will blur the image whereas a value included in
[-1.0,0.0] will sharpen the image. Default value is 1.0.
luma_threshold, lt
Set the luma threshold used as a coefficient to determine whether a
pixel should be blurred or not. The option value must be an integer
in the range [-30,30]. A value of 0 will filter all the image, a
value included in [0,30] will filter flat areas and a value
included in [-30,0] will filter edges. Default value is 0.
chroma_radius, cr
Set the chroma radius. The option value must be a float number in
the range [0.1,5.0] that specifies the variance of the Gaussian
filter used to blur the image (slower if larger). Default value is
luma_radius.
chroma_strength, cs
Set the chroma strength. The option value must be a float number in
the range [-1.0,1.0] that configures the blurring. A value included
in [0.0,1.0] will blur the image whereas a value included in
[-1.0,0.0] will sharpen the image. Default value is luma_strength.
chroma_threshold, ct
Set the chroma threshold used as a coefficient to determine whether
a pixel should be blurred or not. The option value must be an
integer in the range [-30,30]. A value of 0 will filter the whole
image, a value in [0,30] will filter flat areas and a value in
[-30,0] will filter edges. Default value is luma_threshold.
If a chroma option is not explicitly set, the corresponding luma value
is set.
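For example, to slightly sharpen the image while restricting the
effect to edges (values chosen purely for illustration, within the
documented ranges):
smartblur=ls=-0.5:lt=-10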
ssim
Obtain the SSIM (Structural Similarity Metric) between two input
videos.
This filter takes two input videos; the first input is considered the
"main" source and is passed unchanged to the output. The second input
is used as a "reference" video for computing the SSIM.
Both video inputs must have the same resolution and pixel format for
this filter to work correctly. Also it assumes that both inputs have
the same number of frames, which are compared one by one.
The filter stores the calculated SSIM of each frame.
The description of the accepted parameters follows.
stats_file, f
If specified the filter will use the named file to save the SSIM of
each individual frame. When filename equals "-" the data is sent to
standard output.
The file printed if stats_file is selected contains a sequence of
key/value pairs of the form key:value for each compared couple of
frames.
A description of each shown parameter follows:
n sequential number of the input frame, starting from 1
Y, U, V, R, G, B
SSIM of the compared frames for the component specified by the
suffix.
All SSIM of the compared frames for the whole frame.
dB Same as above but in dB representation.
This filter also supports the framesync options.
For example:
movie=ref_movie.mpg, setpts=PTS-STARTPTS [main];
[main][ref] ssim="stats_file=stats.log" [out]
In this example the input file being processed is compared with the
reference file ref_movie.mpg. The SSIM of each individual frame is
stored in stats.log.
Another example, computing both psnr and ssim at the same time:
ffmpeg -i main.mpg -i ref.mpg -lavfi "ssim;[0:v][1:v]psnr" -f null -
stereo3d
Convert between different stereoscopic image formats.
The filter accepts the following options:
in Set stereoscopic image format of input.
Available values for input image formats are:
sbsl
side by side parallel (left eye left, right eye right)
sbsr
side by side crosseye (right eye left, left eye right)
sbs2l
side by side parallel with half width resolution (left eye
left, right eye right)
sbs2r
side by side crosseye with half width resolution (right eye
left, left eye right)
abl above-below (left eye above, right eye below)
abr above-below (right eye above, left eye below)
ab2l
above-below with half height resolution (left eye above, right
eye below)
ab2r
above-below with half height resolution (right eye above, left
eye below)
al alternating frames (left eye first, right eye second)
ar alternating frames (right eye first, left eye second)
irl interleaved rows (left eye has top row, right eye starts on
next row)
irr interleaved rows (right eye has top row, left eye starts on
next row)
icl interleaved columns, left eye first
icr interleaved columns, right eye first
Default value is sbsl.
out Set stereoscopic image format of output.
sbsl
side by side parallel (left eye left, right eye right)
sbsr
side by side crosseye (right eye left, left eye right)
sbs2l
side by side parallel with half width resolution (left eye
left, right eye right)
sbs2r
side by side crosseye with half width resolution (right eye
left, left eye right)
abl above-below (left eye above, right eye below)
abr above-below (right eye above, left eye below)
ab2l
above-below with half height resolution (left eye above, right
eye below)
ab2r
above-below with half height resolution (right eye above, left
eye below)
al alternating frames (left eye first, right eye second)
ar alternating frames (right eye first, left eye second)
irl interleaved rows (left eye has top row, right eye starts on
next row)
irr interleaved rows (right eye has top row, left eye starts on
next row)
arbg
anaglyph red/blue gray (red filter on left eye, blue filter on
right eye)
argg
anaglyph red/green gray (red filter on left eye, green filter
on right eye)
arcg
anaglyph red/cyan gray (red filter on left eye, cyan filter on
right eye)
arch
anaglyph red/cyan half colored (red filter on left eye, cyan
filter on right eye)
arcc
anaglyph red/cyan color (red filter on left eye, cyan filter on
right eye)
arcd
anaglyph red/cyan color optimized with the least squares
projection of dubois (red filter on left eye, cyan filter on
right eye)
agmg
anaglyph green/magenta gray (green filter on left eye, magenta
filter on right eye)
agmh
anaglyph green/magenta half colored (green filter on left eye,
magenta filter on right eye)
agmc
anaglyph green/magenta colored (green filter on left eye,
magenta filter on right eye)
agmd
anaglyph green/magenta color optimized with the least squares
projection of dubois (green filter on left eye, magenta filter
on right eye)
aybg
anaglyph yellow/blue gray (yellow filter on left eye, blue
filter on right eye)
aybh
anaglyph yellow/blue half colored (yellow filter on left eye,
blue filter on right eye)
aybc
anaglyph yellow/blue colored (yellow filter on left eye, blue
filter on right eye)
aybd
anaglyph yellow/blue color optimized with the least squares
projection of dubois (yellow filter on left eye, blue filter on
right eye)
ml mono output (left eye only)
mr mono output (right eye only)
chl checkerboard, left eye first
chr checkerboard, right eye first
icl interleaved columns, left eye first
icr interleaved columns, right eye first
hdmi
HDMI frame pack
Default value is arcd.
Examples
o Convert input video from side by side parallel to anaglyph
yellow/blue dubois:
stereo3d=sbsl:aybd
o Convert input video from above-below (left eye above, right eye
below) to side by side crosseye:
stereo3d=abl:sbsr
streamselect, astreamselect
Select video or audio streams.
The filter accepts the following options:
inputs
Set number of inputs. Default is 2.
map Set input indexes to remap to outputs.
Commands
The "streamselect" and "astreamselect" filter supports the following
commands:
map Set input indexes to remap to outputs.
Examples
o Select the 1st stream for the first 5 seconds, and the 2nd stream
for the rest of the time:
sendcmd='5.0 streamselect map 1',streamselect=inputs=2:map=0
o Same as above, but for audio:
asendcmd='5.0 astreamselect map 1',astreamselect=inputs=2:map=0
sobel
Apply the Sobel operator to the input video stream.
The filter accepts the following option:
planes
Set which planes will be processed; unprocessed planes will be
copied. The default value is 0xf, so all planes will be processed.
scale
Set the value that will be multiplied with the filtered result.
delta
Set the value that will be added to the filtered result.
spp
Apply a simple postprocessing filter that compresses and decompresses
the image at several (or, in the case of quality level 6, all) shifts
and averages the results.
The filter accepts the following options:
quality
Set quality. This option defines the number of levels for
averaging. It accepts an integer in the range 0-6. If set to 0, the
filter will have no effect. A value of 6 means the highest quality.
For each increment of this value the speed drops by a factor of
approximately 2. Default value is 3.
qp Force a constant quantization parameter. If not set, the filter
will use the QP from the video stream (if available).
mode
Set thresholding mode. Available modes are:
hard
Set hard thresholding (default).
soft
Set soft thresholding (better de-ringing effect, but likely
blurrier).
use_bframe_qp
Enable the use of the QP from the B-Frames if set to 1. Using this
option may cause flicker, since B-Frames often have a larger QP.
Default is 0 (not enabled).
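For example, to run the filter at the highest quality level with soft
thresholding (slower, but with a better de-ringing effect):
spp=quality=6:mode=soft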
sr
Scale the input by applying one of the super-resolution methods based
on convolutional neural networks. Supported models:
o Super-Resolution Convolutional Neural Network model (SRCNN). See
.
o Efficient Sub-Pixel Convolutional Neural Network model (ESPCN).
See .
Training scripts as well as scripts for model generation are provided
in the repository at .
The filter accepts the following options:
dnn_backend
Specify which DNN backend to use for model loading and execution.
This option accepts the following values:
native
Native implementation of DNN loading and execution.
tensorflow
TensorFlow backend. To enable this backend you need to install
the TensorFlow for C library (see