A codec is the compression format in which your video is encoded. Vimeo accepts most major video codecs, but for best results we recommend using H.264. If you're uploading High Definition (HD) video, choose the High Profile H.264 setting instead of Main Profile.
In computer science and information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost. Lossy compression reduces bits by identifying unnecessary information and removing it. The process of reducing the size of a data file is popularly referred to as data compression, although its formal name is source coding (coding done at the source of the data before it is stored or transmitted).
Compression is useful because it reduces resource usage, such as data storage space or transmission capacity. Because compressed data must be decompressed before it can be used, this extra processing imposes computational or other costs; the technique is far from a free lunch.
Data compression is subject to a space-time complexity trade-off. For instance, a compression
scheme for video may require expensive hardware for
the video to be decompressed fast enough to be viewed as it is being
decompressed, and the option to decompress the video in full before watching it
may be inconvenient or require additional storage. The design of data
compression schemes involves trade-offs among various factors, including the
degree of compression, the amount of distortion introduced (e.g., when
using lossy data compression), and the computational
resources required to compress and decompress the data.
New alternatives to traditional systems (which sample at full resolution, then compress) provide efficient resource usage based on the principles of compressed sensing. Compressed sensing techniques circumvent the need for subsequent data compression by taking far fewer samples, exploiting the signal's sparsity in a cleverly chosen basis.
Lossless data
compression algorithms usually
exploit statistical
redundancy to
represent data more concisely without losing information, so that the process is
reversible. Lossless compression is possible because most real-world data has
statistical redundancy. For example, an image may have areas of colour that do
not change over several pixels; instead of coding "red pixel, red pixel,
..." the data may be encoded as "279 red pixels". This is a
basic example of run-length encoding;
there are many schemes to reduce file size by eliminating redundancy.
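To make the run-length idea concrete, here is a minimal Python sketch (the function names are illustrative, not from any particular library); it collapses runs of identical values into (value, count) pairs and expands them back:

def rle_encode(values):
    # Collapse runs of identical values into (value, run_length) pairs.
    runs = []
    for value in values:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

def rle_decode(runs):
    # Expand the (value, run_length) pairs back to the original sequence.
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = ["red"] * 279 + ["blue"] * 3
encoded = rle_encode(row)
print(encoded)                       # [('red', 279), ('blue', 3)]
assert rle_decode(encoded) == row

A run-length coder only wins when the data really does contain long runs; on input with no repetition the (value, count) pairs are larger than the original.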
The Lempel–Ziv (LZ)
compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for
decompression speed and compression ratio, but compression can be slow. DEFLATE
is used in PKZIP, Gzip and PNG. LZW (Lempel–Ziv–Welch) is used in GIF images. Also noteworthy is the LZR
(Lempel–Ziv–Renau) algorithm, which serves as the basis for the Zip method.
LZ methods use a table-based compression model where table entries are
substituted for repeated strings of data. For most LZ methods, this table is
generated dynamically from earlier data in the input. The table itself is often Huffman encoded (e.g., SHRI, LZX). A current LZ-based
coding scheme that performs well is LZX, used in Microsoft's CAB format.
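LZW is the simplest of these table-based schemes to show end to end. The sketch below is illustrative only: it keeps codes as plain Python integers and skips the bit packing a real implementation would do, but it shows how the table of repeated strings is built dynamically from earlier input on both the encoding and decoding side:

def lzw_encode(data):
    # Start with single-character entries; longer strings are added as input is read.
    table = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for ch in data:
        candidate = current + ch
        if candidate in table:
            current = candidate
        else:
            output.append(table[current])   # emit the code for the longest known prefix
            table[candidate] = next_code    # and add the new string to the table
            next_code += 1
            current = ch
    if current:
        output.append(table[current])
    return output

def lzw_decode(codes):
    # Rebuild the same table on the fly while expanding the codes.
    table = {i: chr(i) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    result = [prev]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        else:                               # code not yet in table: the classic LZW corner case
            entry = prev + prev[0]
        result.append(entry)
        table[next_code] = prev + entry[0]
        next_code += 1
        prev = entry
    return "".join(result)

text = "TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(text)
assert lzw_decode(codes) == text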
The best modern lossless compressors use probabilistic models, such as prediction
by partial matching. The Burrows–Wheeler
transform can also be
viewed as an indirect form of statistical modelling.
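The Burrows–Wheeler transform does not compress by itself; it permutes the input so that equal characters cluster together, which downstream coders (run-length encoding, move-to-front, Huffman) can then exploit. Below is a naive, quadratic Python sketch of the transform and its inverse, purely for illustration; practical implementations build the transform from a suffix array:

def bwt(text, end_marker="\x00"):
    # Append a unique end marker (assumed not to occur in the text), sort all
    # rotations of the string, and take the last column.
    s = text + end_marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

def inverse_bwt(transformed, end_marker="\x00"):
    # Rebuild the sorted rotation table one column at a time, then pick the row
    # that ends with the marker: that row is the original string.
    n = len(transformed)
    table = [""] * n
    for _ in range(n):
        table = sorted(transformed[i] + table[i] for i in range(n))
    original = next(row for row in table if row.endswith(end_marker))
    return original.rstrip(end_marker)

s = "banana banana banana"
t = bwt(s)
print(t)                 # equal characters cluster into runs
assert inverse_bwt(t) == s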
The class of grammar-based codes is gaining popularity because such codes can compress highly repetitive text extremely effectively, for instance, biological data collections for the same or related species, large versioned document collections, internet archives, and so on. The basic task of grammar-based codes is to construct a context-free grammar that derives a single string. Sequitur and Re-Pair are practical grammar compression algorithms for which public code is available, and the sketch below illustrates the underlying idea.
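To give a flavour of grammar construction, the following toy Python sketch follows the Re-Pair strategy in its simplest form: repeatedly replace the most frequent adjacent pair of symbols with a fresh nonterminal and record that replacement as a grammar rule. This is a quadratic version written for exposition, not the efficient published algorithm:

from collections import Counter

def re_pair(sequence):
    # Repeatedly replace the most frequent adjacent pair with a new nonterminal.
    seq = list(sequence)
    rules = {}                                # nonterminal -> (left symbol, right symbol)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break                             # no pair repeats; the grammar is finished
        nonterminal = "R%d" % next_id
        next_id += 1
        rules[nonterminal] = pair
        new_seq, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                new_seq.append(nonterminal)
                i += 2
            else:
                new_seq.append(seq[i])
                i += 1
        seq = new_seq
    return seq, rules

def expand(symbols, rules):
    # Derive the original string back from the grammar.
    out = []
    for sym in symbols:
        if sym in rules:
            out.extend(expand(rules[sym], rules))
        else:
            out.append(sym)
    return out

text = "abcabcabcabc"
start, rules = re_pair(text)
print(start, rules)                           # e.g. ['R2', 'R2'] with three rules
assert "".join(expand(start, rules)) == text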
In a further refinement of these techniques,
statistical predictions can be coupled to an algorithm called arithmetic coding. Arithmetic coding, invented
by Jorma Rissanen, and turned into a practical
method by Witten, Neal, and Cleary, achieves superior compression to the better-known Huffman
algorithm and lends itself especially well to adaptive data compression tasks
where the predictions are strongly context-dependent. Arithmetic coding is used
in the bi-level image compression standard JBIG,
and the document compression standard DjVu.
The text entry system Dasher is
an inverse arithmetic coder.
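To make the interval-narrowing idea concrete, here is a toy arithmetic coder in Python that uses exact rational arithmetic and a fixed three-symbol model. A real coder, including the adaptive scheme of Witten, Neal, and Cleary, works with finite-precision integers and context-dependent probabilities; this sketch only shows the core principle:

from fractions import Fraction

# Fixed (non-adaptive) model, purely illustrative.
PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def cumulative(probs):
    # Assign each symbol a sub-interval [low, high) of [0, 1) in proportion to its probability.
    low, table = Fraction(0), {}
    for sym, p in probs.items():
        table[sym] = (low, low + p)
        low += p
    return table

def encode(message, probs=PROBS):
    # Narrow [low, high) once per symbol; any number in the final interval encodes the message.
    table = cumulative(probs)
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        sym_low, sym_high = table[sym]
        width = high - low
        low, high = low + width * sym_low, low + width * sym_high
    return (low + high) / 2                   # one exact rational inside the final interval

def decode(code, length, probs=PROBS):
    # Invert the narrowing: find which symbol's interval contains the code, rescale, repeat.
    # The decoder is told the message length; a real coder would use an end-of-message symbol.
    table = cumulative(probs)
    out = []
    for _ in range(length):
        for sym, (sym_low, sym_high) in table.items():
            if sym_low <= code < sym_high:
                out.append(sym)
                code = (code - sym_low) / (sym_high - sym_low)
                break
    return "".join(out)

msg = "abacab"
code = encode(msg)
assert decode(code, len(msg)) == msg

Symbols with higher probability narrow the interval less, so they cost fewer bits, which is exactly why a good probability model translates directly into better compression.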