The feature was implemented by removing all 0 bytes before the next
start code (and all 0 bytes at the end of the buffer). The problem is
that a slice structure may very well end in 0 bytes. The only way to
determine the end of the slice structure with confidence is
implementing a parser for the whole slice structure.
The result of removing bytes belonging to the slice structure may or
may not end in visual artifacts upon decoding. Other results include
error messages from the decoder (e.g. ffmpeg, which reports errors
such as "slice mismatch" or "motion vectors not available").
I lack the time and motivation to implement a proper slice parser. As
the current behavior is dangerous and just plain wrong, I'm removing
the feature again. It was introduced in release 5.8.0 in response to
issue #734, which will now remain unimplemented.
Fixes #2045.
Otherwise the GUI will generate instructions for mkvmerge for track
IDs that mkvmerge won't use, and mkvmerge in turn aborts with an
error.
Fixes #2039.
The function is available from the "additional modifications" dialog.
For most entries the smallest start timestamp of all chapters on the
same level that is higher than the current chapter's start timestamp
will be used as its end timestamp. If there is no such chapter, the
parent chapter's end timestamp will be used instead.
If the chapters were loaded from a Matroska file, the end timestamp
for the very last chapter on the top-most level will be derived from
the file's duration.
Implements #1887.
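The derivation rule above can be sketched like this (an illustrative Python sketch, not MKVToolNix's actual C++ code; the chapter representation and all names are made up for the example):

```python
def derive_end_timestamps(chapters, parent_end):
    """Assign each chapter an end timestamp: the smallest start timestamp
    among chapters on the same level that is higher than its own start,
    falling back to the parent's end timestamp."""
    starts = sorted(c["start"] for c in chapters)
    for c in chapters:
        later = [s for s in starts if s > c["start"]]
        c["end"] = min(later) if later else parent_end
        derive_end_timestamps(c.get("children", []), c["end"])

chapters = [
    {"start": 0, "children": [{"start": 0}, {"start": 30}]},
    {"start": 60, "children": []},
]
# 120 stands in for the file's duration (used for the very last
# top-level chapter).
derive_end_timestamps(chapters, 120)
```

Note how the nested chapter starting at 30 ends at its parent's end timestamp (60), since no later sibling exists.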
Earlier versions fail to build on both my development system and my
CentOS 7 BuildBot CI instance. Therefore I cannot properly
support that version anymore.
See #2037.
This avoids clashing with Windows' input method for arbitrary
characters by pressing and holding `Alt` and typing the codepoint on
the number pad.
Implements #2034.
Whenever a sequence parameter set or picture parameter set
changes (meaning an SPS with the same ID as an earlier SPS but with
different content is found), all frames queued for order & timestamp
calculation must be flushed. Otherwise frame order calculation will be
based on wrong values for some frames and on correct values for other
frames.
This is the HEVC/h.265 equivalent of #2028.
Whenever a sequence parameter set or picture parameter set
changes (meaning an SPS with the same ID as an earlier SPS but with
different content is found), all frames queued for order & timestamp
calculation must be flushed. Otherwise frame order calculation will be
based on wrong values for some frames and on correct values for other
frames.
Fixes #2028.
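The flush-on-change rule can be sketched like this (illustrative Python, not mkvmerge's actual implementation; the queue class and its names are invented for the example):

```python
class FrameQueue:
    def __init__(self):
        self.active_sps = {}   # SPS ID -> payload bytes seen so far
        self.queued = []       # frames awaiting order/timestamp calculation
        self.flushed = []

    def handle_sps(self, sps_id, payload):
        # An SPS with the same ID as an earlier one but different content
        # invalidates the assumptions behind the queued frames, so they
        # must be flushed first.
        if sps_id in self.active_sps and self.active_sps[sps_id] != payload:
            self.flush()
        self.active_sps[sps_id] = payload

    def queue_frame(self, frame):
        self.queued.append(frame)

    def flush(self):
        self.flushed.extend(self.queued)
        self.queued.clear()

q = FrameQueue()
q.handle_sps(0, b"A")
q.queue_frame("f1")
q.handle_sps(0, b"A")   # identical SPS: no flush
q.queue_frame("f2")
q.handle_sps(0, b"B")   # same ID, different content: flush everything
```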
It seems that dragging & dropping sometimes leaves Qt's internal state
somewhat foobared if at least one of the columns is hidden at the time
the items are dropped. This causes subsequent drag attempts to
segfault in the "start drag" function trying to serialize the standard
items' states.
For some reason iterating over all rows, all columns, for all parents
in the model and requesting the corresponding QStandardItem fixes the
internal state to the point that dragging doesn't crash anymore.
Fixes #2009.
The iconv version on macOS doesn't support that encoding. At the
moment mkvmerge only requires that encoding when reading the station
names from MPEG transport streams, and those are only shown to the
user as a help for deciding which tracks to select. Therefore the
information isn't critical, and failure to decode it properly doesn't
warrant a warning.
Fixes #2023.
Other track types such as DTS will already fetch more PS packets from
the stream if detection fails on the first packet. The same logic is
now applied to (E-)AC-3 tracks: as long as the track parameters cannot
be determined and the probe range hasn't been exceeded, fetch more
data from the stream and retry detection.
This enables track detection even if the first PS packets contain too
little (E-)AC-3 data.
Fixes #2016.
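The retry loop described above amounts to this (an illustrative Python sketch under assumed interfaces, not the actual reader code):

```python
def detect_track(fetch_packet, try_parse, probe_range):
    """Keep fetching PS packets until the track parameters can be
    determined or the probe range is exhausted."""
    consumed, buf = 0, b""
    while consumed < probe_range:
        packet = fetch_packet()
        if packet is None:          # end of stream
            break
        buf += packet
        consumed += len(packet)
        params = try_parse(buf)     # e.g. the (E-)AC-3 header parser
        if params is not None:
            return params
    return None
```

With this loop, detection succeeds even when the first packet contains too little data, because later packets are appended and parsing is retried.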
Otherwise the translations of an existing installation of MKVToolNix
might be used, causing tests such as the file size formatting
functions to fail because translated unit names are used.
Fixes #2011.
SDT = service description table
The information output is a list of three-element maps:
• the program number
• the service provider's name (think TV station name, e.g. "ARD")
• the service's name (think TV channel name, e.g. "arte HD")
The program number corresponds to the track property `program_number`.
See #1990 for the future use case: presenting this information in the
GUI.
Earlier versions of mkvmerge used to detect all tracks in MPEG
transport streams with multiple programs, even though the code wasn't
really implemented & tested for that. However, some tracks (usually
those from the second or a later program) were broken: they might not
contain any data, or only invalid data.
On top of that mkvmerge v12.0.0 contains a fix for #1980 where a track
isn't part of a PMT at all. An unintentional consequence of that fix
was that mkvmerge no longer detected all of the tracks in
multi-program streams. The reason is that in order to detect tracks
not mentioned in a PMT mkvmerge has to do detection by content in the
PES packets. That's only implemented for AAC at the moment. All other
tracks will be blacklisted as soon as they're found.
This wouldn't be a problem if all PMTs of all programs were always
located right at the start of the file with nothing in
between. Unfortunately many files contain track content between
PMTs. So the workflow was:
• mkvmerge finds the first PMT, determines types for tracks listed in it
• mkvmerge now considers the PMT to be found
• While continuing to scan the file mkvmerge encounters content for
tracks not listed in the first PMT, attempts type detection by
content, fails for most and blacklists their PIDs
• Next a second PMT is found; however, the PIDs listed in that PMT may
have already been found and blacklisted before — therefore they won't
be considered anymore
With this fix mkvmerge actively looks for the PMTs for all
programs. Detection by content is only attempted once all PMTs have
been located. That way all tracks will be detected again.
A side effect of either this patch or one of the other ones before is
that the track content is now OK. I don't know exactly why or which
commit actually fixed it.
Fixes parts of #1990.
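The two-phase approach can be sketched like this (illustrative Python, not mkvmerge's actual C++ code; the packet tuples and helper names are invented for the example):

```python
def detect_tracks(packets, detect_by_content):
    """First pass: collect every PMT's elementary PIDs. Only afterwards is
    detection by content attempted for unlisted PIDs."""
    listed, pending = {}, []
    for pid, kind, payload in packets:
        if kind == "PMT":
            listed.update(payload)      # payload: {elementary PID: type}
        else:
            pending.append((pid, payload))
    # Because content detection only runs after all PMTs have been seen,
    # a PID belonging to a later program's PMT is never blacklisted
    # prematurely.
    detected = dict(listed)
    for pid, payload in pending:
        if pid not in detected:
            guessed = detect_by_content(payload)
            if guessed:
                detected[pid] = guessed
    return detected

packets = [
    (0x100, "PMT", {0x101: "h264"}),
    (0x201, "PES", b"\x00"),            # content for a later program's track
    (0x200, "PMT", {0x201: "aac"}),
]
```

In the old workflow PID 0x201 would have been blacklisted when its content appeared before the second PMT; here it is picked up from that PMT instead.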
If a packet is encountered for a PID that's not listed in the PMT,
mkvmerge will now attempt to determine its type by looking at the
first couple of bytes. Only checks for AAC (ADTS only, not LOAS) and
AC-3 are implemented.
Fixes #1980.
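Such a first-bytes check boils down to looking for the sync words (an illustrative sketch; real detection examines more header fields than just these two bytes):

```python
def guess_type(data):
    """Guess a PES payload's type from its first bytes. The ADTS syncword
    is twelve 1-bits (0xFFF); the AC-3 sync word is 0x0B77."""
    if len(data) >= 2:
        if data[0] == 0xFF and (data[1] & 0xF0) == 0xF0:
            return "aac_adts"
        if data[0] == 0x0B and data[1] == 0x77:
            return "ac3"
    return None
```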
In an earlier commit I introduced a workaround for h.264/h.265 files
being mis-detected as MPEG transport streams. That workaround was used
when the first bytes in a file were a valid h.264/h.265 start code.
Unfortunately this prevents the detection of valid MPEG transport
streams if they do indeed contain such a start code.
The fix is to remove the aforementioned workaround. Instead mkvmerge
now requires 333 KB of consecutive MPEG transport stream headers
inside the first 1 MB of the file, which amounts to ~1680 consecutive
headers. This reliably prevents the mis-detection as h.264/h.265 and
still allows for detection of real transport streams even if they
start with a h.264/h.265 start code.
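The "long run of consecutive headers" requirement can be sketched like this (illustrative Python; real transport stream packets are 188 bytes long and start with the sync byte 0x47, and the real probing code also resynchronizes at other byte offsets, which this sketch omits):

```python
def looks_like_ts(data, required_packets, packet_size=188):
    """Return True if the buffer contains a run of at least
    required_packets consecutive TS packets."""
    run = best = 0
    for pos in range(0, len(data) - packet_size + 1, packet_size):
        if data[pos] == 0x47:
            run += 1
            best = max(best, run)
        else:
            run = 0     # a single bad sync byte breaks the run
    return best >= required_packets
```

A stray h.264/h.265 start code at the beginning of a real transport stream cannot produce such a long run, while a genuine transport stream passes easily.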
During file type detection the MPEG TS reader uses the AAC parser to
detect the multiplex mode. Later on it creates the AAC framer which in
turn contains its own instance of an AAC parser. This new instance
does its own multiplex mode detection.
For LOAS/LATM the detection can only succeed if the program mux
configuration is found. If it is not part of the first PES packet,
then the framer's instance may get the detection wrong: it does find
LOAS/LATM headers, but as the program mux configuration hasn't been
parsed yet it'll continue detection and often happen upon ADTS headers
instead.
The second detection is not only harmful, it's also superfluous as the
result is already known to the upper layer (the MPEG TS
reader). Therefore pass that information through from the reader via
the framer to the framer's parser.
Fixes #1957.
In order to fix #1924, I added bitstream restriction handling code in
the VUI parser in commit 2a385ab1ec.
Unfortunately I didn't realize that such code was already present,
just in the wrong place: it was only called if timing information was
present, too.
The result of commit 2a385ab1ec was that
the bitstream restriction was now handled twice if timing information
was present.
The superfluous and wrongly placed copy has been removed. This fixes
#1924 properly. It also fixes #1958.
In released builds the default "play audio file" action that is added
contains the wrong path. Sound files are actually installed in
<MTX_INSTALLATION_DIRECTORY>\data\sounds\… whereas the configuration
added contains <MTX_INSTALLATION_DIRECTORY>\sounds\…
The default has been fixed, and existing configurations will be
updated to the new path if they match the default, incorrect one.
Fixes #1956.
`relocate_written_data` is called in the following situation:
• track headers need to be re-written
• at least one frame has been written already
• the space left right after the track headers does not suffice to
expand the track headers
In such a case all frames that have been written already will be
moved.
However, in certain split modes the current file may actually be a
null I/O object, meaning that the current output is discarded. A null
I/O object doesn't return anything when reading from it, causing an
endless loop in the relocation code which calls `read` as often as
needed until everything's been read — which can never happen with a
null I/O object.
However, it makes no sense to try to actually read the data in such a
case, as it will be discarded anyway. Therefore just avoid trying to
read the data in the first place.
Fixes #1944.
Using only a single one may lead to false positives and consequently
to wrong track parameters, especially if the file was cut at an
arbitrary position.
Fixes the audio-related part of #1938.
The old calculation method assumed that all parameter set arrays are
always present in the HEVCC. This is not the case: arrays without
parameter sets should not be written. Therefore their fixed size
overhead must not be added to the expected list size.
In order not to have to calculate the size in advance, the code has
been changed to write to an auto-resizing instance of `mm_mem_io_c`.
This is another fix for the video-related part of #1938.
The number of parameter set arrays is not the sum of the number of
VPS, SPS, PPS and SEI NALUs, but the number of different types. For
example, if there's one VPS, one SPS, two PPS and no SEI NALUs, the
number of parameter set arrays must be three and not four.
Fixes the video-related part of #1938.
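The corrected count is simply the number of NALU types that actually occur (an illustrative Python sketch of the rule, not the actual code):

```python
def parameter_set_array_count(nalus_by_type):
    """Count the parameter set arrays to be written: one per NALU type
    (VPS/SPS/PPS/SEI) that has at least one NALU."""
    return sum(1 for nalus in nalus_by_type.values() if nalus)
```

For the example from the text (one VPS, one SPS, two PPS, no SEI NALUs) this yields three arrays, not four.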
Internally the default duration given on the command line is stored as
the duration of a progressive frame. Additionally the framed
AVC/h.264 output module doesn't actually check whether or not the
current block contains a frame or a field. This combination leads to
the situation that specifying a default duration that signals
interlacing (e.g. 50i) results in an actual default duration of 40ms,
that of a progressive frame.
This change passes the information provided by the user about frame
vs. fields from the command line through to the output module so that
it can react accordingly.
Fixes #1916.
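The arithmetic behind the bug can be worked through like this (illustrative Python; the function names and the "spec" string format are invented for the example):

```python
def stored_duration_ms(spec):
    """The value is stored as the duration of one full (progressive)
    frame: "50i" means 50 fields/s, i.e. 25 frames/s, i.e. 40 ms."""
    rate, kind = float(spec[:-1]), spec[-1]
    frames_per_second = rate / 2 if kind == "i" else rate
    return 1000.0 / frames_per_second

def block_duration_ms(spec, block_holds_field):
    """The fix: with the frame-vs-field information passed through from
    the command line, the output module can halve the duration for a
    block containing a single field."""
    duration = stored_duration_ms(spec)
    return duration / 2 if block_holds_field else duration
```

Without the passed-through information the old code always applied the stored 40 ms, even to blocks holding single 20 ms fields.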
The VUI parameter copy method was simply missing the code for copying
the "bitstream_restriction_flag" and all of its dependent
fields (see ITU-T "H.265 12/2016" annex E.2.1).
Fixes #1924.
Before this commit mkvextract used to assume that the "Format" line
always contained all known fields. For files where this wasn't the
case the extracted text was empty, as mkvextract assumed that more
fields were present than there actually were.
This commit changes mkvextract to simply follow the given field order
from an existing "Format" line.
Fixes #1913.
Certain files seem to lack the "default display window" data (well,
the flag that signals its presence/absence). Therefore a
standards-compliant parser would try to read that flag, but the data
read would belong to another flag (the "vui_timing_info_present_flag").
ffmpeg has a heuristic in place for detecting such invalid default
display window parameters. This commit implements the same heuristic
in mkvmerge.
Fixes #1907.
In certain cases there are several timestamps from the source
container queued up for a given position of the NALU in the stream. In
such cases using the first available timestamp will result in
audio/video desync. Instead the last timestamp whose stream position
is smaller than or equal to the NALU's stream position should be used.
Fixes the HEVC equivalent of the problem with AVC described in #1908.
In certain cases there are several timestamps from the source
container queued up for a given position of the NALU in the stream. In
such cases using the first available timestamp will result in
audio/video desync. Instead the last timestamp whose stream position
is smaller than or equal to the NALU's stream position should be used.
Fixes #1908.
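The selection rule applies to both the AVC and HEVC cases above and can be sketched like this (illustrative Python, not the actual parser code):

```python
def timestamp_for(queued, nalu_position):
    """queued: (stream position, timestamp) pairs in stream order.
    Pick the LAST timestamp whose position is <= the NALU's position,
    not the first available one."""
    candidates = [ts for pos, ts in queued if pos <= nalu_position]
    return candidates[-1] if candidates else None

# Two container timestamps are queued for position 0; using the first
# (1000) would cause desync, the last (1040) is correct.
queue = [(0, 1000), (0, 1040), (512, 1080)]
```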
mkvmerge will now shift all timestamps in a file up so that no
timestamp read from the file is smaller than zero. Before this change
it was up to the output module to cope with timestamps < 0, which most
simply couldn't, and the result was audio/video desynchronization.
The return value of that function is an unsigned 64-bit
integer. However, Matroska files can have negative timestamps as the
relative timestamp fields in both the SimpleBlock and the BlockGroup
structures are signed. Combined with a low ClusterTimecode element
this can result in negative timestamps.
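Both points can be illustrated with a small sketch (Python for illustration; the real code operates on SimpleBlock/BlockGroup structures):

```python
def absolute_timestamps(cluster_timecode, relative_timestamps):
    """The relative timestamps are SIGNED, so the sum can be negative
    when the cluster timecode is low."""
    return [cluster_timecode + rel for rel in relative_timestamps]

def shift_non_negative(timestamps):
    """Shift all timestamps up so that none read from the file is
    smaller than zero."""
    low = min(timestamps)
    return [ts - low for ts in timestamps] if low < 0 else timestamps

# Cluster timecode 10 with signed relative timestamps down to -30:
raw = absolute_timestamps(10, [-30, -10, 0, 20])
```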
The handling for edit lists ('elst' atoms) and composition timestamp
offsets ('ctts' atoms) were not working well together causing offsets
to be applied to certain track types in certain situations. This led
to offsets between tracks, e.g. when no edit lists were in play but
CTTS atoms were used.
Fixes #1889.
My own algorithm was producing integer overflows when compiled with gcc
6.2.0 for Windows. This resulted in negative timestamps being used and
all timestamps being shifted up by the inverse of the lowest
timestamp. For example, if the lowest timestamp after that overflow was
-00:15:00, then the very first timestamp in the file (which is usually
00:00:00) was 00:15:00.
Boost's rational class has better overflow checks and reduces values
by their greatest common divisor. Therefore overflows are much less
likely to occur.
Fixes #1883.
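Python's `fractions.Fraction` plays the same role here as Boost's rational class does in the fix: values are automatically reduced by their greatest common divisor, so intermediate products stay small and exact (this is an analogy in Python, not the C++ code):

```python
from fractions import Fraction

def rescale(value, numerator, denominator):
    """Convert a value between time bases via an exact, gcd-reduced
    rational instead of raw 64-bit multiplication."""
    return int(value * Fraction(numerator, denominator))
```

For example, converting 48000 sample ticks to nanoseconds multiplies by 1 000 000 000 / 48000, which the rational reduces to 62500 / 3 before multiplying.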
Otherwise the parts that were skipped were not taken into account
leading to a too-high duration (and consequently to a too-low number of
bits per second in the "BPS" statistics tag).
Fixes #1885.
MP4 DASH files can contain more than one copy of the "moov"
atom. Parsing it multiple times would mean that tracks, chunk offset
tables, sample to chunk tables etc. would be filled multiple times as
well.
Commit 2b5e8c86a6 made this easier to
trigger, though the problem could have been hit with earlier code,
too. Before that commit header parsing stopped as soon as the first
"moov" and "mdat" atoms were found. Multiple "moov" atoms before the
first "mdat" atom would therefore have triggered the bug, but I'm not
aware of that having ever happened.
Fixes #1877.
The old code used a way of iterating over the table that would get hung
up on duplicate entries. Additionally it relied on the table being
sorted in ascending order in the source file.
The result in such a case was that only key frames up to and including
the first duplicate frame index in the key frame index table were marked
as key frames in the output file, even though more key frame index
entries might exist.
The new method simply iterates over the whole table once, from start to
finish, and looks the referenced frames up properly.
This is the last part of the fix for #1867.
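The single-pass approach can be sketched like this (illustrative Python; the table representation is simplified):

```python
def mark_key_frames(frame_count, key_frame_indices):
    """Walk the key frame index table once, from start to finish, and
    mark each referenced frame. Duplicate or unsorted entries are
    harmless: they just set the same flag again."""
    is_key = [False] * frame_count
    for idx in key_frame_indices:
        if 0 <= idx < frame_count:   # look the referenced frame up safely
            is_key[idx] = True
    return is_key
```

Unlike the old iteration, a duplicate entry (0 twice below) does not stop processing, so the later entries 4 and 2 are still honored.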
The chunk table can be filled via two different kinds of atoms:
• The "stco"/"co64" atoms in the normal "moov" atoms and
• the "trun" atoms in the "moof" atoms used for DASH.
The latter already know how often each chunk applies: exactly
once. Therefore their "size" member is already set.
It's different for the chunks read from "stco"/"co64", though. For them
the number of times they apply (their "size" member) is derived from
the chunk map table. However, the chunk map table only knows a start
index into the chunk table but not an end index. Therefore the last
chunk map entry is applied from its start index until the end of the
chunk table.
This overwrites the "size" member that's already set for chunks read
from "trun" atoms, though. The result is that data is read from the
wrong portion of the file.
Part of the fix for #1867.
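The fix amounts to not overwriting "size" values that are already set (an illustrative Python sketch; chunks are represented as dicts here, not the actual C++ structures):

```python
def apply_chunk_map(chunks, chunk_map):
    """chunk_map: (first chunk index, count) pairs; the last entry
    extends to the end of the chunk table."""
    for i, (first, count) in enumerate(chunk_map):
        end = chunk_map[i + 1][0] if i + 1 < len(chunk_map) else len(chunks)
        for chunk in chunks[first:end]:
            if chunk["size"] is None:   # chunks from "trun" keep their size
                chunk["size"] = count
```

The chunk in the middle below came from a "trun" atom and keeps its size of 1; only the "stco"/"co64" chunks receive the derived value.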
Edit lists don't have to be different, but they can be. Therefore do what
ffmpeg does: parse all of them and only use the last one parsed.
Part of the fix for #1867.
The old code was trying to save time by only scanning until the first
"mdat" (the encoded data for all the frames) and first "moov" (track
headers etc.) atoms were found.
If a "moof" (for segmented headers in MP4 DASH) atom was found during
that scan, it switched to scanning all top level atoms in order to find
all other "moof" atoms.
However, there are DASH files where the first "moof" atom is located
behind the first "mdat" and "moov" atoms. Reading such a file mkvmerge
aborted its top-level atom scan before encountering the "moof" atom and
therefore never switched to DASH mode. The result was that only a small
portion of the file was read: as much as was given via the tables in the
"moov" atom.
The new code does away with that shortcut and always scans all top level
atoms. It will therefore switch to DASH mode even if the first "moof"
atom is located behind the first "mdat" and "moov" atoms.
Part of the fix for #1867.
Some source files only provide one timestamp every n AC-3 frames. In
such situations the next provided timestamp might arrive before all of
the data for the previous AC-3 frame has been flushed (due to the AC-3
parser buffering data in order to determine whether or not a dependent
frame is following). The result is a single gap of one frame after frame
number n - 1.
Fixes #1864.
The default palette used will not look good in most of the cases, but
it's hard to guess a palette and pretty much impossible without actually
decoding a lot of the packets.
Implements #1854.
The process can take a lot of time, therefore only do it if there's a
reasonable chance that files will have to be cleaned up — which is after
a version change.
Another piece of the fix for #1860.
With a large number of files cleaning the cache can take quite some
time. During that time file identification won't work as it tries to
acquire a lock that's already held by the cleanup process.
With this change the cleanup process will release the lock after having
processed each file allowing the identification process to obtain the
lock temporarily.
Fixes #1860.
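The per-file locking change can be sketched like this (illustrative Python with a plain `threading.Lock`; the real code uses an inter-process lock):

```python
import threading

def clean_cache(files, lock, clean_one):
    """Release the lock after each file so that a concurrently running
    identification process can obtain it in between."""
    for f in files:
        with lock:          # held for one file at a time only
            clean_one(f)
```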
Things that are checked include:
• Can the mkvmerge executable be found?
• Can the mkvmerge executable be executed?
• Is mkvmerge's version the same as the GUI's?
• Only on Windows: Does the 'magic.mgc' file exist?
All of these are causes of problems that have been reported by users
multiple times over the years.
Both codes come from the range "qaa–qtz", which is "reserved for local
use". Adding all of them would bloat the list of available languages
considerably, but adding just two is quite OK. These two are often
used in France.
See #1848.
This prevents the error message "not enough space on disk" being shown
twice.
Whenever a write fails, an exception is thrown, and an appropriate
error message is shown. This is the first time.
Next mkvmerge goes into its cleanup routine. There it closes the output
file. The output file uses a write buffer implementation. Before closing
the file any outstanding buffered content is written to the disk. The
disk is still full, though, triggering another exception, and another
error message is output.
The workaround is to discard any buffered content still remaining at
cleanup time. This is safe as the output file is closed manually in
normal program flow. Therefore no buffered content is lost in a
successful run.
Fixes #1850.
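The two code paths can be sketched with a minimal buffered writer (illustrative Python; the class and method names are invented for the example):

```python
class BufferedWriter:
    def __init__(self, write_fn):
        self.write_fn, self.buffer = write_fn, b""

    def write(self, data):
        self.buffer += data

    def close(self):
        # Normal program flow: flush the remaining buffer, then close.
        self.write_fn(self.buffer)
        self.buffer = b""

    def discard_and_close(self):
        # Error cleanup: drop the buffer so no second write (and second
        # "disk full" error message) is attempted.
        self.buffer = b""
```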
There are MPEG TS files where subtitle packets are multiplexed with
audio and video packets properly, meaning that packets that are supposed
to be shown/played together are also stored next to each other. However,
the timestamps in the PES streams have huge differences. For example,
the first timestamps for audio and video packets are around 00:11:08.418
whereas the timestamps for corresponding subtitle packets start at
09:19:25.912.
This workaround attempts to detect such situations. In that case
mkvmerge will discard subtitle timestamps and use the most recent audio
or video timestamp instead.
Implements #1841.
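The heuristic can be sketched like this (illustrative Python; the 300-second threshold is an assumption for the sketch, not mkvmerge's actual value):

```python
def subtitle_timestamp(pes_timestamp, last_av_timestamp, threshold=300.0):
    """Timestamps in seconds. If the subtitle's PES timestamp is wildly
    off from the most recent audio/video timestamp, trust the
    multiplexing order and substitute that timestamp instead."""
    if abs(pes_timestamp - last_av_timestamp) > threshold:
        return last_av_timestamp
    return pes_timestamp
```

With the example from the text (subtitles at 09:19:25.912 ≈ 33565.9 s next to audio/video around 00:11:08.418 ≈ 668.4 s) the huge difference triggers the substitution.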
Previously the defaults were applied before the result was stored in
the cache. The problem with that is that changing the defaults in the
preferences did not affect cached results. Adding a file a second time
was using cache data which had the old defaults applied.
Now the defaults are applied after the result has been stored in the
cache. Upon retrieval from the cache the current defaults are applied,
too.
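The reordering can be sketched like this (illustrative Python, not the GUI's actual cache code; all names are made up):

```python
cache = {}

def identify(file_name, scan, apply_defaults):
    """Cache the raw scan result and apply the CURRENT defaults on every
    retrieval, so later preference changes affect cached entries too."""
    if file_name not in cache:
        cache[file_name] = scan(file_name)   # defaults are NOT baked in
    return apply_defaults(cache[file_name])
```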
In this case the track contains MP3 data. However, the ESDS's object
type ID field is set to 0x6b in the headers indicating MP2. Additionally
the track's fields for channels & sampling frequency are set to 0.
Fixes #1844.