Compare commits

...

56 Commits

Author SHA1 Message Date
retouching
09eda16882
fix(dl): delete old file after repackage (#114)
* fix(dl): delete old file after repackage

* fix(dl): using original_path instead of self.path in repackage method
2024-06-03 16:57:26 +01:00
rlaphoenix
a95d32de9e chore: Add config to gitignore 2024-05-17 02:29:46 +01:00
rlaphoenix
221cd145c4 refactor(dl): Make Widevine CDM config optional
With this change you no longer have to define/configure a CDM to load. This is something that isn't necessary for a lot of services.

Note: it's also now less hand-holdy about correct config formatting/values. E.g., if you define a cdm by profile for a service slightly incorrectly, say with a typo in the service or profile name, it will no longer warn you.
2024-05-17 01:52:45 +01:00
rlaphoenix
0310646cb2 fix(Subtitle): Skip merging segmented WebVTT if only 1 segment 2024-05-17 01:42:44 +01:00
rlaphoenix
3426fc145f fix(HLS): Decrypt AES-encrypted segments separately
We cannot merge all the encrypted AES-128-CBC (ClearKey) segments and then decrypt them in one go, because each segment is individually padded to a 16-byte boundary in CBC mode.

Since the padding is PKCS#7 style, the merged file has a 15-in-16 chance of failing the boundary check. And in the 1-in-16 case that it does pass, it still will not decrypt properly, as each segment's padding is treated as actual data rather than padding.
2024-05-17 01:15:37 +01:00
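The failure mode is easy to demonstrate without a crypto library: after CBC decryption, each segment ends in its own PKCS#7 padding, so unpadding a merged stream strips only the final segment's padding and leaves every earlier segment's pad bytes embedded as data. A minimal sketch (`unpad_pkcs7` is an illustrative helper, not devine code):

```python
def unpad_pkcs7(data: bytes, block_size: int = 16) -> bytes:
    """Strip PKCS#7 padding: the last byte gives the pad length,
    and every pad byte must repeat that value."""
    if not data or len(data) % block_size:
        raise ValueError("data must be a non-empty multiple of the block size")
    pad_len = data[-1]
    if not 1 <= pad_len <= block_size or data[-pad_len:] != bytes([pad_len]) * pad_len:
        raise ValueError("invalid PKCS#7 padding")
    return data[:-pad_len]

# two decrypted segments, each carrying its own padding
seg1 = b"A" * 12 + b"\x04" * 4  # 12 data bytes + 4 pad bytes
seg2 = b"B" * 8 + b"\x08" * 8   # 8 data bytes + 8 pad bytes

# unpadding per segment recovers the real payload...
assert unpad_pkcs7(seg1) + unpad_pkcs7(seg2) == b"A" * 12 + b"B" * 8

# ...but unpadding the merged stream keeps seg1's pad bytes as "data"
assert unpad_pkcs7(seg1 + seg2) == b"A" * 12 + b"\x04" * 4 + b"B" * 8
```

In the real HLS case the merged ciphertext also decrypts incorrectly across segment boundaries, since each segment is encrypted with its own IV.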
rlaphoenix
e57d755837 fix(clearkey): Do not pad data before decryption
This is seemingly unnecessary and simply incorrect at least for two sources (VGTV, and TRUTV).

Without this change it is not possible to correctly merge all segments without at least some problem in the resulting file.
2024-05-17 01:00:11 +01:00
rlaphoenix
03f3fec5cc refactor(dl): Only log errors/warnings from mkvmerge, list after message 2024-05-16 18:12:57 +01:00
rlaphoenix
2acee30e54 fix(utilities): Prevent finding the same box index over and over
Since it removed the data before the found box's index (-4), each loop iteration would find the same box again, but now at index 4, since all prior data had been removed in the previous iteration. Because the index -= 4 adjustment only runs when the index > 4, it never ran on the second iteration, and since the remaining data no longer included the box length, Box.parse failed with an IOError.

The loop now advances through the boxes correctly, obtaining and parsing each one.
2024-05-15 17:54:21 +01:00
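The corrected traversal can be sketched as a moving search index over an immutable buffer: each match backs up 4 bytes to the box's size field instead of slicing consumed data away. An illustrative sketch, not the actual devine utility:

```python
def find_boxes(data: bytes, name: bytes):
    """Yield (offset, size) of every box whose 4-byte type matches `name`.

    An ISO-BMFF box starts with a 4-byte big-endian size followed by the
    4-byte type, so the box begins 4 bytes before the type we searched for.
    """
    index = 0
    while True:
        index = data.find(name, index)
        if index < 4:  # not found (-1), or no room for a size field
            return
        start = index - 4
        size = int.from_bytes(data[start:index], "big")
        yield start, size
        index += len(name)  # keep searching *past* this match

def box(name: bytes, payload: bytes) -> bytes:
    # minimal fake box: size | type | payload
    return (8 + len(payload)).to_bytes(4, "big") + name + payload

data = box(b"pssh", b"\x01\x02") + box(b"moov", b"") + box(b"pssh", b"\x03")
assert list(find_boxes(data, b"pssh")) == [(0, 10), (18, 9)]
assert list(find_boxes(data, b"moov")) == [(10, 8)]
```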
rlaphoenix
2e697d93fc fix(dl): Log output from mkvmerge on failure 2024-05-15 14:00:38 +01:00
rlaphoenix
f08402d795 refactor: Warn falling back to requests as aria2c doesn't support Range 2024-05-11 22:59:31 +01:00
rlaphoenix
5ef95e942a fix(DASH): Use SegmentTemplate endNumber if available 2024-05-11 22:15:05 +01:00
rlaphoenix
dde55fd708 fix(DASH): Correct SegmentTemplate range stop value
Since range(start, stop) is start-inclusive but stop-exclusive, and a SegmentTemplate's startNumber will typically be 1 or unspecified (defaulting to 1), it effectively worked by coincidence.

However, if startNumber was anything other than 1, there would be a problem.
2024-05-11 22:13:28 +01:00
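The coincidence is plain from range() semantics. A hedged illustration (the exact original expression isn't reproduced here):

```python
start_number, segment_count = 1, 4

# range() is stop-exclusive, so the correct stop is start + count
correct = list(range(start_number, start_number + segment_count))
assert correct == [1, 2, 3, 4]

# a stop derived from the count alone happens to agree when
# start_number == 1...
assert list(range(1, segment_count + 1)) == correct

# ...but collapses for any other start
assert list(range(5, segment_count + 1)) == []  # all segments silently lost
```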
rlaphoenix
345cc5aba6
Merge pull request #110 from adbbbb/master
Adding Arm64 OSX Shaka support
2024-05-11 20:13:30 +01:00
rlaphoenix
145e7a6c17 docs(contributors): Add adbbbb to Contributor list 2024-05-11 20:13:01 +01:00
Adam
5706bb1417 fix(binaries): Search for Arm64 builds of Shaka-Packager 2024-05-11 20:11:29 +01:00
rlaphoenix
85246ab419
Merge pull request #109 from pandamoon21/master
Fix uppercase letters in the fonts extension - Font attachment
2024-05-11 17:46:04 +01:00
rlaphoenix
71a3a4e2c4 docs(contributors): Add pandamoon21 to Contributor list 2024-05-11 17:45:10 +01:00
pandamoon21
06d414975c fix(Attachment): Check mime-type case-insensitively 2024-05-11 17:43:32 +01:00
rlaphoenix
f419e04fad refactor(Track): Ensure data property is a defaultdict with dict factory
This is so both internal code and service code can save data to sub-keys without the parent keys needing to exist.

A doc-string is now set to the data property denoting some keys as reserved as well as their typing and meaning.

This also fixes a bug introduced in v3.3.3 where it will fail to download tracks without the "hls" key in the data property. This can happen when manually making Audio tracks using the HLS descriptor, and not putting any of the hls data the HLS class sets in to_tracks().
2024-05-09 15:15:22 +01:00
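The difference the defaultdict factory makes for service code can be shown in a few lines ("playlist" is just an illustrative sub-key):

```python
from collections import defaultdict

# with a plain dict, writing to a missing parent key raises
plain: dict = {}
try:
    plain["hls"]["playlist"] = "..."
except KeyError:
    pass  # the parent "hls" key never existed

# with defaultdict(dict), parent keys spring into existence on access,
# so nested values can be written without any setup code
data = defaultdict(dict)
data["hls"]["playlist"] = "..."
assert data["hls"] == {"playlist": "..."}
```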
rlaphoenix
50d6f3a64d docs(changelog): Add v3.3.3 Changes 2024-05-07 07:10:20 +01:00
rlaphoenix
259434b59d docs(version): Bump to v3.3.3 2024-05-07 07:10:02 +01:00
rlaphoenix
7df8be46da build(poetry): Update dependencies
We can remove explicit dependency on language-data and marisa-trie because langcodes v3.3.0 now depends on language-data 1.2.0 and language-data 1.2.0 now depends on marisa-trie 1.1.0.
2024-05-07 07:06:22 +01:00
rlaphoenix
7aa797a4cc
Merge pull request #67 from Shivelight/feature/fix-webvtt-timestamp
Correct timestamps when merging fragmented WebVTT
2024-05-07 06:54:42 +01:00
Shivelight
0ba45decc6 fix(Subtitle): Correct timestamps when merging fragmented WebVTT
This applies the X-TIMESTAMP-MAP data to timestamps as it reads through a concatenated (merged) WebVTT file to correct timestamps on segmented WebVTT streams. It then removes the X-TIMESTAMP-MAP header.

The timescale and segment duration information is saved in the Subtitle's data dictionary under the hls/dash key: timescale (dash-only) and segment_durations. Note that this information will only be available post-download.

This is done regardless of whether you are converting to another subtitle format, since the downloader automatically and forcefully concatenates the segmented subtitle data. We do not support the use of segmented Subtitles for downloading or otherwise, nor do we plan to.
2024-05-06 18:18:23 +01:00
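A hedged sketch of the correction being applied: `X-TIMESTAMP-MAP` maps a cue-local time to an MPEG-TS presentation time on a 90 kHz clock, and the resulting offset is added to every cue. The helper below is illustrative only and assumes `LOCAL:00:00:00.000`, the common case:

```python
import re

def timestamp_offset(header: str) -> float:
    """Seconds to add to each cue, from an X-TIMESTAMP-MAP header.
    Assumes LOCAL is zero and the standard 90 kHz MPEG-TS clock."""
    match = re.search(r"MPEGTS:(\d+)", header)
    return int(match.group(1)) / 90_000 if match else 0.0

header = "X-TIMESTAMP-MAP=MPEGTS:900000,LOCAL:00:00:00.000"
assert timestamp_offset(header) == 10.0  # 900000 ticks / 90000 Hz
```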
rlaphoenix
af95ba062a refactor(env): Shorten paths on Windows with env vars 2024-04-24 05:56:05 +01:00
rlaphoenix
3bfd96d53c fix(dl): Automatically convert TTML Subs to WebVTT for MKV support 2024-04-24 05:35:24 +01:00
rlaphoenix
f23100077e refactor(dl): Improve readability of download worker errors
Now it will no longer print the full traceback for errors caused by a missing binary file. Other errors still include the traceback and are now explicitly labeled as unexpected. CalledProcessError handling is now merged with all non-environment-related errors and explicitly mentions that a binary call failed.
2024-04-24 05:28:10 +01:00
rlaphoenix
fd64e6acf4 refactor(utilities): Remove get_binary_path, use binaries.find instead
The function now located at core/binaries should only be used by services to find a specific binary not listed in there already, or if the name of the binary needed by your service differs.
2024-04-24 05:10:34 +01:00
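A minimal sketch of what such a find helper could look like, built on shutil.which. The real core/binaries module isn't reproduced here, and the binary names in the usage comment are hypothetical:

```python
import shutil
from pathlib import Path
from typing import Optional

def find(*names: str) -> Optional[Path]:
    """Return the full path of the first of `names` found on PATH, if any."""
    for name in names:
        path = shutil.which(name)
        if path:
            return Path(path)
    return None

# a service needing an unlisted or differently-named binary could do:
# packager = find("packager", "shaka-packager", "packager-linux-arm64")
assert find("no-such-binary-abc123") is None
```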
rlaphoenix
677fd9c56a feat(binaries): Move all binary definitions to core/binaries file
This simplifies and centralizes all definitions on where these binaries can be found to a singular reference, making it easier to modify, edit, and improve.
2024-04-24 05:07:25 +01:00
rlaphoenix
9768de8bf2 feat(env): List possible config path locations when not found 2024-04-19 19:28:15 +01:00
rlaphoenix
959b62222e fix(env): List all directories as table in info 2024-04-19 19:27:33 +01:00
rlaphoenix
c101136d55 refactor(Config): Move possible config paths out of func to constant 2024-04-19 19:23:56 +01:00
rlaphoenix
4f1dfd7dd1 refactor(curl-impersonate): Update the default browser to chrome124 2024-04-18 09:50:17 +01:00
rlaphoenix
c859465af2 refactor(curl-impersonate): Remove manual fix for curl proxy SSL
The new version of curl-cffi includes the proper fix for applying ca-bundles to proxy connections making this manual fix no longer required.
2024-04-18 09:49:35 +01:00
rlaphoenix
d1ae361afc docs(changelog): Add v3.3.2 Changes 2024-04-16 06:07:00 +01:00
rlaphoenix
a62dcff9ad docs(version): Bump to v3.3.2 2024-04-16 06:06:44 +01:00
rlaphoenix
920ce8375b build(poetry): Update dependencies 2024-04-16 06:06:05 +01:00
rlaphoenix
3abb869d80
Merge pull request #100 from retouching/patch-1
Check if width and height is digit if it's an str
2024-04-16 05:36:59 +01:00
rlaphoenix
cbcb7e31b0 docs(contributors): Add retouching to Contributor list 2024-04-16 05:35:57 +01:00
retouching
4335806ca2 fix(Video): Allow specifying width/height as str, cast to int
We simply check the type near the top of the constructor; later code already casts to int and handles failures there (e.g., a str that is not a number).
2024-04-16 05:33:37 +01:00
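A hedged sketch of that lenient check (names are illustrative, not the actual Video constructor):

```python
def coerce_dimension(value) -> int:
    """Accept an int, or a str of digits, for a width/height value."""
    if isinstance(value, str):
        if not value.isdigit():
            raise ValueError(f"expected a numeric string, got {value!r}")
        return int(value)
    if not isinstance(value, int):
        raise TypeError(f"expected int or numeric str, not {type(value).__name__}")
    return value

assert coerce_dimension("1920") == 1920
assert coerce_dimension(1080) == 1080
```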
rlaphoenix
a850a35f3e fix(Basic): Return None not Exception if no proxy configured 2024-04-16 05:27:17 +01:00
rlaphoenix
09e80feee5 fix(cfg): Use loaded config path instead of hardcoded default 2024-04-14 03:44:30 +01:00
rlaphoenix
f521ced3fe refactor(env): Use -- to indicate no config found/loaded 2024-04-14 03:42:41 +01:00
rlaphoenix
b4e28050ab fix(env): List used config path, otherwise the default path 2024-04-14 03:35:17 +01:00
rlaphoenix
646c35fc1b fix(Subtitle): Optionalise constructor args, add doc-string & checks
Some HLS playlists can have extremely limited information, so to accommodate this we need to make the Subtitle track support having almost no information. This isn't ideal, but it's really the only solution.
2024-04-14 03:26:35 +01:00
rlaphoenix
7fa0ff1fc0 refactor(Subtitle): Do not print "?"/"Unknown" values in str() 2024-04-14 03:25:22 +01:00
rlaphoenix
5c7c080a34 fix(HLS): Ensure playlist.stream_info.resolution exists before use 2024-04-14 03:15:11 +01:00
rlaphoenix
1db8944b09 fix(HLS): Ensure playlist.stream_info.codecs exists before use 2024-04-14 03:14:45 +01:00
rlaphoenix
43585a76cb fix(Audio): Optionalise constructor args, add doc-string & checks
Some HLS playlists can have extremely limited information, so to accommodate this we need to make the Audio track support having almost no information. This isn't ideal, but it's really the only solution.
2024-04-14 03:13:46 +01:00
rlaphoenix
8ca91efbc5 refactor(Audio): Do not print "?"/"Unknown" values in str() 2024-04-14 03:12:50 +01:00
rlaphoenix
57b042fa4b refactor(Audio): List lang after codec for consistency with other Tracks 2024-04-14 03:08:40 +01:00
rlaphoenix
642ad393b6 style: Move __...__ methods after constructors 2024-04-14 03:07:23 +01:00
rlaphoenix
23485bc820 refactor(Video): Return None if no m3u RANGE, not SDR 2024-04-14 02:40:16 +01:00
rlaphoenix
15d73be532 fix(Video): Optionalise constructor args, add doc-string & checks
Some HLS playlists can have extremely limited information, so to accommodate this we need to make the Video track support having almost no information. This isn't ideal, but it's really the only solution.
2024-04-14 02:36:55 +01:00
rlaphoenix
9ddd9ad474 refactor(Video): Do not print "?"/"Unknown" values in str() 2024-04-14 02:32:34 +01:00
rlaphoenix
dae83b0bd5 fix(Video): Ensure track is supported in change_color_range() 2024-04-14 02:31:31 +01:00
31 changed files with 989 additions and 393 deletions

.gitignore (vendored): +2 lines

@@ -1,4 +1,6 @@
 # devine
+devine.yaml
+devine.yml
 *.mkv
 *.mp4
 *.exe

CHANGELOG.md

@@ -7,6 +7,52 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 Versions [3.0.0] and older use a format based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 but versions thereafter use a custom changelog format using [git-cliff](https://git-cliff.org).
+## [3.3.3] - 2024-05-07
+### Bug Fixes
+- *dl*: Automatically convert TTML Subs to WebVTT for MKV support
+- *Subtitle*: Correct timestamps when merging fragmented WebVTT
+### Changes
+- *env*: List all directories as table in info
+- *env*: List possible config path locations when not found
+- *binaries*: Move all binary definitions to core/binaries file
+- *curl-impersonate*: Remove manual fix for curl proxy SSL
+- *curl-impersonate*: Update the default browser to chrome124
+- *Config*: Move possible config paths out of func to constant
+- *utilities*: Remove get_binary_path, use binaries.find instead
+- *dl*: Improve readability of download worker errors
+- *env*: Shorten paths on Windows with env vars
+## [3.3.2] - 2024-04-16
+### Bug Fixes
+- *Video*: Ensure track is supported in change_color_range()
+- *Video*: Optionalise constructor args, add doc-string & checks
+- *Audio*: Optionalise constructor args, add doc-string & checks
+- *Subtitle*: Optionalise constructor args, add doc-string & checks
+- *HLS*: Ensure playlist.stream_info.codecs exists before use
+- *HLS*: Ensure playlist.stream_info.resolution exists before use
+- *env*: List used config path, otherwise the default path
+- *cfg*: Use loaded config path instead of hardcoded default
+- *Basic*: Return None not Exception if no proxy configured
+### Changes
+- *Video*: Do not print "?"/"Unknown" values in str()
+- *Audio*: Do not print "?"/"Unknown" values in str()
+- *Subtitle*: Do not print "?"/"Unknown" values in str()
+- *Audio*: List lang after codec for consistency with other Tracks
+- *Video*: Return None if no m3u RANGE, not SDR
+- *env*: Use -- to indicate no config found/loaded
+### New Contributors
+- [retouching](https://github.com/retouching)
 ## [3.3.1] - 2024-04-05
 ### Features
@@ -768,6 +814,8 @@ This release brings a huge change to the fundamentals of Devine's logging, UI, a
 Initial public release under the name Devine.
+[3.3.3]: https://github.com/devine-dl/devine/releases/tag/v3.3.3
+[3.3.2]: https://github.com/devine-dl/devine/releases/tag/v3.3.2
 [3.3.1]: https://github.com/devine-dl/devine/releases/tag/v3.3.1
 [3.3.0]: https://github.com/devine-dl/devine/releases/tag/v3.3.0
 [3.2.0]: https://github.com/devine-dl/devine/releases/tag/v3.2.0

README.md

@@ -342,6 +342,9 @@ Please refrain from spam or asking for questions that infringe upon a Service's
 <a href="https://github.com/Hollander-1908"><img src="https://images.weserv.nl/?url=avatars.githubusercontent.com/u/93162595?v=4&h=25&w=25&fit=cover&mask=circle&maxage=7d" alt="Hollander-1908"/></a>
 <a href="https://github.com/Shivelight"><img src="https://images.weserv.nl/?url=avatars.githubusercontent.com/u/20620780?v=4&h=25&w=25&fit=cover&mask=circle&maxage=7d" alt="Shivelight"/></a>
 <a href="https://github.com/knowhere01"><img src="https://images.weserv.nl/?url=avatars.githubusercontent.com/u/113712042?v=4&h=25&w=25&fit=cover&mask=circle&maxage=7d" alt="knowhere01"/></a>
+<a href="https://github.com/retouching"><img src="https://images.weserv.nl/?url=avatars.githubusercontent.com/u/33735357?v=4&h=25&w=25&fit=cover&mask=circle&maxage=7d" alt="retouching"/></a>
+<a href="https://github.com/pandamoon21"><img src="https://images.weserv.nl/?url=avatars.githubusercontent.com/u/33972938?v=4&h=25&w=25&fit=cover&mask=circle&maxage=7d" alt="pandamoon21"/></a>
+<a href="https://github.com/adbbbb"><img src="https://images.weserv.nl/?url=avatars.githubusercontent.com/u/56319336?v=4&h=25&w=25&fit=cover&mask=circle&maxage=7d" alt="adbbbb"/></a>
 ## Licensing

devine/commands/cfg.py

@@ -5,7 +5,7 @@ import sys
 import click
 from ruamel.yaml import YAML
-from devine.core.config import config
+from devine.core.config import config, get_config_path
 from devine.core.constants import context_settings
@@ -36,15 +36,15 @@ def cfg(ctx: click.Context, key: str, value: str, unset: bool, list_: bool) -> None:
     log = logging.getLogger("cfg")
-    config_path = config.directories.user_configs / config.filenames.root_config
     yaml, data = YAML(), None
     yaml.default_flow_style = False
-    if config_path.is_file():
+    config_path = get_config_path() or config.directories.user_configs / config.filenames.root_config
+    if config_path.exists():
         data = yaml.load(config_path)
     if not data:
-        log.warning(f"{config_path} has no configuration data, yet")
+        log.warning("No config file was found or it has no data, yet")
         # yaml.load() returns `None` if the input data is blank instead of a usable object
         # force a usable object by making one and removing the only item within it
         data = yaml.load("""__TEMP__: null""")

devine/commands/dl.py

@@ -38,6 +38,7 @@ from rich.table import Table
 from rich.text import Text
 from rich.tree import Tree
+from devine.core import binaries
 from devine.core.config import config
 from devine.core.console import console
 from devine.core.constants import DOWNLOAD_LICENCE_ONLY, AnyTrack, context_settings
@@ -51,7 +52,7 @@ from devine.core.titles import Movie, Song, Title_T
 from devine.core.titles.episode import Episode
 from devine.core.tracks import Audio, Subtitle, Tracks, Video
 from devine.core.tracks.attachment import Attachment
-from devine.core.utilities import get_binary_path, get_system_fonts, is_close_match, time_elapsed_since
+from devine.core.utilities import get_system_fonts, is_close_match, time_elapsed_since
 from devine.core.utils.click_types import LANGUAGE_RANGE, QUALITY_LIST, SEASON_RANGE, ContextData, MultipleChoice
 from devine.core.utils.collections import merge_dict
 from devine.core.utils.subprocess import ffprobe
@@ -177,9 +178,10 @@ class dl:
         except ValueError as e:
             self.log.error(f"Failed to load Widevine CDM, {e}")
             sys.exit(1)
-        self.log.info(
-            f"Loaded {self.cdm.__class__.__name__} Widevine CDM: {self.cdm.system_id} (L{self.cdm.security_level})"
-        )
+        if self.cdm:
+            self.log.info(
+                f"Loaded {self.cdm.__class__.__name__} Widevine CDM: {self.cdm.system_id} (L{self.cdm.security_level})"
+            )
         with console.status("Loading Key Vaults...", spinner="dots"):
             self.vaults = Vaults(self.service)
@@ -198,7 +200,7 @@ class dl:
             self.proxy_providers.append(Basic(**config.proxy_providers["basic"]))
         if config.proxy_providers.get("nordvpn"):
             self.proxy_providers.append(NordVPN(**config.proxy_providers["nordvpn"]))
-        if get_binary_path("hola-proxy"):
+        if binaries.HolaProxy:
             self.proxy_providers.append(Hola())
         for proxy_provider in self.proxy_providers:
             self.log.info(f"Loaded {proxy_provider.__class__.__name__}: {proxy_provider}")
@@ -546,14 +548,17 @@ class dl:
         except Exception as e:  # noqa
             error_messages = [
                 ":x: Download Failed...",
-                "   One of the track downloads had an error!",
-                "   See the error trace above for more information."
             ]
-            if isinstance(e, subprocess.CalledProcessError):
-                # ignore process exceptions as proper error logs are already shown
-                error_messages.append(f"   Process exit code: {e.returncode}")
-            else:
-                console.print_exception()
+            if isinstance(e, EnvironmentError):
+                error_messages.append(f"   {e}")
+            else:
+                error_messages.append("   An unexpected error occurred in one of the download workers.",)
+                if hasattr(e, "returncode"):
+                    error_messages.append(f"   Binary call failed, Process exit code: {e.returncode}")
+                error_messages.append("   See the error trace above for more information.")
+            if isinstance(e, subprocess.CalledProcessError):
+                # CalledProcessError already lists the exception trace
+                console.print_exception()
             console.print(Padding(
                 Group(*error_messages),
                 (1, 5)
@@ -610,11 +615,14 @@ class dl:
                 break
             video_track_n += 1
-        if sub_format:
-            with console.status(f"Converting Subtitles to {sub_format.name}..."):
-                for subtitle in title.tracks.subtitles:
-                    if subtitle.codec != sub_format:
-                        subtitle.convert(sub_format)
+        with console.status("Converting Subtitles..."):
+            for subtitle in title.tracks.subtitles:
+                if sub_format:
+                    if subtitle.codec != sub_format:
+                        subtitle.convert(sub_format)
+                elif subtitle.codec == Subtitle.Codec.TimedTextMarkupLang:
+                    # MKV does not support TTML, VTT is the next best option
+                    subtitle.convert(Subtitle.Codec.WebVTT)
         with console.status("Checking Subtitles for Fonts..."):
             font_names = []
@@ -694,16 +702,22 @@ class dl:
         ):
             for task_id, task_tracks in multiplex_tasks:
                 progress.start_task(task_id)  # TODO: Needed?
-                muxed_path, return_code = task_tracks.mux(
+                muxed_path, return_code, errors = task_tracks.mux(
                     str(title),
                     progress=partial(progress.update, task_id=task_id),
                     delete=False
                 )
                 muxed_paths.append(muxed_path)
-                if return_code == 1:
-                    self.log.warning("mkvmerge had at least one warning, will continue anyway...")
-                elif return_code >= 2:
-                    self.log.error(f"Failed to Mux video to Matroska file ({return_code})")
+                if return_code >= 2:
+                    self.log.error(f"Failed to Mux video to Matroska file ({return_code}):")
+                elif return_code == 1 or errors:
+                    self.log.warning("mkvmerge had at least one warning or error, continuing anyway...")
+                for line in errors:
+                    if line.startswith("#GUI#error"):
+                        self.log.error(line)
+                    else:
+                        self.log.warning(line)
+                if return_code >= 2:
                     sys.exit(1)
                 for video_track in task_tracks.videos:
                     video_track.delete()
@@ -923,21 +937,21 @@ class dl:
         return Credential.loads(credentials)  # type: ignore
     @staticmethod
-    def get_cdm(service: str, profile: Optional[str] = None) -> WidevineCdm:
+    def get_cdm(service: str, profile: Optional[str] = None) -> Optional[WidevineCdm]:
         """
         Get CDM for a specified service (either Local or Remote CDM).
         Raises a ValueError if there's a problem getting a CDM.
         """
         cdm_name = config.cdm.get(service) or config.cdm.get("default")
         if not cdm_name:
-            raise ValueError("A CDM to use wasn't listed in the config")
+            return None
         if isinstance(cdm_name, dict):
             if not profile:
-                raise ValueError("CDM config is mapped for profiles, but no profile was chosen")
+                return None
             cdm_name = cdm_name.get(profile) or config.cdm.get("default")
             if not cdm_name:
-                raise ValueError(f"A CDM to use was not mapped for the profile {profile}")
+                return None
         cdm_api = next(iter(x for x in config.remote_cdm if x["name"] == cdm_name), None)
         if cdm_api:

devine/commands/env.py

@@ -1,10 +1,17 @@
 import logging
+import os
 import shutil
+import sys
+from pathlib import Path
 from typing import Optional
 import click
+from rich.padding import Padding
+from rich.table import Table
+from rich.tree import Tree
-from devine.core.config import config
+from devine.core.config import POSSIBLE_CONFIG_PATHS, config, config_path
+from devine.core.console import console
 from devine.core.constants import context_settings
 from devine.core.services import Services
@@ -18,13 +25,42 @@ def env() -> None:
 def info() -> None:
     """Displays information about the current environment."""
     log = logging.getLogger("env")
-    log.info(f"[Root Config] : {config.directories.user_configs / config.filenames.root_config}")
-    log.info(f"[Cookies]     : {config.directories.cookies}")
-    log.info(f"[WVDs]        : {config.directories.wvds}")
-    log.info(f"[Cache]       : {config.directories.cache}")
-    log.info(f"[Logs]        : {config.directories.logs}")
-    log.info(f"[Temp Files]  : {config.directories.temp}")
-    log.info(f"[Downloads]   : {config.directories.downloads}")
+    if config_path:
+        log.info(f"Config loaded from {config_path}")
+    else:
+        tree = Tree("No config file found, you can use any of the following locations:")
+        for i, path in enumerate(POSSIBLE_CONFIG_PATHS, start=1):
+            tree.add(f"[repr.number]{i}.[/] [text2]{path.resolve()}[/]")
+        console.print(Padding(
+            tree,
+            (0, 5)
+        ))
+    table = Table(title="Directories", expand=True)
+    table.add_column("Name", no_wrap=True)
+    table.add_column("Path")
+    path_vars = {
+        x: Path(os.getenv(x))
+        for x in ("TEMP", "APPDATA", "LOCALAPPDATA", "USERPROFILE")
+        if sys.platform == "win32" and os.getenv(x)
+    }
+    for name in sorted(dir(config.directories)):
+        if name.startswith("__") or name == "app_dirs":
+            continue
+        path = getattr(config.directories, name).resolve()
+        for var, var_path in path_vars.items():
+            if path.is_relative_to(var_path):
+                path = rf"%{var}%\{path.relative_to(var_path)}"
+                break
+        table.add_row(name.title(), str(path))
+    console.print(Padding(
+        table,
+        (1, 5)
+    ))
 @env.group(name="clear", short_help="Clear an environment directory.", context_settings=context_settings)

devine/commands/search.py

@@ -12,13 +12,13 @@ from rich.rule import Rule
 from rich.tree import Tree
 from devine.commands.dl import dl
+from devine.core import binaries
 from devine.core.config import config
 from devine.core.console import console
 from devine.core.constants import context_settings
 from devine.core.proxies import Basic, Hola, NordVPN
 from devine.core.service import Service
 from devine.core.services import Services
-from devine.core.utilities import get_binary_path
 from devine.core.utils.click_types import ContextData
 from devine.core.utils.collections import merge_dict
@@ -72,7 +72,7 @@ def search(
             proxy_providers.append(Basic(**config.proxy_providers["basic"]))
         if config.proxy_providers.get("nordvpn"):
             proxy_providers.append(NordVPN(**config.proxy_providers["nordvpn"]))
-        if get_binary_path("hola-proxy"):
+        if binaries.HolaProxy:
             proxy_providers.append(Hola())
         for proxy_provider in proxy_providers:
             log.info(f"Loaded {proxy_provider.__class__.__name__}: {proxy_provider}")

devine/commands/serve.py

@@ -2,9 +2,9 @@ import subprocess
 import click
+from devine.core import binaries
 from devine.core.config import config
 from devine.core.constants import context_settings
-from devine.core.utilities import get_binary_path
 @click.command(
@@ -29,11 +29,10 @@ def serve(host: str, port: int, caddy: bool) -> None:
     from pywidevine import serve
     if caddy:
-        executable = get_binary_path("caddy")
-        if not executable:
+        if not binaries.Caddy:
             raise click.ClickException("Caddy executable \"caddy\" not found but is required for --caddy.")
         caddy_p = subprocess.Popen([
-            executable,
+            binaries.Caddy,
             "run",
             "--config", str(config.directories.user_configs / "Caddyfile")
        ])

View File

@@ -4,8 +4,8 @@ from pathlib import Path
 import click
 from pymediainfo import MediaInfo
+from devine.core import binaries
 from devine.core.constants import context_settings
-from devine.core.utilities import get_binary_path
 
 @click.group(short_help="Various helper scripts and programs.", context_settings=context_settings)
@@ -38,8 +38,7 @@ def crop(path: Path, aspect: str, letter: bool, offset: int, preview: bool) -> N
     as it may go from being 2px away from a perfect crop, to 20px over-cropping
     again due to sub-sampled chroma.
     """
-    executable = get_binary_path("ffmpeg")
-    if not executable:
+    if not binaries.FFMPEG:
         raise click.ClickException("FFmpeg executable \"ffmpeg\" not found but is required.")
 
     if path.is_dir():
@@ -87,7 +86,7 @@ def crop(path: Path, aspect: str, letter: bool, offset: int, preview: bool) -> N
         ]))))]
 
     ffmpeg_call = subprocess.Popen([
-        executable, "-y",
+        binaries.FFMPEG, "-y",
         "-i", str(video_path),
         "-map", "0:v:0",
         "-c", "copy",
@@ -95,7 +94,7 @@ def crop(path: Path, aspect: str, letter: bool, offset: int, preview: bool) -> N
     ] + out_path, stdout=subprocess.PIPE)
     try:
         if preview:
-            previewer = get_binary_path("mpv", "ffplay")
+            previewer = binaries.MPV or binaries.FFPlay
            if not previewer:
                raise click.ClickException("MPV/FFplay executables weren't found but are required for previewing.")
            subprocess.Popen((previewer, "-"), stdin=ffmpeg_call.stdout)
@@ -120,8 +119,7 @@ def range_(path: Path, full: bool, preview: bool) -> None:
     then you're video may have the range set to the wrong value. Flip its range to the
     opposite value and see if that fixes it.
     """
-    executable = get_binary_path("ffmpeg")
-    if not executable:
+    if not binaries.FFMPEG:
         raise click.ClickException("FFmpeg executable \"ffmpeg\" not found but is required.")
 
     if path.is_dir():
@@ -157,7 +155,7 @@ def range_(path: Path, full: bool, preview: bool) -> None:
         ]))))]
 
     ffmpeg_call = subprocess.Popen([
-        executable, "-y",
+        binaries.FFMPEG, "-y",
         "-i", str(video_path),
         "-map", "0:v:0",
         "-c", "copy",
@@ -165,7 +163,7 @@ def range_(path: Path, full: bool, preview: bool) -> None:
     ] + out_path, stdout=subprocess.PIPE)
     try:
         if preview:
-            previewer = get_binary_path("mpv", "ffplay")
+            previewer = binaries.MPV or binaries.FFPlay
            if not previewer:
                raise click.ClickException("MPV/FFplay executables weren't found but are required for previewing.")
            subprocess.Popen((previewer, "-"), stdin=ffmpeg_call.stdout)
@@ -188,8 +186,7 @@ def test(path: Path, map_: str) -> None:
     You may choose specific streams using the -m/--map parameter. E.g.,
     '0:v:0' to test the first video stream, or '0:a' to test all audio streams.
     """
-    executable = get_binary_path("ffmpeg")
-    if not executable:
+    if not binaries.FFMPEG:
         raise click.ClickException("FFmpeg executable \"ffmpeg\" not found but is required.")
 
     if path.is_dir():
@@ -199,7 +196,7 @@ def test(path: Path, map_: str) -> None:
     for video_path in paths:
         print("Starting...")
         p = subprocess.Popen([
-            executable, "-hide_banner",
+            binaries.FFMPEG, "-hide_banner",
            "-benchmark",
            "-i", str(video_path),
            "-map", map_,


@@ -1 +1 @@
-__version__ = "3.3.1"
+__version__ = "3.3.3"

devine/core/binaries.py (new file, 46 lines)

@@ -0,0 +1,46 @@
+import shutil
+import sys
+from pathlib import Path
+from typing import Optional
+
+__shaka_platform = {
+    "win32": "win",
+    "darwin": "osx"
+}.get(sys.platform, sys.platform)
+
+
+def find(*names: str) -> Optional[Path]:
+    """Find the path of the first found binary name."""
+    for name in names:
+        path = shutil.which(name)
+        if path:
+            return Path(path)
+    return None
+
+
+FFMPEG = find("ffmpeg")
+FFProbe = find("ffprobe")
+FFPlay = find("ffplay")
+SubtitleEdit = find("SubtitleEdit")
+ShakaPackager = find(
+    "shaka-packager",
+    "packager",
+    f"packager-{__shaka_platform}",
+    f"packager-{__shaka_platform}-arm64",
+    f"packager-{__shaka_platform}-x64"
+)
+Aria2 = find("aria2c", "aria2")
+CCExtractor = find(
+    "ccextractor",
+    "ccextractorwin",
+    "ccextractorwinfull"
+)
+HolaProxy = find("hola-proxy")
+MPV = find("mpv")
+Caddy = find("caddy")
+
+__all__ = (
+    "FFMPEG", "FFProbe", "FFPlay", "SubtitleEdit", "ShakaPackager",
+    "Aria2", "CCExtractor", "HolaProxy", "MPV", "Caddy", "find"
+)
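The new `binaries` module resolves each external tool once at import time via `shutil.which`, so callers can simply test the module attribute for truthiness. A minimal standalone sketch of the same lookup pattern (not importing devine itself):

```python
import shutil
from pathlib import Path
from typing import Optional


def find(*names: str) -> Optional[Path]:
    """Return the path of the first binary name found on PATH, else None."""
    for name in names:
        path = shutil.which(name)
        if path:
            return Path(path)
    return None


# e.g. prefer mpv, fall back to ffplay; either or both may be missing
PREVIEWER = find("mpv", "ffplay")
```

Because `find` returns `None` rather than raising, each call site decides whether a missing binary is fatal, which is what the refactored `if not binaries.FFMPEG:` checks do.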


@@ -77,29 +77,27 @@ class Config:
         return cls(**yaml.safe_load(path.read_text(encoding="utf8")) or {})
 
+# noinspection PyProtectedMember
+POSSIBLE_CONFIG_PATHS = (
+    # The Devine Namespace Folder (e.g., %appdata%/Python/Python311/site-packages/devine)
+    Config._Directories.namespace_dir / Config._Filenames.root_config,
+    # The Parent Folder to the Devine Namespace Folder (e.g., %appdata%/Python/Python311/site-packages)
+    Config._Directories.namespace_dir.parent / Config._Filenames.root_config,
+    # The AppDirs User Config Folder (e.g., %localappdata%/devine)
+    Config._Directories.user_configs / Config._Filenames.root_config
+)
+
 
 def get_config_path() -> Optional[Path]:
     """
-    Get Path to Config from various locations.
-
-    Looks for a config file in the following folders in order:
-
-    1. The Devine Namespace Folder (e.g., %appdata%/Python/Python311/site-packages/devine)
-    2. The Parent Folder to the Devine Namespace Folder (e.g., %appdata%/Python/Python311/site-packages)
-    3. The AppDirs User Config Folder (e.g., %localappdata%/devine)
+    Get Path to Config from any one of the possible locations.
 
     Returns None if no config file could be found.
     """
-    # noinspection PyProtectedMember
-    path = Config._Directories.namespace_dir / Config._Filenames.root_config
-    if not path.exists():
-        # noinspection PyProtectedMember
-        path = Config._Directories.namespace_dir.parent / Config._Filenames.root_config
-    if not path.exists():
-        # noinspection PyProtectedMember
-        path = Config._Directories.user_configs / Config._Filenames.root_config
-    if not path.exists():
-        path = None
-    return path
+    for path in POSSIBLE_CONFIG_PATHS:
+        if path.exists():
+            return path
+    return None
 
 
 config_path = get_config_path()
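The refactor above replaces three nested not-exists checks with a first-match scan over a tuple of candidate paths. The generic pattern, sketched standalone:

```python
from pathlib import Path
from typing import Optional


def first_existing(*candidates: Path) -> Optional[Path]:
    """Return the first candidate path that exists on disk, else None."""
    for path in candidates:
        if path.exists():
            return path
    return None


# precedence is simply the order of the arguments
missing = first_existing(Path("/no/such/file"), Path("/also/missing"))
```

Keeping the candidate list as module-level data (like `POSSIBLE_CONFIG_PATHS`) means the search order is documented in one place instead of being implied by control flow.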


@@ -15,10 +15,11 @@ from requests.cookies import cookiejar_from_dict, get_cookie_header
 from rich import filesize
 from rich.text import Text
 
+from devine.core import binaries
 from devine.core.config import config
 from devine.core.console import console
 from devine.core.constants import DOWNLOAD_CANCELLED
-from devine.core.utilities import get_binary_path, get_extension, get_free_port
+from devine.core.utilities import get_extension, get_free_port
 
 
 def rpc(caller: Callable, secret: str, method: str, params: Optional[list[Any]] = None) -> Any:
@@ -87,8 +88,7 @@ def download(
     if not isinstance(urls, list):
         urls = [urls]
 
-    executable = get_binary_path("aria2c", "aria2")
-    if not executable:
+    if not binaries.Aria2:
         raise EnvironmentError("Aria2c executable not found...")
 
     if proxy and not proxy.lower().startswith("http://"):
@@ -186,7 +186,7 @@ def download(
     try:
         p = subprocess.Popen(
             [
-                executable,
+                binaries.Aria2,
                 *arguments
             ],
             stdin=subprocess.PIPE,


@@ -6,7 +6,6 @@ from http.cookiejar import CookieJar
 from pathlib import Path
 from typing import Any, Generator, MutableMapping, Optional, Union
 
-from curl_cffi import CurlOpt
 from curl_cffi.requests import Session
 from rich import filesize
 
@@ -18,7 +17,7 @@ MAX_ATTEMPTS = 5
 RETRY_WAIT = 2
 CHUNK_SIZE = 1024
 PROGRESS_WINDOW = 5
-BROWSER = config.curl_impersonate.get("browser", "chrome120")
+BROWSER = config.curl_impersonate.get("browser", "chrome124")
 
 
 def download(
@@ -53,11 +52,6 @@ def download(
     for one-time request changes like a header, cookie, or proxy. For example,
     to request Byte-ranges use e.g., `headers={"Range": "bytes=0-128"}`.
     """
-    # https://github.com/yifeikong/curl_cffi/issues/6#issuecomment-2028518677
-    # must be applied here since the `session.curl` is thread-localized
-    # noinspection PyProtectedMember
-    session.curl.setopt(CurlOpt.PROXY_CAINFO, session.curl._cacert)
-
     save_dir = save_path.parent
     control_file = save_path.with_name(f"{save_path.name}.!dev")


@@ -7,7 +7,7 @@ from typing import Optional, Union
 from urllib.parse import urljoin
 
 from Cryptodome.Cipher import AES
-from Cryptodome.Util.Padding import pad, unpad
+from Cryptodome.Util.Padding import unpad
 from m3u8.model import Key
 from requests import Session
 
@@ -43,7 +43,7 @@ class ClearKey:
         decrypted = AES. \
             new(self.key, AES.MODE_CBC, self.iv). \
-            decrypt(pad(path.read_bytes(), AES.block_size))
+            decrypt(path.read_bytes())
 
         try:
             decrypted = unpad(decrypted, AES.block_size)
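The reason this fix pairs with the "decrypt segments separately" change: each AES-128-CBC HLS segment carries its own PKCS#7 padding, so padding must be stripped per segment, never once over merged data. The sketch below isolates just the padding logic (a pure-Python PKCS#7 implementation standing in for the cipher step) to show why merging first breaks:

```python
def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
    # always appends 1..block padding bytes, each equal to the pad length
    n = block - (len(data) % block)
    return data + bytes([n]) * n


def pkcs7_unpad(data: bytes, block: int = 16) -> bytes:
    n = data[-1]
    if not 1 <= n <= block or data[-n:] != bytes([n]) * n:
        raise ValueError("bad PKCS#7 padding")
    return data[:-n]


# two "segments", each padded independently (as an HLS packager does per segment)
seg1 = pkcs7_pad(b"segment one")
seg2 = pkcs7_pad(b"segment two!")

# unpadding each segment separately round-trips correctly
assert pkcs7_unpad(seg1) + pkcs7_unpad(seg2) == b"segment onesegment two!"

# merging first and unpadding once only strips the LAST segment's padding;
# the first segment's padding bytes survive as bogus mid-file data
assert pkcs7_unpad(seg1 + seg2) != b"segment onesegment two!"
```

This mirrors the commit message: a merged file has only a 1-in-16 chance of even passing the final-block padding check, and even then the interior padding is kept as data.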


@@ -3,7 +3,6 @@ from __future__ import annotations
 import base64
 import shutil
 import subprocess
-import sys
 import textwrap
 from pathlib import Path
 from typing import Any, Callable, Optional, Union
@@ -17,10 +16,11 @@ from pywidevine.pssh import PSSH
 from requests import Session
 from rich.text import Text
 
+from devine.core import binaries
 from devine.core.config import config
 from devine.core.console import console
 from devine.core.constants import AnyTrack
-from devine.core.utilities import get_binary_path, get_boxes
+from devine.core.utilities import get_boxes
 from devine.core.utils.subprocess import ffprobe
 
@@ -223,9 +223,7 @@ class Widevine:
         if not self.content_keys:
             raise ValueError("Cannot decrypt a Track without any Content Keys...")
 
-        platform = {"win32": "win", "darwin": "osx"}.get(sys.platform, sys.platform)
-        executable = get_binary_path("shaka-packager", "packager", f"packager-{platform}", f"packager-{platform}-x64")
-        if not executable:
+        if not binaries.ShakaPackager:
             raise EnvironmentError("Shaka Packager executable not found but is required.")
         if not path or not path.exists():
             raise ValueError("Tried to decrypt a file that does not exist.")
@@ -252,7 +250,7 @@ class Widevine:
         ]
 
         p = subprocess.Popen(
-            [executable, *arguments],
+            [binaries.ShakaPackager, *arguments],
             stdout=subprocess.DEVNULL,
             stderr=subprocess.PIPE,
             universal_newlines=True


@@ -285,12 +285,16 @@ class DASH:
         segment_base = adaptation_set.find("SegmentBase")
 
         segments: list[tuple[str, Optional[str]]] = []
+        segment_timescale: float = 0
+        segment_durations: list[int] = []
         track_kid: Optional[UUID] = None
 
         if segment_template is not None:
             segment_template = copy(segment_template)
             start_number = int(segment_template.get("startNumber") or 1)
+            end_number = int(segment_template.get("endNumber") or 0) or None
             segment_timeline = segment_template.find("SegmentTimeline")
+            segment_timescale = float(segment_template.get("timescale") or 1)
 
             for item in ("initialization", "media"):
                 value = segment_template.get(item)
@@ -318,17 +322,18 @@ class DASH:
                 track_kid = track.get_key_id(init_data)
 
             if segment_timeline is not None:
-                seg_time_list = []
                 current_time = 0
                 for s in segment_timeline.findall("S"):
                     if s.get("t"):
                         current_time = int(s.get("t"))
                     for _ in range(1 + (int(s.get("r") or 0))):
-                        seg_time_list.append(current_time)
+                        segment_durations.append(current_time)
                         current_time += int(s.get("d"))
-                seg_num_list = list(range(start_number, len(seg_time_list) + start_number))
 
-                for t, n in zip(seg_time_list, seg_num_list):
+                if not end_number:
+                    end_number = len(segment_durations)
+
+                for t, n in zip(segment_durations, range(start_number, end_number + 1)):
                     segments.append((
                         DASH.replace_fields(
                             segment_template.get("media"),
@@ -342,11 +347,12 @@ class DASH:
                 if not period_duration:
                     raise ValueError("Duration of the Period was unable to be determined.")
                 period_duration = DASH.pt_to_sec(period_duration)
-                segment_duration = float(segment_template.get("duration"))
-                segment_timescale = float(segment_template.get("timescale") or 1)
-                total_segments = math.ceil(period_duration / (segment_duration / segment_timescale))
+                segment_duration = float(segment_template.get("duration")) or 1
 
-                for s in range(start_number, start_number + total_segments):
+                if not end_number:
+                    end_number = math.ceil(period_duration / (segment_duration / segment_timescale))
+
+                for s in range(start_number, end_number + 1):
                     segments.append((
                         DASH.replace_fields(
                             segment_template.get("media"),
@@ -356,7 +362,11 @@ class DASH:
                             Time=s
                         ), None
                     ))
+                    # TODO: Should we floor/ceil/round, or is int() ok?
+                    segment_durations.append(int(segment_duration))
+
         elif segment_list is not None:
+            segment_timescale = float(segment_list.get("timescale") or 1)
+
             init_data = None
             initialization = segment_list.find("Initialization")
             if initialization is not None:
@@ -388,6 +398,7 @@ class DASH:
                     media_url,
                     segment_url.get("mediaRange")
                 ))
+                segment_durations.append(int(segment_url.get("duration") or 1))
 
         elif segment_base is not None:
             media_range = None
             init_data = None
@@ -420,6 +431,10 @@ class DASH:
                 log.debug(track.url)
                 sys.exit(1)
 
+        # TODO: Should we floor/ceil/round, or is int() ok?
+        track.data["dash"]["timescale"] = int(segment_timescale)
+        track.data["dash"]["segment_durations"] = segment_durations
+
         if not track.drm and isinstance(track, (Video, Audio)):
             try:
                 track.drm = [Widevine.from_init_data(init_data)]
@@ -457,6 +472,7 @@ class DASH:
         if downloader.__name__ == "aria2c" and any(bytes_range is not None for url, bytes_range in segments):
             # aria2(c) is shit and doesn't support the Range header, fallback to the requests downloader
             downloader = requests_downloader
+            log.warning("Falling back to the requests downloader as aria2(c) doesn't support the Range header")
 
         for status_update in downloader(
             urls=[
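The `SegmentTimeline` loop in the hunk above expands `<S t="..." d="..." r="...">` elements into one start time per segment: `t` (re)sets the clock, `d` advances it, and `r` repeats the entry that many extra times. A standalone sketch of that expansion, with plain dicts standing in for parsed XML attributes:

```python
def expand_timeline(s_elements: list[dict]) -> list[int]:
    """Expand DASH SegmentTimeline <S> entries into per-segment start times (in timescale ticks)."""
    start_times = []
    current_time = 0
    for s in s_elements:
        if s.get("t") is not None:
            current_time = int(s["t"])  # explicit presentation time resets the clock
        for _ in range(1 + int(s.get("r", 0))):  # r = number of additional repeats
            start_times.append(current_time)
            current_time += int(s["d"])
    return start_times


# one entry repeated twice (3 segments of 5000 ticks), then one 4000-tick segment
times = expand_timeline([{"t": 0, "d": 5000, "r": 2}, {"d": 4000}])
assert times == [0, 5000, 10000, 15000]
```

These tick values are what get substituted into the `$Time$` field of the media template; dividing by the `timescale` attribute converts them to seconds.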


@@ -19,12 +19,13 @@ from pywidevine.cdm import Cdm as WidevineCdm
 from pywidevine.pssh import PSSH
 from requests import Session
 
+from devine.core import binaries
 from devine.core.constants import DOWNLOAD_CANCELLED, DOWNLOAD_LICENCE_ONLY, AnyTrack
 from devine.core.downloaders import requests as requests_downloader
 from devine.core.drm import DRM_T, ClearKey, Widevine
 from devine.core.events import events
 from devine.core.tracks import Audio, Subtitle, Tracks, Video
-from devine.core.utilities import get_binary_path, get_extension, is_close_match, try_ensure_utf8
+from devine.core.utilities import get_extension, is_close_match, try_ensure_utf8
 
 
 class HLS:
@@ -101,7 +102,8 @@ class HLS:
             try:
                 # TODO: Any better way to figure out the primary track type?
-                Video.Codec.from_codecs(playlist.stream_info.codecs)
+                if playlist.stream_info.codecs:
+                    Video.Codec.from_codecs(playlist.stream_info.codecs)
             except ValueError:
                 primary_track_type = Audio
             else:
@@ -110,7 +112,10 @@ class HLS:
             tracks.add(primary_track_type(
                 id_=hex(crc32(str(playlist).encode()))[2:],
                 url=urljoin(playlist.base_uri, playlist.uri),
-                codec=primary_track_type.Codec.from_codecs(playlist.stream_info.codecs),
+                codec=(
+                    primary_track_type.Codec.from_codecs(playlist.stream_info.codecs)
+                    if playlist.stream_info.codecs else None
+                ),
                 language=language,  # HLS manifests do not seem to have language info
                 is_original_lang=True,  # TODO: All we can do is assume Yes
                 bitrate=playlist.stream_info.average_bandwidth or playlist.stream_info.bandwidth,
@@ -125,10 +130,10 @@ class HLS:
                 **(dict(
                     range_=Video.Range.DV if any(
                         codec.split(".")[0] in ("dva1", "dvav", "dvhe", "dvh1")
-                        for codec in playlist.stream_info.codecs.lower().split(",")
+                        for codec in (playlist.stream_info.codecs or "").lower().split(",")
                     ) else Video.Range.from_m3u_range_tag(playlist.stream_info.video_range),
-                    width=playlist.stream_info.resolution[0],
-                    height=playlist.stream_info.resolution[1],
+                    width=playlist.stream_info.resolution[0] if playlist.stream_info.resolution else None,
+                    height=playlist.stream_info.resolution[1] if playlist.stream_info.resolution else None,
                    fps=playlist.stream_info.frame_rate
                ) if primary_track_type is Video else {})
            ))
@@ -249,17 +254,24 @@ class HLS:
         progress(total=total_segments)
 
         downloader = track.downloader
+        if (
+            downloader.__name__ == "aria2c" and
+            any(x.byterange for x in master.segments if x not in unwanted_segments)
+        ):
+            downloader = requests_downloader
+            log.warning("Falling back to the requests downloader as aria2(c) doesn't support the Range header")
 
         urls: list[dict[str, Any]] = []
+        segment_durations: list[int] = []
+
         range_offset = 0
         for segment in master.segments:
             if segment in unwanted_segments:
                 continue
 
+            segment_durations.append(int(segment.duration))
+
             if segment.byterange:
-                if downloader.__name__ == "aria2c":
-                    # aria2(c) is shit and doesn't support the Range header, fallback to the requests downloader
-                    downloader = requests_downloader
                 byte_range = HLS.calculate_byte_range(segment.byterange, range_offset)
                 range_offset = byte_range.split("-")[0]
             else:
@@ -272,6 +284,8 @@ class HLS:
                 } if byte_range else {}
             })
 
+        track.data["hls"]["segment_durations"] = segment_durations
+
         segment_save_dir = save_dir / "segments"
 
         for status_update in downloader(
@@ -373,15 +387,27 @@ class HLS:
                 elif len(files) != range_len:
                     raise ValueError(f"Missing {range_len - len(files)} segment files for {segment_range}...")
 
-                merge(
-                    to=merged_path,
-                    via=files,
-                    delete=True,
-                    include_map_data=True
-                )
-
-                drm.decrypt(merged_path)
-                merged_path.rename(decrypted_path)
+                if isinstance(drm, Widevine):
+                    # with widevine we can merge all segments and decrypt once
+                    merge(
+                        to=merged_path,
+                        via=files,
+                        delete=True,
+                        include_map_data=True
+                    )
+                    drm.decrypt(merged_path)
+                    merged_path.rename(decrypted_path)
+                else:
+                    # with other drm we must decrypt separately and then merge them
+                    # for aes this is because each segment likely has 16-byte padding
+                    for file in files:
+                        drm.decrypt(file)
+                    merge(
+                        to=merged_path,
+                        via=files,
+                        delete=True,
+                        include_map_data=True
+                    )
 
                 events.emit(
                     events.Types.TRACK_DECRYPTED,
@@ -552,8 +578,7 @@ class HLS:
 
         Returns the file size of the merged file.
         """
-        ffmpeg = get_binary_path("ffmpeg")
-        if not ffmpeg:
+        if not binaries.FFMPEG:
             raise EnvironmentError("FFmpeg executable was not found but is required to merge HLS segments.")
 
         demuxer_file = segments[0].parent / "ffmpeg_concat_demuxer.txt"
@@ -563,7 +588,7 @@ class HLS:
         ]))
 
         subprocess.check_call([
-            ffmpeg, "-hide_banner",
+            binaries.FFMPEG, "-hide_banner",
             "-loglevel", "panic",
             "-f", "concat",
             "-safe", "0",


@@ -35,7 +35,7 @@ class Basic(Proxy):
         servers: Optional[Union[str, list[str]]] = self.countries.get(country_code)
         if not servers:
-            raise ValueError(f"There's no proxies configured for \"{country_code}\"...")
+            return None
 
         if isinstance(servers, str):
             proxy = servers


@@ -3,8 +3,8 @@ import re
 import subprocess
 from typing import Optional
 
+from devine.core import binaries
 from devine.core.proxies.proxy import Proxy
-from devine.core.utilities import get_binary_path
 
 
 class Hola(Proxy):
@@ -13,7 +13,7 @@ class Hola(Proxy):
         Proxy Service using Hola's direct connections via the hola-proxy project.
         https://github.com/Snawoot/hola-proxy
         """
-        self.binary = get_binary_path("hola-proxy")
+        self.binary = binaries.HolaProxy
         if not self.binary:
             raise EnvironmentError("hola-proxy executable not found but is required for the Hola proxy provider.")


@@ -37,7 +37,7 @@ class Attachment:
             mime_type = {
                 ".ttf": "application/x-truetype-font",
                 ".otf": "application/vnd.ms-opentype"
-            }.get(path.suffix, mimetypes.guess_type(path)[0])
+            }.get(path.suffix.lower(), mimetypes.guess_type(path)[0])
             if not mime_type:
                 raise ValueError("The attachment mime-type could not be automatically detected.")


@@ -64,18 +64,80 @@ class Audio(Track):
                 return Audio.Codec.OGG
         raise ValueError(f"The Content Profile '{profile}' is not a supported Audio Codec")
 
-    def __init__(self, *args: Any, codec: Audio.Codec, bitrate: Union[str, int, float],
-                 channels: Optional[Union[str, int, float]] = None, joc: int = 0, descriptive: bool = False,
-                 **kwargs: Any):
+    def __init__(
+        self,
+        *args: Any,
+        codec: Optional[Audio.Codec] = None,
+        bitrate: Optional[Union[str, int, float]] = None,
+        channels: Optional[Union[str, int, float]] = None,
+        joc: Optional[int] = None,
+        descriptive: Union[bool, int] = False,
+        **kwargs: Any
+    ):
+        """
+        Create a new Audio track object.
+
+        Parameters:
+            codec: An Audio.Codec enum representing the audio codec.
+                If not specified, MediaInfo will be used to retrieve the codec
+                once the track has been downloaded.
+            bitrate: A number or float representing the average bandwidth in bytes/s.
+                Float values are rounded up to the nearest integer.
+            channels: A number, float, or string representing the number of audio channels.
+                Strings may represent numbers or floats. Expanded layouts like 7.1.1 is
+                not supported. All numbers and strings will be cast to float.
+            joc: The number of Joint-Object-Coding Channels/Objects in the audio stream.
+            descriptive: Mark this audio as being descriptive audio for the blind.
+
+        Note: If codec, bitrate, channels, or joc is not specified some checks may be
+        skipped or assume a value. Specifying as much information as possible is highly
+        recommended.
+        """
         super().__init__(*args, **kwargs)
+
+        if not isinstance(codec, (Audio.Codec, type(None))):
+            raise TypeError(f"Expected codec to be a {Audio.Codec}, not {codec!r}")
+        if not isinstance(bitrate, (str, int, float, type(None))):
+            raise TypeError(f"Expected bitrate to be a {str}, {int}, or {float}, not {bitrate!r}")
+        if not isinstance(channels, (str, int, float, type(None))):
+            raise TypeError(f"Expected channels to be a {str}, {int}, or {float}, not {channels!r}")
+        if not isinstance(joc, (int, type(None))):
+            raise TypeError(f"Expected joc to be a {int}, not {joc!r}")
+        if (
+            not isinstance(descriptive, (bool, int)) or
+            (isinstance(descriptive, int) and descriptive not in (0, 1))
+        ):
+            raise TypeError(f"Expected descriptive to be a {bool} or bool-like {int}, not {descriptive!r}")
+
-        # required
         self.codec = codec
-        self.bitrate = int(math.ceil(float(bitrate))) if bitrate else None
-        self.channels = self.parse_channels(channels) if channels else None
-        # optional
+
+        try:
+            self.bitrate = int(math.ceil(float(bitrate))) if bitrate else None
+        except (ValueError, TypeError) as e:
+            raise ValueError(f"Expected bitrate to be a number or float, {e}")
+
+        try:
+            self.channels = self.parse_channels(channels) if channels else None
+        except (ValueError, NotImplementedError) as e:
+            raise ValueError(f"Expected channels to be a number, float, or a string, {e}")
+
         self.joc = joc
         self.descriptive = bool(descriptive)
 
+    def __str__(self) -> str:
+        return " | ".join(filter(bool, [
+            "AUD",
+            f"[{self.codec.value}]" if self.codec else None,
+            str(self.language),
+            ", ".join(filter(bool, [
+                str(self.channels) if self.channels else None,
+                f"JOC {self.joc}" if self.joc else None,
+            ])),
+            f"{self.bitrate // 1000} kb/s" if self.bitrate else None,
+            self.get_track_name(),
+            self.edition
+        ]))
+
     @staticmethod
     def parse_channels(channels: Union[str, int, float]) -> float:
         """
@@ -109,16 +171,5 @@ class Audio(Track):
                 track_name += flag
         return track_name or None
 
-    def __str__(self) -> str:
-        return " | ".join(filter(bool, [
-            "AUD",
-            f"[{self.codec.value}]",
-            str(self.channels or "?") + (f" (JOC {self.joc})" if self.joc else ""),
-            f"{self.bitrate // 1000 if self.bitrate else '?'} kb/s",
-            str(self.language),
-            self.get_track_name(),
-            self.edition
-        ]))
-
 
 __all__ = ("Audio",)
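The docstring above says `parse_channels` casts numeric and string channel counts to float, with expanded layouts like 7.1.1 unsupported. A hypothetical sketch of such a helper, illustrating the contract rather than devine's actual implementation:

```python
from typing import Union


def parse_channels(channels: Union[str, int, float]) -> float:
    """Cast a channel count like 2, "2", 5.1, or "5.1" to float; reject expanded layouts."""
    value = str(channels)
    if value.count(".") > 1:
        # expanded layouts like "7.1.1" cannot be represented as a single float
        raise NotImplementedError(f"Expanded channel layout {value!r} is not supported")
    return float(value)
```

Wrapping the call in `try`/`except` and re-raising as `ValueError`, as the new `__init__` does, keeps the constructor's error surface uniform regardless of which input form was passed.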


@ -7,7 +7,7 @@ from enum import Enum
from functools import partial from functools import partial
from io import BytesIO from io import BytesIO
from pathlib import Path from pathlib import Path
from typing import Any, Callable, Iterable, Optional from typing import Any, Callable, Iterable, Optional, Union
import pycaption import pycaption
import requests import requests
@ -17,8 +17,10 @@ from pycaption.geometry import Layout
from pymp4.parser import MP4 from pymp4.parser import MP4
from subtitle_filter import Subtitles from subtitle_filter import Subtitles
from devine.core import binaries
from devine.core.tracks.track import Track from devine.core.tracks.track import Track
from devine.core.utilities import get_binary_path, try_ensure_utf8 from devine.core.utilities import try_ensure_utf8
from devine.core.utils.webvtt import merge_segmented_webvtt
class Subtitle(Track): class Subtitle(Track):
@ -74,22 +76,22 @@ class Subtitle(Track):
return Subtitle.Codec.TimedTextMarkupLang return Subtitle.Codec.TimedTextMarkupLang
raise ValueError(f"The Content Profile '{profile}' is not a supported Subtitle Codec") raise ValueError(f"The Content Profile '{profile}' is not a supported Subtitle Codec")
def __init__(self, *args: Any, codec: Subtitle.Codec, cc: bool = False, sdh: bool = False, forced: bool = False, def __init__(
**kwargs: Any): self,
*args: Any,
codec: Optional[Subtitle.Codec] = None,
cc: bool = False,
sdh: bool = False,
forced: bool = False,
**kwargs: Any
):
""" """
Information on Subtitle Types: Create a new Subtitle track object.
https://bit.ly/2Oe4fLC (3PlayMedia Blog on SUB vs CC vs SDH).
However, I wouldn't pay much attention to the claims about SDH needing to
be in the original source language. It's logically not true.
CC == Closed Captions. Source: Basically every site.
SDH = Subtitles for the Deaf or Hard-of-Hearing. Source: Basically every site.
HOH = Exact same as SDH. Is a term used in the UK. Source: https://bit.ly/2PGJatz (ICO UK)
More in-depth information, examples, and stuff to look for can be found in the Parameter
explanation list below.
Parameters: Parameters:
codec: A Subtitle.Codec enum representing the subtitle format.
If not specified, MediaInfo will be used to retrieve the format
once the track has been downloaded.
cc: Closed Caption. cc: Closed Caption.
- Intended as if you couldn't hear the audio at all. - Intended as if you couldn't hear the audio at all.
- Can have Sound as well as Dialogue, but doesn't have to. - Can have Sound as well as Dialogue, but doesn't have to.
@ -125,20 +127,57 @@ class Subtitle(Track):
no other way to reliably work with Forced subtitles where multiple no other way to reliably work with Forced subtitles where multiple
forced subtitles may be in the output file. Just know what to expect forced subtitles may be in the output file. Just know what to expect
with "forced" subtitles. with "forced" subtitles.
Note: If codec is not specified some checks may be skipped or assume a value.
Specifying as much information as possible is highly recommended.
Information on Subtitle Types:
https://bit.ly/2Oe4fLC (3PlayMedia Blog on SUB vs CC vs SDH).
However, I wouldn't pay much attention to the claims about SDH needing to
be in the original source language. It's logically not true.
CC == Closed Captions. Source: Basically every site.
SDH = Subtitles for the Deaf or Hard-of-Hearing. Source: Basically every site.
HOH = Exact same as SDH. Is a term used in the UK. Source: https://bit.ly/2PGJatz (ICO UK)
More in-depth information, examples, and stuff to look for can be found in the Parameter
explanation list above.
""" """
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
if not isinstance(codec, (Subtitle.Codec, type(None))):
raise TypeError(f"Expected codec to be a {Subtitle.Codec}, not {codec!r}")
if not isinstance(cc, (bool, int)) or (isinstance(cc, int) and cc not in (0, 1)):
raise TypeError(f"Expected cc to be a {bool} or bool-like {int}, not {cc!r}")
if not isinstance(sdh, (bool, int)) or (isinstance(sdh, int) and sdh not in (0, 1)):
raise TypeError(f"Expected sdh to be a {bool} or bool-like {int}, not {sdh!r}")
if not isinstance(forced, (bool, int)) or (isinstance(forced, int) and forced not in (0, 1)):
raise TypeError(f"Expected forced to be a {bool} or bool-like {int}, not {forced!r}")
self.codec = codec
self.cc = bool(cc)
self.sdh = bool(sdh)
self.forced = bool(forced)
if self.cc and self.sdh:
raise ValueError("A text track cannot be both CC and SDH.")
if self.forced and (self.cc or self.sdh):
raise ValueError("A text track cannot be CC/SDH as well as Forced.")
# TODO: Migrate to new event observer system
# Called after Track has been converted to another format
self.OnConverted: Optional[Callable[[Subtitle.Codec], None]] = None
def __str__(self) -> str:
return " | ".join(filter(bool, [
"SUB",
f"[{self.codec.value}]" if self.codec else None,
str(self.language),
self.get_track_name()
]))
def get_track_name(self) -> Optional[str]:
"""Return the base Track Name."""
track_name = super().get_track_name() or ""
@@ -164,6 +203,26 @@ class Subtitle(Track):
self.convert(Subtitle.Codec.TimedTextMarkupLang)
elif self.codec == Subtitle.Codec.fVTT:
self.convert(Subtitle.Codec.WebVTT)
elif self.codec == Subtitle.Codec.WebVTT:
text = self.path.read_text("utf8")
if self.descriptor == Track.Descriptor.DASH:
if len(self.data["dash"]["segment_durations"]) > 1:
text = merge_segmented_webvtt(
text,
segment_durations=self.data["dash"]["segment_durations"],
timescale=self.data["dash"]["timescale"]
)
elif self.descriptor == Track.Descriptor.HLS:
if len(self.data["hls"]["segment_durations"]) > 1:
text = merge_segmented_webvtt(
text,
segment_durations=self.data["hls"]["segment_durations"],
timescale=1 # ?
)
caption_set = pycaption.WebVTTReader().read(text)
Subtitle.merge_same_cues(caption_set)
subtitle_text = pycaption.WebVTTWriter().write(caption_set)
self.path.write_text(subtitle_text, encoding="utf8")
def convert(self, codec: Subtitle.Codec) -> Path:
"""
@@ -196,14 +255,13 @@ class Subtitle(Track):
output_path = self.path.with_suffix(f".{codec.value.lower()}")
if binaries.SubtitleEdit and self.codec not in (Subtitle.Codec.fTTML, Subtitle.Codec.fVTT):
sub_edit_format = {
Subtitle.Codec.SubStationAlphav4: "AdvancedSubStationAlpha",
Subtitle.Codec.TimedTextMarkupLang: "TimedText1.0"
}.get(codec, codec.name)
sub_edit_args = [
binaries.SubtitleEdit,
"/Convert", self.path, sub_edit_format,
f"/outputfilename:{output_path.name}",
"/encoding:utf8"
@@ -271,14 +329,7 @@ class Subtitle(Track):
caption_lists[language] = caption_list
caption_set: pycaption.CaptionSet = pycaption.CaptionSet(caption_lists)
elif codec == Subtitle.Codec.WebVTT:
text = Subtitle.space_webvtt_headers(data)
caption_set = pycaption.WebVTTReader().read(text)
else:
raise ValueError(f"Unknown Subtitle format \"{codec}\"...")
@@ -295,6 +346,27 @@ class Subtitle(Track):
return caption_set
@staticmethod
def space_webvtt_headers(data: Union[str, bytes]):
"""
Space out the WEBVTT Headers from Captions.
Segmented VTT when merged may have the WEBVTT headers part of the next caption
as they were not separated far enough from the previous caption and ended up
being considered as caption text rather than the header for the next segment.
"""
if isinstance(data, bytes):
data = try_ensure_utf8(data).decode("utf8")
elif not isinstance(data, str):
raise ValueError(f"Expecting data to be a str, not {data!r}")
text = data.replace("WEBVTT", "\n\nWEBVTT").\
replace("\r", "").\
replace("\n\n\n", "\n \n\n").\
replace("\n\n<", "\n<")
return text
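For intuition, the same header-spacing transform can be reproduced standalone (a sketch of the replace chain above, not the devine class method itself):

```python
def space_webvtt_headers(data: str) -> str:
    # Force a blank line before every WEBVTT header so a parser does not
    # treat a following segment's header as caption text of the previous cue.
    return (
        data.replace("WEBVTT", "\n\nWEBVTT")
        .replace("\r", "")
        .replace("\n\n\n", "\n \n\n")
        .replace("\n\n<", "\n<")
    )

# Two naively concatenated segments: the second header touches the first cue.
merged = "WEBVTT\n\n00:01.000 --> 00:02.000\nHello\nWEBVTT\n\n00:00.500 --> 00:01.500\nWorld"
spaced = space_webvtt_headers(merged)
assert "Hello\nWEBVTT" not in spaced  # the header is now its own block
```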
@staticmethod
def merge_same_cues(caption_set: pycaption.CaptionSet):
"""Merge captions with the same timecodes and text as one in-place."""
@@ -463,8 +535,7 @@ class Subtitle(Track):
if not self.path or not self.path.exists():
raise ValueError("You must download the subtitle track first.")
if binaries.SubtitleEdit:
if self.codec == Subtitle.Codec.SubStationAlphav4:
output_format = "AdvancedSubStationAlpha"
elif self.codec == Subtitle.Codec.TimedTextMarkupLang:
@@ -473,7 +544,7 @@ class Subtitle(Track):
output_format = self.codec.name
subprocess.run(
[
binaries.SubtitleEdit,
"/Convert", self.path, output_format,
"/encoding:utf8",
"/overwrite",
@@ -502,8 +573,7 @@ class Subtitle(Track):
if not self.path or not self.path.exists():
raise ValueError("You must download the subtitle track first.")
if not binaries.SubtitleEdit:
raise EnvironmentError("SubtitleEdit executable not found...")
if self.codec == Subtitle.Codec.SubStationAlphav4:
@@ -515,7 +585,7 @@ class Subtitle(Track):
subprocess.run(
[
binaries.SubtitleEdit,
"/Convert", self.path, output_format,
"/ReverseRtlStartEnd",
"/encoding:utf8",
@@ -525,13 +595,5 @@ class Subtitle(Track):
stdout=subprocess.DEVNULL
)
__all__ = ("Subtitle",)


@@ -4,6 +4,7 @@ import logging
import re
import shutil
import subprocess
from collections import defaultdict
from copy import copy
from enum import Enum
from functools import partial
@@ -15,12 +16,13 @@ from zlib import crc32
from langcodes import Language
from requests import Session
from devine.core import binaries
from devine.core.config import config
from devine.core.constants import DOWNLOAD_CANCELLED, DOWNLOAD_LICENCE_ONLY
from devine.core.downloaders import aria2c, curl_impersonate, requests
from devine.core.drm import DRM_T, Widevine
from devine.core.events import events
from devine.core.utilities import get_boxes, try_ensure_utf8
from devine.core.utils.subprocess import ffprobe
@@ -41,7 +43,7 @@ class Track:
drm: Optional[Iterable[DRM_T]] = None,
edition: Optional[str] = None,
downloader: Optional[Callable] = None,
data: Optional[Union[dict, defaultdict]] = None,
id_: Optional[str] = None,
) -> None:
if not isinstance(url, (str, list)):
@@ -62,8 +64,8 @@ class Track:
raise TypeError(f"Expected edition to be a {str}, not {type(edition)}")
if not isinstance(downloader, (Callable, type(None))):
raise TypeError(f"Expected downloader to be a {Callable}, not {type(downloader)}")
if not isinstance(data, (dict, defaultdict, type(None))):
raise TypeError(f"Expected data to be a {dict} or {defaultdict}, not {type(data)}")
invalid_urls = ", ".join(set(type(x) for x in url if not isinstance(x, str)))
if invalid_urls:
@@ -92,6 +94,7 @@ class Track:
self.drm = drm
self.edition: str = edition
self.downloader = downloader
self._data: defaultdict[Any, Any] = defaultdict(dict)
self.data = data or {}
if self.name is None:
@@ -131,6 +134,42 @@ class Track:
def __eq__(self, other: Any) -> bool:
return isinstance(other, Track) and self.id == other.id
@property
def data(self) -> defaultdict[Any, Any]:
"""
Arbitrary track data dictionary.
A defaultdict is used with a dict as the factory for easier
nested saving and safer exists-checks.
Reserved keys:
- "hls" used by the HLS class.
- playlist: m3u8.model.Playlist - The primary track information.
- media: m3u8.model.Media - The audio/subtitle track information.
- segment_durations: list[int] - A list of each segment's duration.
- "dash" used by the DASH class.
- manifest: lxml.ElementTree - DASH MPD manifest.
- period: lxml.Element - The period of this track.
- adaptation_set: lxml.Element - The adaptation set of this track.
- representation: lxml.Element - The representation of this track.
- timescale: int - The timescale of the track's segments.
- segment_durations: list[int] - A list of each segment's duration.
You should not add, change, or remove any data within reserved keys.
You may use their data but do note that the values of them may change
or be removed at any point.
"""
return self._data
@data.setter
def data(self, value: Union[dict, defaultdict]) -> None:
if not isinstance(value, (dict, defaultdict)):
raise TypeError(f"Expected data to be a {dict} or {defaultdict}, not {type(value)}")
if isinstance(value, dict):
value = defaultdict(dict, **value)
self._data = value
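As a standalone illustration of why `defaultdict(dict)` is used as the backing store (a sketch, not devine's actual Track class):

```python
from collections import defaultdict

data: defaultdict = defaultdict(dict)

# Nested writes need no prior initialisation of data["dash"].
data["dash"]["timescale"] = 90_000
data["dash"]["segment_durations"] = [540_000, 540_000]

# Safer exists-checks: a plain dict would raise KeyError on data["hls"].
assert data["hls"] == {}
assert data["dash"]["timescale"] == 90_000
```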
def download(
self,
session: Session,
@@ -470,8 +509,7 @@ class Track:
if not self.path or not self.path.exists():
raise ValueError("Cannot repackage a Track that has not been downloaded.")
if not binaries.FFMPEG:
raise EnvironmentError("FFmpeg executable \"ffmpeg\" was not found but is required for this call.")
original_path = self.path
@@ -480,7 +518,7 @@ class Track:
def _ffmpeg(extra_args: list[str] = None):
subprocess.run(
[
binaries.FFMPEG, "-hide_banner",
"-loglevel", "error",
"-i", original_path,
*(extra_args or []),
@@ -504,6 +542,7 @@ class Track:
else:
raise
original_path.unlink()
self.path = output_path


@@ -316,7 +316,7 @@ class Tracks:
][:per_language or None])
return selected
def mux(self, title: str, delete: bool = True, progress: Optional[partial] = None) -> tuple[Path, int, list[str]]:
"""
Multiplex all the Tracks into a Matroska Container file.
@@ -410,15 +410,18 @@ class Tracks:
# let potential failures go to caller, caller should handle
try:
errors = []
p = subprocess.Popen([
*cl,
"--output", str(output_path),
"--gui-mode"
], text=True, stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, ""):
if line.startswith("#GUI#error") or line.startswith("#GUI#warning"):
errors.append(line)
if "progress" in line:
progress(total=100, completed=int(line.strip()[14:-1]))
return output_path, p.wait(), errors
finally:
if chapters_path:
# regardless of delete param, we delete as it's a file we made during muxing
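The `--gui-mode` protocol being parsed above is line-oriented: mkvmerge prefixes machine-readable lines with `#GUI#`. A standalone sketch of the same classification (the exact message formats are assumptions inferred from the slicing above):

```python
def parse_gui_mode_line(line: str):
    """Classify one stdout line from `mkvmerge --gui-mode`."""
    line = line.strip()
    if line.startswith("#GUI#error") or line.startswith("#GUI#warning"):
        kind, _, message = line.partition(" ")
        return kind.removeprefix("#GUI#"), message
    if line.startswith("#GUI#progress"):
        # "#GUI#progress 45%" -> 45, the same [14:-1] slice used above
        return "progress", int(line[14:-1])
    return None  # ordinary output, ignored

assert parse_gui_mode_line("#GUI#progress 45%") == ("progress", 45)
assert parse_gui_mode_line("#GUI#error no such file") == ("error", "no such file")
assert parse_gui_mode_line("muxing...") is None
```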


@@ -10,10 +10,11 @@ from typing import Any, Optional, Union
from langcodes import Language
from devine.core import binaries
from devine.core.config import config
from devine.core.tracks.subtitle import Subtitle
from devine.core.tracks.track import Track
from devine.core.utilities import FPS, get_boxes
class Video(Track):
@@ -141,9 +142,11 @@ class Video(Track):
return Video.Range.SDR
@staticmethod
def from_m3u_range_tag(tag: str) -> Optional[Video.Range]:
tag = (tag or "").upper().replace('"', '').strip()
if not tag:
return None
if tag == "SDR":
return Video.Range.SDR
elif tag == "PQ":
return Video.Range.HDR10  # technically could be any PQ-transfer range
@@ -152,35 +155,110 @@ class Video(Track):
# for some reason there's no Dolby Vision info tag
raise ValueError(f"The M3U Range Tag '{tag}' is not a supported Video Range")
def __init__(
self,
*args: Any,
codec: Optional[Video.Codec] = None,
range_: Optional[Video.Range] = None,
bitrate: Optional[Union[str, int, float]] = None,
width: Optional[int] = None,
height: Optional[int] = None,
fps: Optional[Union[str, int, float]] = None,
**kwargs: Any
) -> None:
"""
Create a new Video track object.
Parameters:
codec: A Video.Codec enum representing the video codec.
If not specified, MediaInfo will be used to retrieve the codec
once the track has been downloaded.
range_: A Video.Range enum representing the video color range.
Defaults to SDR if not specified.
bitrate: A number or float representing the average bandwidth in bytes/s.
Float values are rounded up to the nearest integer.
width: The horizontal resolution of the video.
height: The vertical resolution of the video.
fps: A number, float, or string representing the frames/s of the video.
Strings may represent numbers, floats, or a fraction (num/den).
All strings will be cast to either a number or float.
Note: If codec, bitrate, width, height, or fps is not specified some checks
may be skipped or assume a value. Specifying as much information as possible
is highly recommended.
"""
super().__init__(*args, **kwargs)
if not isinstance(codec, (Video.Codec, type(None))):
raise TypeError(f"Expected codec to be a {Video.Codec}, not {codec!r}")
if not isinstance(range_, (Video.Range, type(None))):
raise TypeError(f"Expected range_ to be a {Video.Range}, not {range_!r}")
if not isinstance(bitrate, (str, int, float, type(None))):
raise TypeError(f"Expected bitrate to be a {str}, {int}, or {float}, not {bitrate!r}")
if not isinstance(width, (int, str, type(None))):
raise TypeError(f"Expected width to be a {int}, not {width!r}")
if not isinstance(height, (int, str, type(None))):
raise TypeError(f"Expected height to be a {int}, not {height!r}")
if not isinstance(fps, (str, int, float, type(None))):
raise TypeError(f"Expected fps to be a {str}, {int}, or {float}, not {fps!r}")
self.codec = codec
self.range = range_ or Video.Range.SDR
try:
self.bitrate = int(math.ceil(float(bitrate))) if bitrate else None
except (ValueError, TypeError) as e:
raise ValueError(f"Expected bitrate to be a number or float, {e}")
try:
self.width = int(width or 0) or None
except ValueError as e:
raise ValueError(f"Expected width to be a number, not {width!r}, {e}")
try:
self.height = int(height or 0) or None
except ValueError as e:
raise ValueError(f"Expected height to be a number, not {height!r}, {e}")
try:
self.fps = (FPS.parse(str(fps)) or None) if fps else None
except Exception as e:
raise ValueError(
"Expected fps to be a number, float, or a string as numerator/denominator form, " +
str(e)
)
def __str__(self) -> str:
return " | ".join(filter(bool, [
"VID",
"[" + (", ".join(filter(bool, [
self.codec.value if self.codec else None,
self.range.name
]))) + "]",
str(self.language),
", ".join(filter(bool, [
" @ ".join(filter(bool, [
f"{self.width}x{self.height}" if self.width and self.height else None,
f"{self.bitrate // 1000} kb/s" if self.bitrate else None
])),
f"{self.fps:.3f} FPS" if self.fps else None
])),
self.edition
]))
def change_color_range(self, range_: int) -> None:
"""Change the Video's Color Range to Limited (0) or Full (1)."""
if not self.path or not self.path.exists():
raise ValueError("Cannot change the color range flag on a Video that has not been downloaded.")
if not self.codec:
raise ValueError("Cannot change the color range flag on a Video that has no codec specified.")
if self.codec not in (Video.Codec.AVC, Video.Codec.HEVC):
raise NotImplementedError(
"Cannot change the color range flag on this Video as "
f"it's codec, {self.codec.value}, is not yet supported."
)
if not binaries.FFMPEG:
raise EnvironmentError("FFmpeg executable \"ffmpeg\" was not found but is required for this call.")
filter_key = {
@@ -192,7 +270,7 @@ class Video(Track):
output_path = original_path.with_stem(f"{original_path.stem}_{['limited', 'full'][range_]}_range")
subprocess.run([
binaries.FFMPEG, "-hide_banner",
"-loglevel", "panic",
"-i", original_path,
"-codec", "copy",
@@ -210,8 +288,7 @@ class Video(Track):
if not self.path:
raise ValueError("You must download the track first.")
if not binaries.CCExtractor:
raise EnvironmentError("ccextractor executable was not found.")
# ccextractor often fails in weird ways unless we repack
@@ -221,7 +298,7 @@ class Video(Track):
try:
subprocess.run([
binaries.CCExtractor,
"-trim",
"-nobom",
"-noru", "-ru1",
@@ -302,8 +379,7 @@ class Video(Track):
if not self.path or not self.path.exists():
raise ValueError("Cannot clean a Track that has not been downloaded.")
if not binaries.FFMPEG:
raise EnvironmentError("FFmpeg executable \"ffmpeg\" was not found but is required for this call.")
log = logging.getLogger("x264-clean")
@@ -324,7 +400,7 @@ class Video(Track):
original_path = self.path
cleaned_path = original_path.with_suffix(f".cleaned{original_path.suffix}")
subprocess.run([
binaries.FFMPEG, "-hide_banner",
"-loglevel", "panic",
"-i", original_path,
"-map_metadata", "-1",


@@ -3,7 +3,6 @@ import contextlib
import importlib.util
import os
import re
import socket
import sys
import time
@@ -87,15 +86,6 @@ def import_module_by_path(path: Path) -> ModuleType:
return module
def sanitize_filename(filename: str, spacer: str = ".") -> str:
"""
Sanitize a string to be filename safe.
@@ -133,18 +123,18 @@ def get_boxes(data: bytes, box_type: bytes, as_bytes: bool = False) -> Box:
# since it doesn't care what child box the wanted box is from, this works fine.
if not isinstance(data, (bytes, bytearray)):
raise ValueError("data must be bytes")
offset = 0
while True:
try:
index = data[offset:].index(box_type)
except ValueError:
break
if index < 0:
break
index -= 4  # size is before box type and is 4 bytes long
try:
box = Box.parse(data[offset:][index:])
except IOError:
# since get_init_segment might cut off unexpectedly, pymp4 may be unable to read
# the expected amounts of data and complain, so let's just end the function here
@@ -157,6 +147,7 @@ def get_boxes(data: bytes, box_type: bytes, as_bytes: bool = False) -> Box:
raise e
if as_bytes:
box = Box.build(box)
offset += index + len(Box.build(box))
yield box
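The bug this hunk fixes ("finding the same box index over and over") is easiest to see in a simplified scanner: without an advancing `offset`, `bytes.index` returns the first match on every iteration and the loop never progresses. A standalone sketch using fixed-size fake boxes (the real `get_boxes` derives each box's size via pymp4 instead):

```python
def find_boxes(data: bytes, box_type: bytes, box_size: int):
    """Yield the start offset of each fixed-size fake 'box'."""
    offset = 0
    while True:
        try:
            # Search only beyond what has been consumed; a bare
            # data.index(box_type) would return the first occurrence forever.
            index = data[offset:].index(box_type)
        except ValueError:
            break
        start = offset + index
        yield start
        offset = start + box_size  # advance past the parsed box

data = b"..MOOVaaaa..MOOVbbbb"
assert list(find_boxes(data, b"MOOV", 8)) == [2, 12]
```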


@@ -3,11 +3,16 @@ import subprocess
from pathlib import Path
from typing import Union
from devine.core import binaries
def ffprobe(uri: Union[bytes, Path]) -> dict:
"""Use ffprobe on the provided data to get stream information."""
if not binaries.FFProbe:
raise EnvironmentError("FFProbe executable \"ffprobe\" not found but is required.")
args = [
binaries.FFProbe,
"-v", "quiet",
"-of", "json",
"-show_streams"

devine/core/utils/webvtt.py (new file, 191 lines)

@@ -0,0 +1,191 @@
import re
import sys
import typing
from typing import Optional
from pycaption import Caption, CaptionList, CaptionNode, CaptionReadError, WebVTTReader, WebVTTWriter
class CaptionListExt(CaptionList):
@typing.no_type_check
def __init__(self, iterable=None, layout_info=None):
self.first_segment_mpegts = 0
super().__init__(iterable, layout_info)
class CaptionExt(Caption):
@typing.no_type_check
def __init__(self, start, end, nodes, style=None, layout_info=None, segment_index=0, mpegts=0, cue_time=0.0):
style = style or {}
self.segment_index: int = segment_index
self.mpegts: float = mpegts
self.cue_time: float = cue_time
super().__init__(start, end, nodes, style, layout_info)
class WebVTTReaderExt(WebVTTReader):
# HLS extension support <https://datatracker.ietf.org/doc/html/rfc8216#section-3.5>
RE_TIMESTAMP_MAP = re.compile(r"X-TIMESTAMP-MAP.*")
RE_MPEGTS = re.compile(r"MPEGTS:(\d+)")
RE_LOCAL = re.compile(r"LOCAL:((?:(\d{1,}):)?(\d{2}):(\d{2})\.(\d{3}))")
def _parse(self, lines: list[str]) -> CaptionList:
captions = CaptionListExt()
start = None
end = None
nodes: list[CaptionNode] = []
layout_info = None
found_timing = False
segment_index = -1
mpegts = 0
cue_time = 0.0
# The first segment MPEGTS is needed to calculate the rest. It is possible that
# the first segment contains no cue and is ignored by pycaption, this acts as a fallback.
captions.first_segment_mpegts = 0
for i, line in enumerate(lines):
if "-->" in line:
found_timing = True
timing_line = i
last_start_time = captions[-1].start if captions else 0
try:
start, end, layout_info = self._parse_timing_line(line, last_start_time)
except CaptionReadError as e:
new_msg = f"{e.args[0]} (line {timing_line})"
tb = sys.exc_info()[2]
raise type(e)(new_msg).with_traceback(tb) from None
elif "" == line:
if found_timing and nodes:
found_timing = False
caption = CaptionExt(
start,
end,
nodes,
layout_info=layout_info,
segment_index=segment_index,
mpegts=mpegts,
cue_time=cue_time,
)
captions.append(caption)
nodes = []
elif "WEBVTT" in line:
# Merged segmented VTT doesn't have index information, track manually.
segment_index += 1
mpegts = 0
cue_time = 0.0
elif m := self.RE_TIMESTAMP_MAP.match(line):
if r := self.RE_MPEGTS.search(m.group()):
mpegts = int(r.group(1))
cue_time = self._parse_local(m.group())
# Early assignment in case the first segment contains no cue.
if segment_index == 0:
captions.first_segment_mpegts = mpegts
else:
if found_timing:
if nodes:
nodes.append(CaptionNode.create_break())
nodes.append(CaptionNode.create_text(self._decode(line)))
else:
# it's a comment or some metadata; ignore it
pass
# Add a last caption if there are remaining nodes
if nodes:
caption = CaptionExt(start, end, nodes, layout_info=layout_info, segment_index=segment_index, mpegts=mpegts)
captions.append(caption)
        return captions

    @staticmethod
    def _parse_local(string: str) -> float:
        """
        Parse WebVTT LOCAL time and convert it to seconds.
        """
        m = WebVTTReaderExt.RE_LOCAL.search(string)
        if not m:
            return 0

        parsed = m.groups()
        if not parsed:
            return 0

        hours = int(parsed[1])
        minutes = int(parsed[2])
        seconds = int(parsed[3])
        milliseconds = int(parsed[4])
        return (milliseconds / 1000) + seconds + (minutes * 60) + (hours * 3600)


def merge_segmented_webvtt(vtt_raw: str, segment_durations: Optional[list[int]] = None, timescale: int = 1) -> str:
    """
    Merge Segmented WebVTT data.

    Parameters:
        vtt_raw: The concatenated WebVTT files to merge. All WebVTT headers must be
            appropriately spaced apart, or it may produce unwanted effects like
            considering headers as captions, timestamp lines, etc.
        segment_durations: A list of each segment's duration. If not provided it will try
            to get it from the X-TIMESTAMP-MAP headers, specifically the MPEGTS number.
        timescale: The number of time units per second.

    This parses the X-TIMESTAMP-MAP data to compute new absolute timestamps, replacing
    the old start and end timestamp values. All X-TIMESTAMP-MAP header information is
    removed from the output as it is no longer of concern. Consider this function the
    opposite of a WebVTT Segmenter: a WebVTT Joiner of sorts.

    Algorithm borrowed from N_m3u8DL-RE and shaka-player.
    """
    MPEG_TIMESCALE = 90_000

    vtt = WebVTTReaderExt().read(vtt_raw)
    for lang in vtt.get_languages():
        prev_caption = None
        duplicate_index: list[int] = []
        captions = vtt.get_captions(lang)

        if captions[0].segment_index == 0:
            first_segment_mpegts = captions[0].mpegts
        else:
            first_segment_mpegts = segment_durations[0] if segment_durations else captions.first_segment_mpegts

        caption: CaptionExt
        for i, caption in enumerate(captions):
            # DASH WebVTT doesn't have an MPEGTS timestamp like HLS. Instead,
            # calculate the timestamp from the SegmentTemplate/SegmentList duration.
            likely_dash = first_segment_mpegts == 0 and caption.mpegts == 0
            if likely_dash and segment_durations:
                duration = segment_durations[caption.segment_index]
                caption.mpegts = MPEG_TIMESCALE * (duration / timescale)

            if caption.mpegts == 0:
                continue

            seconds = (caption.mpegts - first_segment_mpegts) / MPEG_TIMESCALE - caption.cue_time
            offset = seconds * 1_000_000  # pycaption uses microseconds

            if caption.start < offset:
                caption.start += offset
                caption.end += offset

            # If the difference between the current and previous captions is <=1ms
            # and the payload is equal, then splice them together.
            if (
                prev_caption
                and not caption.is_empty()
                and (caption.start - prev_caption.end) <= 1000  # 1ms in microseconds
                and caption.get_text() == prev_caption.get_text()
            ):
                prev_caption.end = caption.end
                duplicate_index.append(i)

            prev_caption = caption

        # Remove duplicates
        captions[:] = [c for c_index, c in enumerate(captions) if c_index not in set(duplicate_index)]

    return WebVTTWriter().write(vtt)
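The core of the merge above is the X-TIMESTAMP-MAP offset arithmetic: each segment's MPEGTS value is converted to seconds relative to the first segment, the cue's LOCAL time is subtracted, and the result is scaled to microseconds. As a standalone sketch (the helper name `cue_offset_us` and the sample values are illustrative, not part of the original code):

```python
MPEG_TIMESCALE = 90_000  # MPEG-TS ticks per second


def cue_offset_us(mpegts: int, first_segment_mpegts: int, cue_time: float) -> float:
    """
    Microsecond offset to shift a segment's cues by, mirroring the
    arithmetic in merge_segmented_webvtt: the segment's MPEGTS distance
    from the first segment, in seconds, minus the cue's LOCAL time,
    scaled to microseconds (pycaption stores times in microseconds).
    """
    seconds = (mpegts - first_segment_mpegts) / MPEG_TIMESCALE - cue_time
    return seconds * 1_000_000


# A segment whose MPEGTS is 180,000 ticks (2 s) past the first segment,
# with cues starting at LOCAL time 0, shifts forward by 2 seconds.
print(cue_offset_us(270_000, 90_000, 0.0))  # 2000000.0
```

This is why cues that already carry an absolute timestamp (`caption.start >= offset`) are left untouched: the offset only re-bases cues still expressed in segment-local time.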

poetry.lock (generated, 300 lines changed)

Dependency updates in this lock file:

- aiohttp: 3.9.3 → 3.9.4
- curl-cffi: 0.6.2 → 0.7.0b4
- filelock: 3.13.3 → 3.13.4
- idna: 3.6 → 3.7
- jsonpickle: 3.0.3 → 3.0.4
- langcodes: 3.3.0 → 3.4.0
- language-data: 1.2.0.dev3 → 1.2.0
{file = "language_data-1.2.0.dev3.tar.gz", hash = "sha256:dca04d2308c339ef3a31da678ea563785547114d040ce5c2d8d39e53ad26ce1f"}, {file = "language_data-1.2.0.tar.gz", hash = "sha256:82a86050bbd677bfde87d97885b17566cfe75dad3ac4f5ce44b52c28f752e773"},
] ]
[package.dependencies] [package.dependencies]
@ -757,6 +762,7 @@ marisa-trie = ">=0.7.7"
[package.extras] [package.extras]
build = ["build", "twine"] build = ["build", "twine"]
test = ["pytest", "pytest-cov"]
[[package]] [[package]]
name = "lxml" name = "lxml"
@ -1304,13 +1310,13 @@ files = [
[[package]] [[package]]
name = "pycaption" name = "pycaption"
version = "2.2.4" version = "2.2.6"
description = "Closed caption converter" description = "Closed caption converter"
optional = false optional = false
python-versions = ">=3.8,<4.0" python-versions = "<4.0,>=3.8"
files = [ files = [
{file = "pycaption-2.2.4-py3-none-any.whl", hash = "sha256:243d2b7215e9a9e4f8d817955c88c9c69f5ea7ad5918b57aac3d222f274e32af"}, {file = "pycaption-2.2.6-py3-none-any.whl", hash = "sha256:c2e393f20d64f8967d874abebc814230499b179e37ee8cdfae44462307774cce"},
{file = "pycaption-2.2.4.tar.gz", hash = "sha256:00a926329bb787f7525549f231bbafb3fd8744c3e2db0325b906b93e946f0d88"}, {file = "pycaption-2.2.6.tar.gz", hash = "sha256:b2654979f12dad888e39b10eb5fa03522dae157c635d9e5a2a645291a3a7ff53"},
] ]
[package.dependencies] [package.dependencies]
@ -1717,44 +1723,44 @@ files = [
[[package]] [[package]]
name = "ruff" name = "ruff"
version = "0.3.5" version = "0.3.7"
description = "An extremely fast Python linter and code formatter, written in Rust." description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false optional = false
python-versions = ">=3.7" python-versions = ">=3.7"
files = [ files = [
{file = "ruff-0.3.5-py3-none-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:aef5bd3b89e657007e1be6b16553c8813b221ff6d92c7526b7e0227450981eac"}, {file = "ruff-0.3.7-py3-none-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:0e8377cccb2f07abd25e84fc5b2cbe48eeb0fea9f1719cad7caedb061d70e5ce"},
{file = "ruff-0.3.5-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:89b1e92b3bd9fca249153a97d23f29bed3992cff414b222fcd361d763fc53f12"}, {file = "ruff-0.3.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:15a4d1cc1e64e556fa0d67bfd388fed416b7f3b26d5d1c3e7d192c897e39ba4b"},
{file = "ruff-0.3.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e55771559c89272c3ebab23326dc23e7f813e492052391fe7950c1a5a139d89"}, {file = "ruff-0.3.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d28bdf3d7dc71dd46929fafeec98ba89b7c3550c3f0978e36389b5631b793663"},
{file = "ruff-0.3.5-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:dabc62195bf54b8a7876add6e789caae0268f34582333cda340497c886111c39"}, {file = "ruff-0.3.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:379b67d4f49774ba679593b232dcd90d9e10f04d96e3c8ce4a28037ae473f7bb"},
{file = "ruff-0.3.5-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a05f3793ba25f194f395578579c546ca5d83e0195f992edc32e5907d142bfa3"}, {file = "ruff-0.3.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c060aea8ad5ef21cdfbbe05475ab5104ce7827b639a78dd55383a6e9895b7c51"},
{file = "ruff-0.3.5-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:dfd3504e881082959b4160ab02f7a205f0fadc0a9619cc481982b6837b2fd4c0"}, {file = "ruff-0.3.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:ebf8f615dde968272d70502c083ebf963b6781aacd3079081e03b32adfe4d58a"},
{file = "ruff-0.3.5-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:87258e0d4b04046cf1d6cc1c56fadbf7a880cc3de1f7294938e923234cf9e498"}, {file = "ruff-0.3.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d48098bd8f5c38897b03604f5428901b65e3c97d40b3952e38637b5404b739a2"},
{file = "ruff-0.3.5-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:712e71283fc7d9f95047ed5f793bc019b0b0a29849b14664a60fd66c23b96da1"}, {file = "ruff-0.3.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:da8a4fda219bf9024692b1bc68c9cff4b80507879ada8769dc7e985755d662ea"},
{file = "ruff-0.3.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a532a90b4a18d3f722c124c513ffb5e5eaff0cc4f6d3aa4bda38e691b8600c9f"}, {file = "ruff-0.3.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c44e0149f1d8b48c4d5c33d88c677a4aa22fd09b1683d6a7ff55b816b5d074f"},
{file = "ruff-0.3.5-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:122de171a147c76ada00f76df533b54676f6e321e61bd8656ae54be326c10296"}, {file = "ruff-0.3.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:3050ec0af72b709a62ecc2aca941b9cd479a7bf2b36cc4562f0033d688e44fa1"},
{file = "ruff-0.3.5-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:d80a6b18a6c3b6ed25b71b05eba183f37d9bc8b16ace9e3d700997f00b74660b"}, {file = "ruff-0.3.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:a29cc38e4c1ab00da18a3f6777f8b50099d73326981bb7d182e54a9a21bb4ff7"},
{file = "ruff-0.3.5-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a7b6e63194c68bca8e71f81de30cfa6f58ff70393cf45aab4c20f158227d5936"}, {file = "ruff-0.3.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:5b15cc59c19edca917f51b1956637db47e200b0fc5e6e1878233d3a938384b0b"},
{file = "ruff-0.3.5-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a759d33a20c72f2dfa54dae6e85e1225b8e302e8ac655773aff22e542a300985"}, {file = "ruff-0.3.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:e491045781b1e38b72c91247cf4634f040f8d0cb3e6d3d64d38dcf43616650b4"},
{file = "ruff-0.3.5-py3-none-win32.whl", hash = "sha256:9d8605aa990045517c911726d21293ef4baa64f87265896e491a05461cae078d"}, {file = "ruff-0.3.7-py3-none-win32.whl", hash = "sha256:bc931de87593d64fad3a22e201e55ad76271f1d5bfc44e1a1887edd0903c7d9f"},
{file = "ruff-0.3.5-py3-none-win_amd64.whl", hash = "sha256:dc56bb16a63c1303bd47563c60482a1512721053d93231cf7e9e1c6954395a0e"}, {file = "ruff-0.3.7-py3-none-win_amd64.whl", hash = "sha256:5ef0e501e1e39f35e03c2acb1d1238c595b8bb36cf7a170e7c1df1b73da00e74"},
{file = "ruff-0.3.5-py3-none-win_arm64.whl", hash = "sha256:faeeae9905446b975dcf6d4499dc93439b131f1443ee264055c5716dd947af55"}, {file = "ruff-0.3.7-py3-none-win_arm64.whl", hash = "sha256:789e144f6dc7019d1f92a812891c645274ed08af6037d11fc65fcbc183b7d59f"},
{file = "ruff-0.3.5.tar.gz", hash = "sha256:a067daaeb1dc2baf9b82a32dae67d154d95212080c80435eb052d95da647763d"}, {file = "ruff-0.3.7.tar.gz", hash = "sha256:d5c1aebee5162c2226784800ae031f660c350e7a3402c4d1f8ea4e97e232e3ba"},
] ]
[[package]] [[package]]
name = "setuptools" name = "setuptools"
version = "69.2.0" version = "69.5.1"
description = "Easily download, build, install, upgrade, and uninstall Python packages" description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "setuptools-69.2.0-py3-none-any.whl", hash = "sha256:c21c49fb1042386df081cb5d86759792ab89efca84cf114889191cd09aacc80c"}, {file = "setuptools-69.5.1-py3-none-any.whl", hash = "sha256:c636ac361bc47580504644275c9ad802c50415c7522212252c033bd15f301f32"},
{file = "setuptools-69.2.0.tar.gz", hash = "sha256:0ff4183f8f42cd8fa3acea16c45205521a4ef28f73c6391d8a25e92893134f2e"}, {file = "setuptools-69.5.1.tar.gz", hash = "sha256:6c1fccdac05a97e598fb0ae3bbed5904ccb317337a51139dcd51453611bbb987"},
] ]
[package.extras] [package.extras]
docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"] docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "mypy (==1.9)", "packaging (>=23.2)", "pip (>=19.1)", "pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff (>=0.2.1)", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"] testing = ["build[virtualenv]", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "mypy (==1.9)", "packaging (>=23.2)", "pip (>=19.1)", "pytest (>=6,!=8.1.1)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (>=0.2.1)", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.2)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"] testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.2)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
[[package]] [[package]]
@ -1803,13 +1809,13 @@ files = [
[[package]] [[package]]
name = "types-protobuf" name = "types-protobuf"
version = "4.24.0.20240311" version = "4.25.0.20240410"
description = "Typing stubs for protobuf" description = "Typing stubs for protobuf"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "types-protobuf-4.24.0.20240311.tar.gz", hash = "sha256:c80426f9fb9b21aee514691e96ab32a5cd694a82e2ac07964b352c3e7e0182bc"}, {file = "types-protobuf-4.25.0.20240410.tar.gz", hash = "sha256:86576c2e7e691b8b75f4cabec430f7405edef411b5d191e847c91307935b1b38"},
{file = "types_protobuf-4.24.0.20240311-py3-none-any.whl", hash = "sha256:8e039486df058141cb221ab99f88c5878c08cca4376db1d84f63279860aa09cd"}, {file = "types_protobuf-4.25.0.20240410-py3-none-any.whl", hash = "sha256:335b2e8cf9f39c233dbf0f977a2a4fbc2c0bac720225c544cc1412a67ab1e1d3"},
] ]
[[package]] [[package]]
@ -1825,13 +1831,13 @@ files = [
[[package]] [[package]]
name = "types-requests" name = "types-requests"
version = "2.31.0.20240403" version = "2.31.0.20240406"
description = "Typing stubs for requests" description = "Typing stubs for requests"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "types-requests-2.31.0.20240403.tar.gz", hash = "sha256:e1e0cd0b655334f39d9f872b68a1310f0e343647688bf2cee932ec4c2b04de59"}, {file = "types-requests-2.31.0.20240406.tar.gz", hash = "sha256:4428df33c5503945c74b3f42e82b181e86ec7b724620419a2966e2de604ce1a1"},
{file = "types_requests-2.31.0.20240403-py3-none-any.whl", hash = "sha256:06abf6a68f5c4f2a62f6bb006672dfb26ed50ccbfddb281e1ee6f09a65707d5d"}, {file = "types_requests-2.31.0.20240406-py3-none-any.whl", hash = "sha256:6216cdac377c6b9a040ac1c0404f7284bd13199c0e1bb235f4324627e8898cf5"},
] ]
[package.dependencies] [package.dependencies]
@ -1839,13 +1845,13 @@ urllib3 = ">=2"
[[package]] [[package]]
name = "typing-extensions" name = "typing-extensions"
version = "4.10.0" version = "4.11.0"
description = "Backported and Experimental Type Hints for Python 3.8+" description = "Backported and Experimental Type Hints for Python 3.8+"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "typing_extensions-4.10.0-py3-none-any.whl", hash = "sha256:69b1a937c3a517342112fb4c6df7e72fc39a38e7891a5730ed4985b5214b5475"}, {file = "typing_extensions-4.11.0-py3-none-any.whl", hash = "sha256:c1f94d72897edaf4ce775bb7558d5b79d8126906a14ea5ed1635921406c0387a"},
{file = "typing_extensions-4.10.0.tar.gz", hash = "sha256:b0abd7c89e8fb96f98db18d86106ff1d90ab692004eb746cf6eda2682f91b3cb"}, {file = "typing_extensions-4.11.0.tar.gz", hash = "sha256:83f085bd5ca59c80295fc2a82ab5dac679cbe02b9f33f7d83af68e241bea51b0"},
] ]
[[package]] [[package]]
@ -2002,4 +2008,4 @@ multidict = ">=4.0"
[metadata] [metadata]
lock-version = "2.0" lock-version = "2.0"
python-versions = ">=3.9,<4.0" python-versions = ">=3.9,<4.0"
content-hash = "1c9319036f4d6db07d33c2c62f9e57cdb2881291a24af7c0b6a4d7dbbc2134b3" content-hash = "8bbbd788ab179a0669e8d7c6f45c0746e79f11c24ac39f6d4856563e76ec2f94"


@@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"
 [tool.poetry]
 name = "devine"
-version = "3.3.1"
+version = "3.3.3"
 description = "Modular Movie, TV, and Music Archival Software."
 license = "GPL-3.0-only"
 authors = ["rlaphoenix <rlaphoenix@pm.me>"]
@@ -39,12 +39,12 @@ Brotli = "^1.1.0"
 click = "^8.1.7"
 construct = "^2.8.8"
 crccheck = "^1.3.0"
-jsonpickle = "^3.0.3"
-langcodes = { extras = ["data"], version = "^3.3.0" }
+jsonpickle = "^3.0.4"
+langcodes = { extras = ["data"], version = "^3.4.0" }
 lxml = "^5.2.1"
 pproxy = "^2.7.9"
 protobuf = "^4.25.3"
-pycaption = "^2.2.4"
+pycaption = "^2.2.6"
 pycryptodomex = "^3.20.0"
 pyjwt = "^2.8.0"
 pymediainfo = "^6.1.0"
@@ -61,21 +61,17 @@ subtitle-filter = "^1.4.9"
 Unidecode = "^1.3.8"
 urllib3 = "^2.2.1"
 chardet = "^5.2.0"
-curl-cffi = "^0.6.2"
+curl-cffi = "^0.7.0b4"
-# Temporary explicit versions of these langcodes dependencies as language-data v1.1
-# uses marisa-trie v0.7.8 which doesn't have Python 3.12 wheels.
-language-data = "^1.2.0.dev3"
-marisa-trie = "^1.1.0"
 [tool.poetry.dev-dependencies]
 pre-commit = "^3.7.0"
 mypy = "^1.9.0"
 mypy-protobuf = "^3.6.0"
-types-protobuf = "^4.24.0.20240311"
+types-protobuf = "^4.24.0.20240408"
 types-PyMySQL = "^1.1.0.1"
-types-requests = "^2.31.0.20240403"
+types-requests = "^2.31.0.20240406"
 isort = "^5.13.2"
-ruff = "~0.3.5"
+ruff = "~0.3.7"
 [tool.poetry.scripts]
 devine = "devine.core.__main__:main"