mirror of https://github.com/yt-dlp/yt-dlp.git synced 2026-01-17 20:31:29 +00:00

Compare commits

261 Commits

Author SHA1 Message Date
github-actions
990dd7b00f [version] update
Created by: pukkandan

:ci skip all :ci run dl
2023-01-02 14:44:06 +00:00
pukkandan
d83b0ad809 Release 2023.01.02 2023-01-02 20:07:07 +05:30
pukkandan
08e29b9f1f [cleanup] Misc
Closes #5576, closes #5887
2023-01-02 19:40:15 +05:30
pukkandan
8e174ba7de [docs] Improvements
Closes #5846, closes #5774
2023-01-02 19:40:13 +05:30
bashonly
05997b6e98 [extractor/generic] Decode unicode-escaped embed URLs (#5919)
Authored by: bashonly
Closes #5854
2023-01-02 19:36:01 +05:30
Simon Sawicki
32a84bcf4e Update to ytdl-commit-195f22f6
[generic] Improve KVS (etc) extraction
195f22f679

Closes #3716
Authored by: Grub4k, pukkandan
2023-01-02 19:15:36 +05:30
Matthew
8300774c4a Add --enable-file-urls (#5917)
Closes https://github.com/yt-dlp/yt-dlp/issues/3675

Authored by: coletdjnz
2023-01-02 06:05:13 +00:00
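
A hedged usage sketch of the new flag, which opts into handling file:// URLs (the local path is illustrative):

    yt-dlp --enable-file-urls "file:///path/to/video.mp4"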
bashonly
d7f9871469 [extractor/iqiyi] Fix Iq JS regex (#5922)
Closes #5702
Authored by: bashonly
2023-01-02 05:50:37 +00:00
bashonly
13f930abc0 [extractor/fifa] Fix Preplay extraction (#5921)
Closes #5839
Authored by: dirkf
2023-01-02 05:46:06 +00:00
bashonly
b23b503e22 [extractor/odnoklassniki] Extract subtitles (#5920)
Closes #5744
Authored by: bashonly
2023-01-02 05:44:54 +00:00
Matthew
e756f45ba0 Improve handling for overriding extractors with plugins (#5916)
* Extractors replaced with plugin extractors now show in debug output
* Better testcase handling
* Added documentation
Authored by: coletdjnz, pukkandan
2023-01-02 04:55:11 +00:00
Lesmiscore
8c53322cda [downloader/aria2c] Native progress for aria2c via RPC (#3724)
Authored by: Lesmiscore, pukkandan

Closes #2038
2023-01-02 02:16:25 +09:00
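
A usage sketch (the URL is illustrative); with this change, yt-dlp renders its own progress for aria2c downloads by polling aria2c's RPC interface:

    yt-dlp --downloader aria2c "https://example.com/video.mp4"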
pukkandan
193fb150b7 Fix bug in 119e40ef64 2023-01-01 17:01:48 +05:30
pukkandan
26fdfc3704 [extractor/biliintl:series] Make partial download of series faster 2023-01-01 14:41:47 +05:30
pukkandan
78d25e0b7c [extractor/embedly] Handle vimeo embeds
Closes #3360
2023-01-01 14:15:23 +05:30
pukkandan
2a06bb4eb6 Add --compat-options 2021,2022
Use these to guard against future compat changes. This allows devs to
change defaults and make other potentially breaking changes more easily.
If you need everything to work exactly as-is, put this in your config
2023-01-01 14:11:15 +05:30
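
A minimal sketch of the suggested setup, pinning the current defaults in a config file:

    # in yt-dlp.conf
    --compat-options 2022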
pukkandan
88fb942577 Add message when there are no subtitles/thumbnails
Closes #5551
2023-01-01 14:11:15 +05:30
pukkandan
1cdda32998 [utils] get_exe_version: Detect broken executables
Authored by: dirkf, pukkandan
Closes #5561
2023-01-01 14:11:14 +05:30
coletdjnz
3e01ce744a [extractor/generic] Use Accept-Encoding: identity for initial request
The existing comment seems to imply this was the desired behavior from the beginning.

Partial fix for https://github.com/yt-dlp/yt-dlp/issues/5855, https://github.com/yt-dlp/yt-dlp/issues/5851, https://github.com/yt-dlp/yt-dlp/issues/4748
2023-01-01 18:41:35 +13:00
Matthew
8e40b9d1ec Improve plugin architecture (#5553)
to make plugins easier to develop and use:
* Plugins are now loaded as namespace packages.
* Plugins can be loaded in any distribution of yt-dlp (binary, pip, source, etc.).
* Plugin packages can be installed and managed via pip, or dropped into any of the documented locations.
* Users do not need to edit any code files to install plugins.
* Backwards-compatible with previous plugin architecture.

As a side-effect, yt-dlp will now search in a few more locations for config files.

Closes https://github.com/yt-dlp/yt-dlp/issues/1389

Authored by: flashdagger, coletdjnz, pukkandan, Grub4K
Co-authored-by: Marcel <flashdagger@googlemail.com>
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
Co-authored-by: Simon Sawicki <accounts@grub4k.xyz>
2023-01-01 04:29:22 +00:00
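
A hedged sketch of the resulting workflow (the package name and path are illustrative; the authoritative layout and search locations are in the documentation this commit adds):

    # Install a plugin package like any other Python package...
    python -m pip install yt-dlp-sample-plugins
    # ...or drop its yt_dlp_plugins/ namespace package into a documented
    # location, e.g. ~/.config/yt-dlp/plugins/sample/yt_dlp_plugins/extractor/sample.py
    yt-dlp -v URL    # verbose output reports which plugins were loaded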
pukkandan
2fb0f85868 [update] Workaround #5632 2022-12-31 11:04:02 +05:30
Stel Abrego
a0e526ed4d [extractor/bandcamp] Add album_artist (#5537)
Closes #5536 
Authored by: stelcodes
2022-12-31 10:28:33 +05:30
pukkandan
8d1ddb0805 [extractor/udemy] Fix lectures that have no URL and detect DRM
Closes #5662
2022-12-31 10:02:50 +05:30
pukkandan
9bb856998b [extractor/youtube] Extract DRC formats 2022-12-30 15:50:17 +05:30
pukkandan
fbb7383306 Add weba to known extensions 2022-12-30 15:32:47 +05:30
pukkandan
ec54bd43f3 Fix bug in writing playlist info-json
Closes #4889
2022-12-30 14:07:15 +05:30
pukkandan
f74371a97d [extractor/bilibili] Fix --no-playlist for anthology
Closes #5797
2022-12-30 12:10:57 +05:30
ChillingPepper
d5f043d127 [utils] js_to_json: Fix bug in f55523c (#5771)
Authored by: ChillingPepper, pukkandan
2022-12-30 12:08:38 +05:30
pukkandan
fe74d5b592 Let --parse/replace-in-metadata run at any post-processing stage
Closes #5808, #456
2022-12-30 11:19:39 +05:30
pukkandan
119e40ef64 Add pre-processor stage video
Related: #456, #5808
2022-12-30 11:18:45 +05:30
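
Together with fe74d5b592 above, this lets a metadata interceptor be bound to a specific stage; a hedged sketch using the new stage (the fields are illustrative):

    # run the parser once video metadata is available, before download
    yt-dlp --parse-metadata "video:%(title)s:%(meta_title)s" URL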
pukkandan
4455918e7f [extractor/stv] Detect DRM
Closes #5320
2022-12-30 11:17:10 +05:30
Anant Murmu
efa944f4bc [cleanup] Use random.choices (#5800)
Authored by: freezboltz
2022-12-30 08:13:49 +05:30
nosoop
e107c2b8cf [extractor/soundcloud] Support user permalink (#5842)
Closes #5841
Authored by: nosoop
2022-12-30 00:16:43 +05:30
Lesmiscore
ca2f6e14e6 [extractor/BiliLive] Fix extractor
- Remove unnecessary group in `_VALID_URL`
- This extractor always returns livestreams
2022-12-30 03:01:22 +09:00
bashonly
c1edb853b0 [extractor/kick] Add extractor (#5736)
Closes #5722
Authored by: bashonly
2022-12-29 17:31:01 +00:00
bashonly
2647c933b8 [extractor/wistia] Improve extension detection (#5415)
Closes #5053
Authored by: bashonly, Grub4k, pukkandan
2022-12-29 16:32:54 +00:00
bashonly
53006b35ea [extractor/amazon] Add AmazonReviews extractor (#5857)
Closes #5766
Authored by: bashonly
2022-12-29 15:04:09 +00:00
bashonly
4b183d4962 [extractor/videoken] Add extractors (#5824)
Closes #5818
Authored by: bashonly
2022-12-29 14:29:08 +00:00
bashonly
3d667e0047 [extractor/slideslive] Support embeds and slides (#5784)
Authored by: bashonly, Grub4K, pukkandan
2022-12-29 12:03:03 +00:00
Sam
9a9006ba20 [extractor/twitcasting] Fix videos with password (#5894)
Closes #5888
Authored by: bashonly, Spicadox
2022-12-29 11:15:38 +00:00
HobbyistDev
153e88a751 [extractor/netverse] Add NetverseSearch extractor (#5838)
Authored by: HobbyistDev
2022-12-29 13:42:07 +05:30
JChris246
9fcd8ad1f2 [extractor/spankbang] Fix extractor (#5791)
Authored by: JChris246
Closes #5731
2022-12-29 13:38:22 +05:30
monnef
6b71d186dd [extractor/curiositystream] Fix auth (#5730)
Authored by: mnn
2022-12-29 13:17:23 +05:30
lkw123
074b2fae90 [extractor/kankanews] Add extractor (#5729)
Authored by: synthpop123
2022-12-29 13:08:49 +05:30
Kurt Bestor
06a9d68eb8 [extractor/youku] Fix extractor (#5622)
Closes #4456
Authored by: KurtBestor
2022-12-29 12:48:55 +05:30
Damiano Amatruda
a4d6ead30f [extractor/ciscowebex] Support password-protected videos (#5601)
Authored by: damianoamatruda
2022-12-29 12:24:19 +05:30
lauren n. liberda
d1b5f3d79c [extractor/polskieradio] Adapt to next.js redesigns (#5416)
Authored by: selfisekai
2022-12-28 02:17:25 +05:30
lauren n. liberda
da8d2de208 [extractor/cda] Support premium and misc improvements (#5529)
* Fix cache for non-ASCII key
* Improve error messages
* Better UA for fingerprint bypass

Authored by: selfisekai
2022-12-28 01:27:26 +05:30
chris
15e9e578c0 [extractor/ArteTV] Extract chapters (#5879)
Authored by: iw0nderhow, bashonly
2022-12-28 01:22:58 +05:30
Bobscorn
0ef3d47027 [extractor/beatbump] Add extractors (#5304)
Authored by: Bobscorn, pukkandan
Closes #4653
2022-12-27 12:34:56 +05:30
barsnick
247c8dd4f5 [extractor/urplay] Support for audio-only formats (#4606)
Closes #4605
Authored by: barsnick
2022-12-27 12:04:01 +05:30
HobbyistDev
032f22020c [extractor/trtcocuk] Add extractor (#5009)
Closes #2635
Authored by: HobbyistDev
2022-12-27 11:55:09 +05:30
pukkandan
4af47a0003 Fix 9012d20b23 2022-12-27 11:45:22 +05:30
pukkandan
9012d20b23 [extractor/mixch] Support --wait-for-video 2022-12-27 03:01:22 +05:30
Giulio Muscarello
d61ef7f343 [extractor/ARD] Add vtt subtitles (#5835)
Authored by: CapacitorSet
2022-12-24 16:19:10 +05:30
skbeh
1c226ccdd4 [extractor/bilibili] Improve _VALID_URL (#5820)
Authored by: skbeh
2022-12-24 16:17:37 +05:30
pukkandan
8791e78ccc Fix original_url in playlists 2022-12-23 01:44:20 +05:30
pukkandan
69f5fe45b9 [FFmpegVideoConvertor] Add gif to --recode-video 2022-12-23 01:44:20 +05:30
pukkandan
0b5546c723 [extractor] Let _extract_format functions obey --ignore-no-formats 2022-12-23 01:44:18 +05:30
bashonly
1fc089143c [extractor/reddit] Extract crossposted media (#5801)
Closes #5798
Authored by: bashonly
2022-12-21 00:55:47 +00:00
Lesmiscore
5424dbaf91 Deprioritize HEVC-over-FLV formats (#5823)
Authored by: Lesmiscore
2022-12-19 11:36:14 +09:00
Matthew
c733555106 [extractor/youtube:tab] Extract metadata from channel items (#5569)
Authored by: coletdjnz
2022-12-12 23:08:14 +00:00
HobbyistDev
81388c0954 [extractor/oneplace] Add OnePlacePodcast extractor (#5549)
Closes #5543
Authored by: HobbyistDev
2022-12-10 19:10:24 +05:30
Denis
df10bad267 [extractor/rutube] Support private videos (#5761)
Authored by: mexus
2022-12-10 18:47:01 +05:30
HobbyistDev
f0f3fa028b [extractor/netverse] Extract comments (#5568)
Authored by: HobbyistDev
2022-12-10 14:17:06 +05:30
HobbyistDev
22697a84f6 [extractor/europarl] Add EuroParlWebstream Extractor (#5547)
Authored by: HobbyistDev
Closes #4933
2022-12-10 14:14:43 +05:30
HobbyistDev
3ac5476430 [extractor/nosnl] Add support for /video (#5590)
Authored by: HobbyistDev
2022-12-10 14:04:55 +05:30
HobbyistDev
e318b5b87a [extractor/airtv] Add extractor (#5533)
Authored by: HobbyistDev
Closes #5132
2022-12-10 13:59:13 +05:30
bashonly
f549b18512 [extractor/pinterest] Fix extractor (#5739)
Closes #1772
Authored by: bashonly
2022-12-09 23:46:04 +00:00
bashonly
7c5e1701f6 [extractor/foxsports] Fix extractor (#5719)
Closes #5714
Authored by: bashonly
2022-12-09 23:43:10 +00:00
bashonly
16bed382fd [extractor/twitter] Heed --no-playlist for multi-video tweets (#5757)
Closes #5752
Authored by: bashonly, Grub4K
2022-12-09 23:41:45 +00:00
bashonly
3cf50fa8e9 [downloader/ffmpeg] Fix headers for video+audio formats (#5659)
Authored by: bashonly, Grub4K
2022-12-09 23:36:38 +00:00
bashonly
f69b0554eb [extractor/slideslive] Fix extractor (#5737)
Closes #1532
Authored by: bashonly, Grub4K
2022-12-09 23:25:37 +00:00
pukkandan
e74a3c6dcc [extractor/hotstar] Improve format metadata 2022-12-09 15:23:59 +05:30
pukkandan
7108221662 Add ac4 to known codecs
Note: ffmpeg does not currently support this format

Related #5738
2022-12-09 15:23:59 +05:30
nixxo
10dc85924a [extractor/mediaset] Better embed detection and error messages (#5664)
Authored by: nixxo
2022-12-09 12:50:37 +05:30
Vita
b05f0a50e0 [extractor/yle_areena] Support restricted videos (#5735)
* and improve metadata

Closes #5734
Authored by: docbender
2022-12-09 11:33:36 +05:30
Elyse
3d79ebc8b7 [extractor/mediastream] Add extractor (#5640)
Closes #5532, closes #4431, closes #4425
Authored by: elyse0, HobbyistDev

Co-authored-by: HobbyistDev <tesutonihon4@gmail.com>
2022-12-09 02:47:21 +05:30
pukkandan
b44cd29851 [jsinterp] Escape regex that looks like nested set
Closes #5749
2022-12-08 22:43:38 +05:30
milkknife
85a802969e [extractor/webcamerapl] Add extractor (#5715)
Authored by: milkknife
2022-12-08 22:26:36 +05:30
nixxo
72f96c5566 [extractor/la7] Improve extractor (#5538)
Authored by: nixxo
Closes #5360
2022-12-08 22:22:19 +05:30
MMM
839e2a62ae [extractor/rumble] Add RumbleIE extractor (#5515)
Closes #2846
Authored by: flashdagger
2022-12-08 22:02:17 +05:30
HobbyistDev
28b8f57b4b [extractor/noice] Add NoicePodcast extractor (#5621)
Authored by: HobbyistDev
2022-12-08 19:28:36 +05:30
lkw123
dfc186d422 [extractor/xiami] Remove extractors (#5711)
Authored by: synthpop123
2022-12-08 18:13:29 +05:30
David Turner
42ec478fc4 [extractor/plutotv] Fix videos with non-zero start (#5745)
Authored by: digitall
2022-12-08 18:08:52 +05:30
pukkandan
7991ae57a8 [extractor/sibnet] Separate from VKIE
Fixes bfd973ece3 (commitcomment-91834251)
2022-12-08 17:20:02 +05:30
pukkandan
935bac1e4d Fix --cookies-from-browser CLI parsing
Closes #5716
2022-12-06 00:35:53 +05:30
bashonly
c4cbd3bebd [extractor/tiktok] Update _VALID_URL, add api_hostname arg (#5708)
Closes #5706
Authored by: bashonly
2022-12-04 22:30:31 +00:00
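
A sketch of the new extractor argument (the hostname value is illustrative):

    yt-dlp --extractor-args "tiktok:api_hostname=api22-normal-c-useast2a.tiktokv.com" URL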
pukkandan
c53a18f016 [utils] windows_enable_vt_mode: Proper implementation
Authored by: Grub4K
2022-12-05 01:06:56 +05:30
pukkandan
71df9b7fd5 [cleanup] Misc 2022-12-03 19:52:31 +05:30
Benjamin Ryan
c9f5ce5118 [extractor/tiktok] Update API hostname (#5690)
Closes #5688
Authored by: redraskal
2022-12-02 09:38:00 +00:00
bashonly
ddf1e22d48 [extractor/swearnet] Fix description bug (#5681)
Bug in 049565df2e
Closes #5643
Authored by: bashonly
2022-12-01 11:24:43 +00:00
bashonly
0e96b408b9 [extractor/reddit] Extract video embeds in text posts (#5677)
Closes #5612
Authored by: bashonly
2022-12-01 04:04:32 +00:00
bashonly
ba72399723 [extractor/tiktok] Fix subs, DouyinIE, improve _VALID_URL (#5676)
Closes #5665, Closes #2267
Authored by: bashonly
2022-12-01 04:00:32 +00:00
pukkandan
9bcfe33be7 [utils] Make ExtractorError mutable 2022-11-30 06:10:26 +05:30
pukkandan
71eb82d1b2 [extractor/youtube] Subtitles cannot be translated to und
Closes #5674
2022-11-30 05:18:18 +05:30
pukkandan
a9d069f5b8 [extractor/amazonminitv] Cleanup 48652590ec 2022-11-29 07:54:07 +05:30
alexia
48652590ec [extractor/amazonminitv] Add extractors (#3628)
Authored by: nyuszika7h, GautamMKGarg
2022-11-28 11:36:18 +09:00
marieell
86f557b636 [extractor/youporn] Fix metadata (#2768)
Authored by: marieell
2022-11-26 08:00:25 +05:30
pukkandan
c0caa80515 [extractor/naver] Treat fan subtitles as separate language
Closes #5467
2022-11-25 16:10:30 +05:30
Mudassir Chapra
0d95d8b00a [extractor/gronkh] Fix _VALID_URL (#5628)
Closes #5531
Authored by: muddi900
2022-11-24 15:34:45 +00:00
Elan Ruusamäe
9d52bf65ff [extractor/kanal2] Add extractor (#5575)
Authored by: glensc, pukkandan, bashonly
2022-11-22 18:09:57 +00:00
bashonly
d761dfd059 [extractor/naver] Improve _VALID_URL for NaverNowIE (#5620)
Authored by: bashonly
2022-11-22 03:42:16 +00:00
bashonly
27c0f899c8 [extractor/screencastify] Add extractor (#5604)
Closes #5603
Authored by: bashonly
2022-11-22 00:40:02 +00:00
bashonly
7ff2fafe47 [extractor/vimeo] Add VimeoProIE (#5596)
* Add support for VimeoPro URLs not containing a Vimeo video ID
* Add support for password-protected VimeoPro pages
Closes #5594
Authored by: bashonly, pukkandan
2022-11-21 00:55:57 +00:00
bashonly
3b021eacef [extractor/generic] Add fragment_query extractor arg for DASH and HLS (#5528)
* `fragment_query`: passthrough any query in generic mpd/m3u8 manifest URLs to their fragments
* Add support for `extra_param_to_segment_url` to DASH downloader
Authored by: bashonly, pukkandan
2022-11-21 00:51:45 +00:00
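
A hedged sketch of the new argument (the manifest URL is illustrative):

    # forward the manifest URL's query string to each fragment request
    yt-dlp --extractor-args "generic:fragment_query" "https://example.com/master.m3u8?token=abc"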
Marcel
f352a09778 [webvtt] Handle premature EOF
Closes #2867, closes #5600
Authored by: flashdagger
2022-11-20 14:14:42 +05:30
chengzhicn
02b2f9fa7d [extractor/reddit] Add vcodec to fallback format (#5591)
Authored by: chengzhicn
2022-11-20 01:44:21 +05:30
pukkandan
29ca408219 [FormatSort] Add mov to vext
Closes #5581
2022-11-19 09:04:01 +05:30
pukkandan
8486540257 [extractor/unsupported] Add more URLs
Closes #5557, Closes #2744, Closes #5578
2022-11-19 08:42:06 +05:30
bashonly
ed027fd9d8 [extractor/generic] Fix JSON LD manifest extraction (#5577)
Closes #5572
Authored by: bashonly, pukkandan
2022-11-18 02:04:03 +00:00
bashonly
352e7d9873 [extractor/twitter] Refresh guest token when expired (#5560)
Closes #5548
Authored by: bashonly, Grub4K
2022-11-18 02:00:11 +00:00
nixxo
9a0416c6a5 [extractor/twitter:spaces] Add 'Referer' to m3u8 (#5580)
Closes #5565
Authored by: nixxo
2022-11-18 06:42:02 +05:30
bashonly
f5a9e9df0d [extractor/brightcove] Add BrightcoveNewBaseIE and fix embed extraction (#5558)
* Move Brightcove embed extraction and tests into the IEs
* Split `BrightcoveNewBaseIE` from `BrightcoveNewIE`
* Fix bug in ade1fa70cb with the "wrong" spelling of `referrer` being smuggled

Closes #5539
2022-11-17 19:11:35 +00:00
bashonly
f96a3fb7d3 [extractor/redgifs] Fix bug in 8c188d5d09 (#5559) 2022-11-17 19:09:40 +00:00
Bnyro
bc87dac75f [extractor/youtube] Add piped.video (#5571)
Closes #5518
Authored by: Bnyro
2022-11-17 18:45:38 +05:30
pukkandan
9f14daf22b [extractor] Deprecate _sort_formats 2022-11-17 11:40:17 +05:30
pukkandan
784320c98c Implement universal format sorting
Closes #5566
2022-11-17 11:05:49 +05:30
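
A usage sketch of the documented -S syntax, which now behaves uniformly across extractors:

    yt-dlp -S "res:1080,fps" URL    # prefer resolutions up to 1080p, then higher fps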
pukkandan
d0d74b7197 [utils] Move format sorting code into utils 2022-11-17 11:04:38 +05:30
pukkandan
64c464a144 [utils] Move FileDownloader.parse_bytes into utils 2022-11-17 08:40:34 +05:30
pukkandan
4de88a6a36 [extractor/generic] Don't report redirect to https 2022-11-17 02:12:07 +05:30
pukkandan
105bfd90f5 Add new field aspect_ratio
Closes #5402
2022-11-16 06:57:09 +05:30
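
The new field should be usable like other numeric format fields; a hedged sketch:

    yt-dlp --print aspect_ratio URL
    yt-dlp -f "bv[aspect_ratio>=1.5]+ba" URL    # assuming filter support like other numeric fields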
pukkandan
6368e2e639 [cleanup] Misc
Closes #5541
2022-11-16 06:57:07 +05:30
pukkandan
a4894d3e25 [extractor/youtube] Consider language in format de-duplication 2022-11-15 05:23:46 +05:30
pukkandan
d7b460d0e5 Make early reject of --match-filter stricter
Closes #5509
2022-11-13 10:56:06 +05:30
pukkandan
171a31dbe8 [extractor] Add a way to distinguish IEs that returns only videos 2022-11-13 10:56:04 +05:30
pukkandan
83cc7b8aae [utils] classproperty: Add cache support 2022-11-13 08:29:49 +05:30
Elyse
0a4b2f4180 [extractor/tencent] Fix geo-restricted video (#5505)
Closes #5230
Authored by: elyse0
2022-11-12 12:43:13 +05:30
pukkandan
a8c754cc00 [extractor/youtube] Fix bug in handling of music URLs
Bug in bd7e919a75
Closes #5502
2022-11-12 00:02:13 +05:30
pukkandan
bc5c2f8a2c Fix bugs in PlaylistEntries 2022-11-12 00:02:12 +05:30
Audrey
d965856235 [extractor/Veoh] Add user extractor (#5242)
Authored by: tntmod54321
2022-11-11 23:28:54 +05:30
pukkandan
08270da5c3 [extractor/youtube] Fix ytuser: 2022-11-11 16:29:52 +05:30
github-actions
5e39fb982e [version] update
Created by: pukkandan

:ci skip all :ci run dl
2022-11-11 10:37:46 +00:00
pukkandan
8b644025b1 Release 2022.11.11 2022-11-11 16:03:04 +05:30
Robert Geislinger
7aaf4cd2a8 [cleanup] Misc
Closes #5471, Closes #5312

Authored by: pukkandan, Alienmaster
2022-11-11 15:48:29 +05:30
pukkandan
8522226d2f [ThumbnailsConvertor] Fix filename escaping
Closes #4604
Authored by: pukkandan, dirkf
2022-11-11 15:28:19 +05:30
Vitaly Khabarov
f4b2c59cfe [extractor/YleAreena] Add extractor (#5270)
Closes #2508
Authored by: vitkhab, pukkandan
2022-11-11 15:06:23 +05:30
Timendum
7c8c63529e [extractor/cinetecamilano] Add extractor (#5279)
Closes #5031
Authored by: timendum
2022-11-11 14:33:17 +05:30
bashonly
e4221b700f Fix --list options not implying -s in some cases (#5296)
Authored by: bashonly, Grub4K
2022-11-11 14:24:57 +05:30
pukkandan
bd7e919a75 [extractor/youtube:tab] Improvements to tab handling (#5487)
* Better handling of direct channel URLs - See https://github.com/yt-dlp/yt-dlp/pull/5439#issuecomment-1309322019
* Prioritize tab id from URL slug - Closes #5486
* Add metadata for the wrapping playlist
* Simplify redirect for music playlists
2022-11-11 13:52:40 +05:30
pukkandan
f7fc8d39e9 [extractor] Fix fatal=False for _search_nuxt_data
Closes #5423
2022-11-11 07:29:29 +05:30
mlampe
a6858cda29 [build] Make linux binary truly standalone using conda (#5423)
Authored by: mlampe
2022-11-11 07:28:23 +05:30
MrOctopus
17fc3dc48a [build] Create armv7l and aarch64 releases (#5449)
Closes #5436
Authored by: MrOctopus, pukkandan
2022-11-11 07:19:24 +05:30
Matthew
3f5c216969 [extractor/nzherald] Support new video embed (#5493)
Authored by: coletdjnz
2022-11-10 21:12:10 +00:00
Matthew
e72e48c53f [extractor/youtube] Ignore incomplete data error for comment replies (#5490)
When --ignore-errors is used.
Closes https://github.com/yt-dlp/yt-dlp/issues/4669
Authored by: coletdjnz
2022-11-10 06:35:22 +00:00
Matthew
0cf643b234 [extractor/youtube] Differentiate between no and disabled comments (#5491)
`comments` and `comment_count` will be set to None, as opposed to 
an empty list and 0, respectively.

Fixes https://github.com/yt-dlp/yt-dlp/issues/5068

Authored by: coletdjnz, pukkandan
2022-11-10 03:33:03 +00:00
Sergey
dc3028d233 [build] py2exe: Migrate to freeze API (#5149)
Closes #5135
Authored by: SG5, pukkandan
2022-11-10 08:54:14 +05:30
Matthew
4dc23a8051 [extractor/youtube:tab] Fix video metadata from tabs (#5489)
Closes #5488
Authored by: coletdjnz
2022-11-10 08:14:12 +05:30
pukkandan
495322b95b [test] Allow extract_flat in download tests
Authored by: coletdjnz, pukkandan
2022-11-10 07:32:35 +05:30
Alex
c789fb7787 [build, test] Harden workflows' security (#5410)
Authored by: sashashura
2022-11-10 07:11:07 +05:30
pukkandan
ed6bec168d [extractor/doodstream] Remove extractor
It was added in youtube-dlc, likely without sufficient scrutiny

Closes #3808, Closes #5251, Closes #5403
2022-11-09 22:19:46 +05:30
MMM
0d8affc17f [extractor/rumble] Add HLS formats and extract more metadata (#5280)
Closes #5177, #5277 
Authored by: flashdagger
2022-11-09 15:06:11 +05:30
Matthew
d9df9b4919 [extractor/unsupported] Raise error on known DRM-only sites (#5483)
Authored by: coletdjnz
2022-11-09 09:09:13 +00:00
MMM
efdc45a6ea [extractor/bitchute] Better error for geo-restricted videos (#5474)
Authored by: flashdagger
2022-11-09 14:35:08 +05:30
Matthew
86973308cd [extractor/youtube:tab] Update tab handling for redesign (#5439)
Closes #5432, #5430, #5419
Authored by: coletdjnz, pukkandan
2022-11-09 14:28:44 +05:30
MMM
c61473c1d6 [extractor/bitchute] Improve BitChuteChannelIE (#5066)
Authored by: flashdagger, pukkandan
2022-11-09 09:00:15 +05:30
zulaport
8fddc232bf [extractor/camsoda] Add extractor (#5465)
Authored by: zulaport
2022-11-09 08:53:24 +05:30
pukkandan
fad689c7b6 [extractor/hotstar] Refactor v1 API calls 2022-11-09 08:44:50 +05:30
m4tu4g
db6fa6960c [extractor/hotstar] Add season support (#5479)
Closes #5473
Authored by: m4tu4g
2022-11-09 08:33:10 +05:30
Anant Murmu
3b87f4d943 [extractor/stripchat] Improve error message (#5475)
Authored by: freezboltz
2022-11-08 12:14:47 +05:30
pukkandan
581e86b512 [extractor/uktvplay] Fix _VALID_URL
Closes #5472
2022-11-07 21:47:31 +05:30
megapro17
8196182a12 [extractor/odnoklassniki] Support boosty.to embeds (#5105)
Closes #4212
Authored by: megapro17, Lesmiscore, pukkandan
2022-11-07 21:32:42 +05:30
m4tu4g
9b383177c9 [extractor/mxplayer] Improve extractor (#5303)
Closes #5276
Authored by: m4tu4g
2022-11-07 21:29:53 +05:30
ClosedPort22
fbb0ee7747 [compat] Fix shutils.move in restricted ACL mode on BSD (#5309)
Authored by: ClosedPort22, pukkandan
2022-11-07 20:54:30 +05:30
Lesmiscore
c7e4ab278a [extractor/niconico] Always use HTTPS for requests
This prevents MITM attacks from malicious parties like insane ISPs

Closes #5469
2022-11-07 14:56:28 +09:00
pukkandan
e9ce4e9250 [extractor/foxnews] Add FoxNewsVideo extractor
Closes #5133
2022-11-07 03:00:01 +05:30
pukkandan
5da08bde9e [extractor/vlive] Extract release_timestamp
Closes #5424
2022-11-07 02:49:03 +05:30
pukkandan
ff48fc04d0 [update] Use error code 100 for update errors
This error code was previously used for
"Exiting to finish update", but is no longer used

Closes #5198
2022-11-07 02:40:36 +05:30
pukkandan
46d09f8707 [cleanup] Lint and misc cleanup 2022-11-07 02:32:36 +05:30
pukkandan
db4678e448 Update to ytdl-commit-de39d128
[extractor/ceskatelevize] Back-port extractor from yt-dlp
de39d1281c

Closes #5361, Closes #4634, Closes #5210
2022-11-07 02:18:30 +05:30
zulaport
a349d4d641 [extractor/stripchat] Fix hostname for HLS stream (#5445)
Closes #5227 
Authored by: zulaport
2022-11-07 02:09:09 +05:30
Matthew
ac8e69dd32 Do not backport Python 3.10 SSL configuration for LibreSSL (#5464)
Until further investigation.

Fixes regression in 5b9f253fa0

Authored by: coletdjnz
2022-11-06 20:30:55 +00:00
bashonly
96b9e9cf62 [extractor/telegram] Add playlist support and more metadata (#5358)
Authored by: bashonly, bsun0000
2022-11-06 19:05:09 +00:00
Jeff Huffman
cb1553e966 [extractor/crunchyroll] Beta is now the only layout (#5294)
Closes #5292
Authored by: tejing1
2022-11-07 00:18:55 +05:30
Alex Karabanov
0d2a0ecac3 [extractor/listennotes] Add extractor (#5310)
Closes #5262
Authored by: lksj, pukkandan
2022-11-07 00:00:59 +05:30
changren-wcr
c94df4d19d [extractor/qingting] Add extractor (#5329)
Closes #5323
Authored by: changren-wcr, bashonly
2022-11-06 23:41:53 +05:30
lauren
728f4b5c2e [extractor/tvp] Update extractors (#5346)
Closes #5328
Authored by: selfisekai
2022-11-06 23:40:06 +05:30
Kevin Wood
8c188d5d09 [extractor/redgifs] Refresh auth token for 401 (#5352)
Closes #5351
Authored by: endotronic, pukkandan
2022-11-06 23:15:45 +05:30
Bruno Guerreiro
e14ea7fbd9 [extractor/youtube] Update piped instances (#5441)
Closes #5286
Authored by: Generator
2022-11-06 23:12:23 +05:30
Richard Gibson
7053aa3a48 [extractor/epoch] Support videos without data-trailer (#5387)
Closes #5359
Authored by: gibson042, pukkandan
2022-11-06 22:53:16 +05:30
HobbyistDev
049565df2e [extractor/swearnet] Add extractor (#5371)
Authored by: HobbyistDev
2022-11-06 22:41:33 +05:30
CrankDatSouljaBoy
cc1d3bf96b [extractor/deuxm] Add extractors (#5388)
Authored by: CrankDatSouljaBoy
2022-11-06 22:21:15 +05:30
Matthew
5b9f253fa0 Backport SSL configuration from Python 3.10 (#5437)
Partial fix for https://github.com/yt-dlp/yt-dlp/pull/5294#issuecomment-1289363572, https://github.com/yt-dlp/yt-dlp/issues/4627

Authored by: coletdjnz
2022-11-06 22:07:23 +05:30
nixxo
d715b0e413 [extractor/skyit] Fix extractors (#5442)
Closes #5392
Authored by: nixxo
2022-11-06 21:51:12 +05:30
Matthew
6141346d18 [extractor/youtube] Update playlist metadata extraction for new layout (#5376)
Fixes https://github.com/yt-dlp/yt-dlp/issues/5373

Authored by: coletdjnz
2022-11-06 05:25:31 +00:00
MMM
59a0c35865 [extractor/lbry] Authenticate with cookies (#5435)
Closes #5431
Authored by: flashdagger
2022-11-05 16:09:58 +05:30
Lesmiscore
da9a60ca0d [extractor/twitcasting] Fix data-movie-playlist extraction (#5453)
Authored by: Lesmiscore
2022-11-05 19:18:15 +09:00
sam
0d113603ac [extractor/oftv] Add extractors (#5134)
Closes #5017
Authored by: DoubleCouponDay
2022-11-05 15:43:05 +05:30
pukkandan
2e30b46fe4 [extractor/youtube] Improve chapter parsing from description
Closes #5448
2022-11-05 15:34:53 +05:30
bashonly
68a9a450d4 [extractor/genius] Add extractors (#5221)
Closes #5209
Authored by: bashonly
2022-11-04 21:07:45 +05:30
sam
ed13a772d7 [extractor/bbc] Support onion domains (#5211)
Authored by: DoubleCouponDay
2022-11-04 20:55:17 +05:30
lauren
78545664bf [extractor/agora] Add extractors (#5101)
Authored by: selfisekai
2022-11-04 20:24:05 +05:30
pukkandan
f72218c199 [extractor/bitchute] Simplify extractor (#5066)
* Check alternate domains when a URL does not work
* Obey `--no-check-formats`
* Remove webseeds (they don't seem to exist anymore)

Authored by: flashdagger, pukkandan

Co-authored-by: Marcel <flashdagger@googlemail.com>
2022-11-04 19:44:00 +05:30
James Woglom
58fb927ebd [kaltura] Support playlists (#4986)
Authored by: jwoglom, pukkandan
2022-11-04 17:15:47 +05:30
pukkandan
62b8dac490 [extractor] Improve _generic_title 2022-10-31 17:41:48 +05:30
Lesmiscore
682b4524bf [extractor/japandiet] Add extractors (#5368)
Authored by: Lesmiscore
2022-10-31 15:51:53 +09:00
nosoop
9da6612b0f [extractor/youtube] Fix duration for premieres (#5382)
Closes #5378
Authored by: nosoop
2022-10-29 00:00:33 +05:30
coletdjnz
e63faa101c [extractor/youtube] Fix live_status extraction for playlist videos
Regression in 867c66ff97

Authored by: coletdjnz
2022-10-27 17:36:54 +13:00
pukkandan
497074f044 Write API params in debug head 2022-10-25 20:09:28 +05:30
pukkandan
c90c5b9bdd [extractor/bilibili] Add chapters and misc cleanup (#4221)
Authored by: lockmatrix, pukkandan
2022-10-25 20:09:27 +05:30
Locke
ad97487606 [extractor/bilibili] Fix BilibiliIE and Bangumi extractors (#4945)
Closes #1878, #4071, #4397
Authored by: lockmatrix, pukkandan
2022-10-25 18:28:18 +05:30
HobbyistDev
e091fb92da [extractor/mlb] Add MLBArticle extractor (#4832)
Closes #3475
Authored by: HobbyistDev
2022-10-25 16:00:03 +05:30
Alex Karabanov
c9bd65185c [extractor/zenyandex] Fix extractors (#3750, #5268)
Closes #3736
Authored by: lksj, puc9, pukkandan

Co-authored-by: puc9 <51006296+puc9@users.noreply.github.com>
2022-10-25 15:50:48 +05:30
bashonly
c66ed4e2e5 [extractor/americastestkitchen] Fix extractor (#5343)
Fix `_VALID_URL` and season extraction

Closes #5343
Authored by: bashonly
2022-10-24 15:46:56 +05:30
pukkandan
2530b68d44 [extractor/iprima] Make json+ld non-fatal
Closes #5318

Authored by: bashonly
2022-10-22 06:20:37 +05:30
Lesmiscore
7d61d2306e [build] Replace set-output with GITHUB_OUTPUT (#5315)
https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/

Authored by: Lesmiscore
2022-10-21 18:56:00 +05:30
m4tu4g
385adffcf5 [extractor/zee5] Improve _VALID_URL (#5316)
Authored by: m4tu4g
2022-10-21 16:11:43 +05:30
pukkandan
0c908911f9 [extractor/redgifs] Fix extractors
Supersedes f47cf86eff

Closes #5311

Authored by: bashonly
2022-10-21 14:34:46 +05:30
m4tu4g
c13a301a94 [extractor/zeenews] Add extractor (#5289)
Closes #4967 
Authored by: m4tu4g, pukkandan
2022-10-20 03:17:18 +05:30
pukkandan
f47cf86eff [extractor/redgifs] Fix extractors
Closes #5202, closes #5216
2022-10-20 02:48:49 +05:30
Simon Sawicki
7a26ce2641 [extractor/twitter] Add Spaces extractor and GraphQL API (#5247, #4864)
Closes #1605, Closes #5233, Closes #1249

Authored by: Grub4K, nixxo, bashonly, pukkandan

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
Co-authored-by: nixxo <nixxo@protonmail.com>
2022-10-19 21:31:21 +05:30
bashonly
3639df54c3 [extractor/paramountplus] Update API token (#5285)
Closes #5273
Authored by: bashonly
2022-10-19 17:48:27 +05:30
Anant Murmu
a4713ba96d [extractor/voot] Improve _VALID_URL (#5283)
Authored by: freezboltz
2022-10-19 12:25:28 +05:30
bsun0000
5318156f1c [extractor/youtube] Mark videos as fully watched
Closes #2555
Authored by: bsun0000
2022-10-19 00:07:47 +05:30
pukkandan
d5d1df8afd [cleanup] Misc
Closes #5162
2022-10-18 23:52:44 +05:30
pukkandan
cd5df121f3 [SponsorBlock] Relax duration check for large segments 2022-10-18 23:36:59 +05:30
jahway603
73ac0e6b85 [docs, devscripts] Document pyinst's argument passthrough (#5235)
Closes #4631
Authored by: jahway603
2022-10-18 23:25:52 +05:30
pukkandan
a7ddbc0475 [ModifyChapters] Handle the entire video being marked for removal
Closes #5238
2022-10-18 23:08:24 +05:30
pukkandan
8fab23301c [SponsorBlock] Obey --retry-sleep extractor 2022-10-18 23:08:24 +05:30
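
A hedged sketch; --retry-sleep accepts an optional retry type, and SponsorBlock API requests now honor the extractor type:

    yt-dlp --sponsorblock-mark all --retry-sleep extractor:5 URL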
pukkandan
1338ae3ba3 [SponsorBlock] Add type field 2022-10-18 23:08:23 +05:30
Ajay Ramachandran
63c547d71c [SponsorBlock] Support chapter category (#5260)
Authored by: ajayyy, pukkandan
2022-10-18 22:21:57 +05:30
pukkandan
814bba3933 [downloader/fragment] HLS download can continue without first fragment
Closes #5274
2022-10-18 19:20:51 +05:30
cruel-efficiency
2576d53a31 Fix end time of clips (#5255)
Closes #5256
Authored by: cruel-efficiency
2022-10-18 18:21:43 +05:30
Matthew
217753f4aa [extractor/YoutubeWebArchive] Improve metadata extraction (#4968)
Closes https://github.com/yt-dlp/yt-dlp/issues/4574

Authored by: coletdjnz
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2022-10-17 05:46:24 +00:00
Vitaly Khabarov
42a44f01c3 [extractor/Fox] Extract thumbnail (#5243)
Closes #1679
Authored by: vitkhab
2022-10-15 14:16:08 +05:30
pukkandan
9b9dad119a [outtmpl] Ensure ASCII in json and add option for Unicode
Closes #5236
2022-10-14 11:50:24 +05:30
Matthew
6dca2aa66d [extractor/generic:quoted-html] Add extractor (#5213)
Extracts embeds from escaped HTML within `data-html` attribute.
Related: https://github.com/ytdl-org/youtube-dl/issues/21294, https://github.com/yt-dlp/yt-dlp/pull/5121

Authored by: coletdjnz
Co-authored-by: pukkandan <pukkandan.ytdlp@gmail.com>
2022-10-14 04:32:52 +00:00
pukkandan
6678a4f0b3 [extractor/youtube] Fix live_status
Bug in 4d37720a0c
2022-10-14 07:41:53 +05:30
pukkandan
d51b2816e3 [extractor/iq] Increase phantomjs timeout
Closes #5161
2022-10-14 07:41:06 +05:30
lauren
34f00179db [extractor/cda]: Support login through API (#5100)
Authored by: selfisekai
2022-10-14 07:11:08 +05:30
pukkandan
5225df50cf [extractor/youtube:tab] Let approximate_date return timestamp 2022-10-13 15:30:15 +05:30
pukkandan
94dc8604dd Do more processing in --flat-playlist 2022-10-13 15:30:14 +05:30
Simon Sawicki
a71b812f53 [utils] js_to_json: Improve escape handling (#5217)
Authored by: Grub4K
2022-10-13 01:52:17 +05:30
sam
c6989aa3ae [extractor/aeon] Add extractor (#5205)
Closes #1653
Authored by: DoubleCouponDay
2022-10-12 15:25:42 +05:30
pukkandan
a79bf78397 [extractor/tnaflix] Fix 09c127ff83
Closes #5188
2022-10-12 11:19:56 +05:30
sam
82fb2357d9 [extractor/twitter] Add onion site to _VALID_URL (#5208)
See #3053
Authored by: DoubleCouponDay
2022-10-12 09:42:31 +05:30
Simon Sawicki
13b2ae29c2 [extractor/twitter] Support multi-video posts (#5183)
Closes #5157, Closes #5147
Authored by: Grub4K
2022-10-11 11:24:38 +05:30
Simon Sawicki
36069409ec [cookies] Improve LenientSimpleCookie (#5195)
Closes #5186 
Authored by: Grub4K
2022-10-11 09:09:12 +05:30
pukkandan
0468a3b325 [jsinterp] Improve separating regex
Fixes https://github.com/yt-dlp/yt-dlp/issues/4635#issuecomment-1273974909
2022-10-11 08:02:26 +05:30
pukkandan
d509c1f5a3 [utils] strftime_or_none: Workaround Python bug on Windows
Closes #5185
2022-10-11 08:02:23 +05:30
schnusch
2c98d99818 [extractors/podbayfm] Add extractor (#4971)
Authored by: schnusch
2022-10-11 02:01:01 +05:30
bashonly
226c0f3a54 [extractor/sbs] Improve _VALID_URL (#5193)
Closes #5045
Authored by: bashonly
2022-10-11 01:58:55 +05:30
pukkandan
ade1fa70cb [extractor/generic] Separate embed extraction into own function (#5176) 2022-10-09 16:09:36 +05:30
Matthew
4c9a1a3ba5 [extractor/wordpress:mb.miniAudioPlayer] Add embed extractor (#5087)
Closes https://github.com/yt-dlp/yt-dlp/issues/4994

Authored by: coletdjnz
2022-10-09 05:55:26 +00:00
Simon Sawicki
1d55ebabc9 [extractor/common] Fix json_ld type checks (#5145)
Closes #5144, #5143
Authored by: Grub4K
2022-10-09 08:47:58 +05:30
tkgmomosheep
f324fe8c59 [extractor/viu] Support subtitles of on-screen text (#5173)
Authored by: tkgmomosheep
2022-10-09 08:04:12 +05:30
HobbyistDev
866f037344 [extractor/nos.nl] Add extractor (#4822)
Closes #4649
Authored by: HobbyistDev
2022-10-09 08:02:58 +05:30
Marenga
5d14b73491 [VK] Fix playlist URLs (#4930)
Closes #2825
Authored by: the-marenga
2022-10-09 07:20:44 +05:30
Teemu Ikonen
540236ce11 [extractor/screen9] Add extractor (#5137)
Authored by: tpikonen
2022-10-09 07:04:22 +05:30
Simon Sawicki
7b0127e1e1 [utils] traverse_obj: Allow re.Match objects (#5174)
Authored by: Grub4K
2022-10-09 07:01:37 +05:30
Simon Sawicki
f99bbfc983 [utils] traverse_obj: Always return list when branching (#5170)
Fixes #5162
Authored by: Grub4K
2022-10-09 06:57:32 +05:30
bashonly
3b55aaac59 [extractor/tubitv] Better DRM detection (#5171)
Closes #5128
Authored by: bashonly
2022-10-08 02:05:46 +05:30
bashonly
2e565f5bca [extractor/reddit] Add fallback format (#5165)
Closes #5160
Authored by: bashonly
2022-10-07 17:40:12 +05:30
Noah
e02e6d86db [embedthumbnail] Fix thumbnail name in mp3 (#5163)
Authored by: How-Bout-No
2022-10-07 17:34:27 +05:30
Matthew
867c66ff97 [extractor/youtube] Extract concurrent view count for livestreams (#5152)
Adds new field `concurrent_view_count`
Closes https://github.com/yt-dlp/yt-dlp/issues/4843

Authored by: coletdjnz
2022-10-07 07:00:40 +00:00
bashonly
f03940963e [extractor/dplay] Add MotorTrendOnDemand extractor (#5151)
Closes #5141
Authored by: bashonly
2022-10-06 10:40:54 +05:30
Sergey
09c127ff83 [extractor/Tnaflix] Fix for HTTP 500 (#5150)
Closes #5107
Authored by: SG5
2022-10-06 09:24:41 +05:30
pukkandan
aebb4f4ba7 Fix for formats=None
Fixes: https://github.com/yt-dlp/yt-dlp/pull/4965#issuecomment-1267682512
2022-10-05 09:17:33 +05:30
invertico
bf2e1ec67a [extractor/livestreamfails] Support posts (#5139)
Authored by: invertico
2022-10-04 23:52:07 +05:30
pukkandan
98d4ec1ef2 [build] Pin py2exe version
Workaround for #5135
2022-10-04 23:02:12 +05:30
pukkandan
1305b659ef [extractor/detik] Avoid unnecessary extraction 2022-10-04 10:45:00 +05:30
738 changed files with 13896 additions and 6353 deletions

View File

@@ -18,7 +18,7 @@ body:
       options:
         - label: I'm reporting a broken site
           required: true
-        - label: I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.01.02** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
@@ -62,7 +62,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2022.10.04 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.01.02 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -70,8 +70,8 @@ body:
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2022.10.04, Current version: 2022.10.04
-        yt-dlp is up to date (2022.10.04)
+        Latest version: 2023.01.02, Current version: 2023.01.02
+        yt-dlp is up to date (2023.01.02)
         <more lines>
       render: shell
     validations:

View File

@@ -18,7 +18,7 @@ body:
       options:
         - label: I'm reporting a new site support request
           required: true
-        - label: I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.01.02** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
@@ -74,7 +74,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2022.10.04 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.01.02 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -82,8 +82,8 @@ body:
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2022.10.04, Current version: 2022.10.04
-        yt-dlp is up to date (2022.10.04)
+        Latest version: 2023.01.02, Current version: 2023.01.02
+        yt-dlp is up to date (2023.01.02)
         <more lines>
       render: shell
     validations:

View File

@@ -18,7 +18,7 @@ body:
       options:
         - label: I'm requesting a site-specific feature
           required: true
-        - label: I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.01.02** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
@@ -70,7 +70,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2022.10.04 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.01.02 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -78,8 +78,8 @@ body:
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2022.10.04, Current version: 2022.10.04
-        yt-dlp is up to date (2022.10.04)
+        Latest version: 2023.01.02, Current version: 2023.01.02
+        yt-dlp is up to date (2023.01.02)
         <more lines>
       render: shell
     validations:

View File

@@ -18,7 +18,7 @@ body:
       options:
         - label: I'm reporting a bug unrelated to a specific site
           required: true
-        - label: I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.01.02** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
@@ -55,7 +55,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2022.10.04 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.01.02 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -63,8 +63,8 @@ body:
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2022.10.04, Current version: 2022.10.04
-        yt-dlp is up to date (2022.10.04)
+        Latest version: 2023.01.02, Current version: 2023.01.02
+        yt-dlp is up to date (2023.01.02)
         <more lines>
       render: shell
     validations:

View File

@@ -20,7 +20,7 @@ body:
           required: true
         - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
           required: true
-        - label: I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.01.02** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
@@ -51,7 +51,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2022.10.04 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.01.02 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -59,7 +59,7 @@ body:
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2022.10.04, Current version: 2022.10.04
-        yt-dlp is up to date (2022.10.04)
+        Latest version: 2023.01.02, Current version: 2023.01.02
+        yt-dlp is up to date (2023.01.02)
         <more lines>
       render: shell

View File

@@ -26,7 +26,7 @@ body:
           required: true
         - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
           required: true
-        - label: I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+        - label: I've verified that I'm running yt-dlp version **2023.01.02** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
           required: true
         - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
           required: true
@@ -57,7 +57,7 @@ body:
         [debug] Command-line config: ['-vU', 'test:youtube']
         [debug] Portable config "yt-dlp.conf": ['-i']
         [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
-        [debug] yt-dlp version 2022.10.04 [9d339c4] (win32_exe)
+        [debug] yt-dlp version 2023.01.02 [9d339c4] (win32_exe)
         [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
         [debug] Checking exe version: ffmpeg -bsfs
         [debug] Checking exe version: ffprobe -bsfs
@@ -65,7 +65,7 @@ body:
         [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
         [debug] Proxy map: {}
         [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
-        Latest version: 2022.10.04, Current version: 2022.10.04
-        yt-dlp is up to date (2022.10.04)
+        Latest version: 2023.01.02, Current version: 2023.01.02
+        yt-dlp is up to date (2023.01.02)
         <more lines>
       render: shell

View File

@@ -2,8 +2,6 @@
 ### Description of your *pull request* and other information
-</details>
-
 <!--
 Explanation of your *pull request* in arbitrary form goes here. Please **make sure the description explains the purpose and effect** of your *pull request* and is worded well enough to be understood. Provide as much **context and examples** as possible
@@ -41,3 +39,5 @@ Fixes #
 - [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy))
 - [ ] Core bug fix/improvement
 - [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
+
+</details>

View File

@@ -1,8 +1,12 @@
name: Build name: Build
on: workflow_dispatch on: workflow_dispatch
permissions:
contents: read
jobs: jobs:
prepare: prepare:
permissions:
contents: write # for push_release
runs-on: ubuntu-latest runs-on: ubuntu-latest
outputs: outputs:
version_suffix: ${{ steps.version_suffix.outputs.version_suffix }} version_suffix: ${{ steps.version_suffix.outputs.version_suffix }}
@@ -21,7 +25,7 @@ jobs:
env: env:
PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }} PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
if: "env.PUSH_VERSION_COMMIT == ''" if: "env.PUSH_VERSION_COMMIT == ''"
run: echo ::set-output name=version_suffix::$(date -u +"%H%M%S") run: echo "version_suffix=$(date -u +"%H%M%S")" >> "$GITHUB_OUTPUT"
- name: Bump version - name: Bump version
id: bump_version id: bump_version
run: | run: |
@@ -36,7 +40,7 @@ jobs:
git add -u git add -u
git commit -m "[version] update" -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl" git commit -m "[version] update" -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
git push origin --force ${{ github.event.ref }}:release git push origin --force ${{ github.event.ref }}:release
echo ::set-output name=head_sha::$(git rev-parse HEAD) echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
- name: Update master - name: Update master
env: env:
PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }} PUSH_VERSION_COMMIT: ${{ secrets.PUSH_VERSION_COMMIT }}
@@ -46,32 +50,46 @@ jobs:
build_unix: build_unix:
needs: prepare needs: prepare
runs-on: ubuntu-18.04 # Standalone executable should be built on minimum supported OS runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v3 - uses: actions/checkout@v3
- uses: actions/setup-python@v4 - uses: actions/setup-python@v4
with: with:
python-version: '3.10' python-version: '3.10'
- uses: conda-incubator/setup-miniconda@v2
with:
miniforge-variant: Mambaforge
use-mamba: true
channels: conda-forge
auto-update-conda: true
activate-environment: ''
auto-activate-base: false
- name: Install Requirements - name: Install Requirements
run: | run: |
sudo apt-get -y install zip pandoc man sudo apt-get -y install zip pandoc man sed
python -m pip install --upgrade pip setuptools wheel twine python -m pip install -U pip setuptools wheel twine
python -m pip install Pyinstaller -r requirements.txt python -m pip install -U Pyinstaller -r requirements.txt
reqs=$(mktemp)
echo -e 'python=3.10.*\npyinstaller' >$reqs
sed 's/^brotli.*/brotli-python/' <requirements.txt >>$reqs
mamba create -n build --file $reqs
- name: Prepare - name: Prepare
run: | run: |
python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }} python devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
python devscripts/make_lazy_extractors.py python devscripts/make_lazy_extractors.py
- name: Build Unix executables - name: Build Unix platform-independent binary
run: | run: |
make all tar make all tar
- name: Build Unix standalone binary
shell: bash -l {0}
run: |
unset LD_LIBRARY_PATH # Harmful; set by setup-python
conda activate build
python pyinst.py --onedir python pyinst.py --onedir
(cd ./dist/yt-dlp_linux && zip -r ../yt-dlp_linux.zip .) (cd ./dist/yt-dlp_linux && zip -r ../yt-dlp_linux.zip .)
python pyinst.py python pyinst.py
- name: Get SHA2-SUMS
id: get_sha
run: |
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v3
@@ -113,6 +131,49 @@ jobs:
git -C taps/ push git -C taps/ push
build_linux_arm:
permissions:
packages: write # for Creating cache
runs-on: ubuntu-latest
needs: prepare
strategy:
matrix:
architecture:
- armv7
- aarch64
steps:
- uses: actions/checkout@v3
with:
path: ./repo
- name: Virtualized Install, Prepare & Build
uses: yt-dlp/run-on-arch-action@v2
with:
githubToken: ${{ github.token }} # To cache image
arch: ${{ matrix.architecture }}
distro: ubuntu18.04 # Standalone executable should be built on minimum supported OS
dockerRunArgs: --volume "${PWD}/repo:/repo"
install: | # Installing Python 3.10 from the Deadsnakes repo raises errors
apt update
apt -y install zlib1g-dev python3.8 python3.8-dev python3.8-distutils python3-pip
python3.8 -m pip install -U pip setuptools wheel
# Cannot access requirements.txt from the repo directory at this stage
python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi
run: |
cd repo
python3.8 -m pip install -U Pyinstaller -r requirements.txt # Cached version may be out of date
python3.8 devscripts/update-version.py ${{ needs.prepare.outputs.version_suffix }}
python3.8 devscripts/make_lazy_extractors.py
python3.8 pyinst.py
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
path: | # run-on-arch-action designates armv7l as armv7
repo/dist/yt-dlp_linux_${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}
build_macos:
runs-on: macos-11
needs: prepare
@@ -193,8 +254,8 @@ jobs:
python-version: '3.8'
- name: Install Requirements
run: | # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
-python -m pip install --upgrade pip setuptools wheel py2exe
+python -m pip install -U pip setuptools wheel py2exe
-pip install "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
+pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
- name: Prepare
run: |
@@ -229,8 +290,8 @@ jobs:
architecture: 'x86'
- name: Install Requirements
run: |
-python -m pip install --upgrade pip setuptools wheel
+python -m pip install -U pip setuptools wheel
-pip install "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
+pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.3-py3-none-any.whl" -r requirements.txt
- name: Prepare
run: |
@@ -248,8 +309,10 @@ jobs:
publish_release:
permissions:
contents: write # for action-gh-release
runs-on: ubuntu-latest
-needs: [prepare, build_unix, build_windows, build_windows32, build_macos, build_macos_legacy]
+needs: [prepare, build_unix, build_linux_arm, build_windows, build_windows32, build_macos, build_macos_legacy]
steps:
- uses: actions/checkout@v3
@@ -276,6 +339,8 @@ jobs:
sha256sum artifact/yt-dlp_macos | awk '{print $1 " yt-dlp_macos"}' >> SHA2-256SUMS
sha256sum artifact/yt-dlp_macos.zip | awk '{print $1 " yt-dlp_macos.zip"}' >> SHA2-256SUMS
sha256sum artifact/yt-dlp_macos_legacy | awk '{print $1 " yt-dlp_macos_legacy"}' >> SHA2-256SUMS
sha256sum artifact/yt-dlp_linux_armv7l | awk '{print $1 " yt-dlp_linux_armv7l"}' >> SHA2-256SUMS
sha256sum artifact/yt-dlp_linux_aarch64 | awk '{print $1 " yt-dlp_linux_aarch64"}' >> SHA2-256SUMS
sha256sum artifact/dist/yt-dlp_linux | awk '{print $1 " yt-dlp_linux"}' >> SHA2-256SUMS
sha256sum artifact/dist/yt-dlp_linux.zip | awk '{print $1 " yt-dlp_linux.zip"}' >> SHA2-256SUMS
sha512sum artifact/yt-dlp | awk '{print $1 " yt-dlp"}' >> SHA2-512SUMS
@@ -287,6 +352,8 @@ jobs:
sha512sum artifact/yt-dlp_macos | awk '{print $1 " yt-dlp_macos"}' >> SHA2-512SUMS
sha512sum artifact/yt-dlp_macos.zip | awk '{print $1 " yt-dlp_macos.zip"}' >> SHA2-512SUMS
sha512sum artifact/yt-dlp_macos_legacy | awk '{print $1 " yt-dlp_macos_legacy"}' >> SHA2-512SUMS
sha512sum artifact/yt-dlp_linux_armv7l | awk '{print $1 " yt-dlp_linux_armv7l"}' >> SHA2-512SUMS
sha512sum artifact/yt-dlp_linux_aarch64 | awk '{print $1 " yt-dlp_linux_aarch64"}' >> SHA2-512SUMS
sha512sum artifact/dist/yt-dlp_linux | awk '{print $1 " yt-dlp_linux"}' >> SHA2-512SUMS
sha512sum artifact/dist/yt-dlp_linux.zip | awk '{print $1 " yt-dlp_linux.zip"}' >> SHA2-512SUMS
@@ -319,6 +386,8 @@ jobs:
artifact/yt-dlp_macos
artifact/yt-dlp_macos.zip
artifact/yt-dlp_macos_legacy
artifact/yt-dlp_linux_armv7l
artifact/yt-dlp_linux_aarch64
artifact/dist/yt-dlp_linux
artifact/dist/yt-dlp_linux.zip
_update_spec
@@ -1,5 +1,8 @@
name: Core Tests
on: [push, pull_request]
permissions:
contents: read
jobs:
tests:
name: Core Tests
@@ -9,13 +12,13 @@ jobs:
fail-fast: false
matrix:
os: [ubuntu-latest]
-# CPython 3.9 is in quick-test
+# CPython 3.11 is in quick-test
-python-version: ['3.7', '3.10', 3.11-dev, pypy-3.7, pypy-3.8]
+python-version: ['3.8', '3.9', '3.10', pypy-3.7, pypy-3.8]
run-tests-ext: [sh]
include:
# atleast one of each CPython/PyPy tests must be in windows
- os: windows-latest
-python-version: '3.8'
+python-version: '3.7'
run-tests-ext: bat
- os: windows-latest
python-version: pypy-3.9
@@ -30,5 +33,6 @@ jobs:
run: pip install pytest
- name: Run tests
continue-on-error: False
-run: ./devscripts/run_tests.${{ matrix.run-tests-ext }} core
-# Linter is in quick-test
+run: |
+python3 -m yt_dlp -v || true # Print debug head
+./devscripts/run_tests.${{ matrix.run-tests-ext }} core
@@ -1,5 +1,8 @@
name: Download Tests
on: [push, pull_request]
permissions:
contents: read
jobs:
quick:
name: Quick Download Tests
@@ -1,5 +1,8 @@
name: Quick Test
on: [push, pull_request]
permissions:
contents: read
jobs:
tests:
name: Core Test
@@ -7,24 +10,23 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
-- name: Set up Python
+- name: Set up Python 3.11
uses: actions/setup-python@v4
with:
-python-version: 3.9
+python-version: '3.11'
- name: Install test requirements
run: pip install pytest pycryptodomex
- name: Run tests
-run: ./devscripts/run_tests.sh core
+run: |
+python3 -m yt_dlp -v || true
+./devscripts/run_tests.sh core
flake8:
name: Linter
if: "!contains(github.event.head_commit.message, 'ci skip all')"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
-- name: Set up Python
-uses: actions/setup-python@v4
-with:
-python-version: 3.9
+- uses: actions/setup-python@v4
- name: Install flake8
run: pip install flake8
- name: Make lazy extractors
.gitignore
@@ -30,6 +30,7 @@ cookies
*.f4v
*.flac
*.flv
*.gif
*.jpeg
*.jpg
*.m4a
@@ -71,6 +72,7 @@ dist/
zip/
tmp/
venv/
.venv/
completions/
# Misc
@@ -119,9 +121,5 @@ yt-dlp.zip
*/extractor/lazy_extractors.py
# Plugins
-ytdlp_plugins/extractor/*
-!ytdlp_plugins/extractor/__init__.py
-!ytdlp_plugins/extractor/sample.py
-ytdlp_plugins/postprocessor/*
-!ytdlp_plugins/postprocessor/__init__.py
-!ytdlp_plugins/postprocessor/sample.py
+ytdlp_plugins/
+yt-dlp-plugins
@@ -351,8 +351,9 @@ Say you extracted a list of thumbnails into `thumbnail_data` and want to iterate
```python
thumbnail_data = data.get('thumbnails') or []
thumbnails = [{
-'url': item['url']
-} for item in thumbnail_data] # correct
+'url': item['url'],
+'height': item.get('h'),
+} for item in thumbnail_data if item.get('url')] # correct
```
and not like:
@@ -360,12 +361,27 @@ and not like:
```python
thumbnail_data = data.get('thumbnails')
thumbnails = [{
-'url': item['url']
+'url': item['url'],
+'height': item.get('h'),
} for item in thumbnail_data] # incorrect
```
In this case, `thumbnail_data` will be `None` if the field was not found and this will cause the loop `for item in thumbnail_data` to raise a fatal error. Using `or []` avoids this error and results in setting an empty list in `thumbnails` instead.
Alternately, this can be further simplified by using `traverse_obj`
```python
thumbnails = [{
'url': item['url'],
'height': item.get('h'),
} for item in traverse_obj(data, ('thumbnails', lambda _, v: v['url']))]
```
or, even better,
```python
thumbnails = traverse_obj(data, ('thumbnails', ..., {'url': 'url', 'height': 'h'}))
```
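For illustration, here is a rough sketch of how the filtering variant behaves (the `data` dict below is made up; the exact matching rules are documented in `traverse_obj`'s docstring in `yt_dlp/utils.py`):
```python
from yt_dlp.utils import traverse_obj

data = {'thumbnails': [
    {'url': 'https://example.com/hq.jpg', 'h': 1080},
    {'h': 720},  # no 'url', so the filter should drop this item
]}

# The lambda keeps only items for which it returns truthy without raising
thumbnails = [{
    'url': item['url'],
    'height': item.get('h'),
} for item in traverse_obj(data, ('thumbnails', lambda _, v: v['url']))]
assert thumbnails == [{'url': 'https://example.com/hq.jpg', 'height': 1080}]
```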
### Provide fallbacks
@@ -3,6 +3,7 @@ shirt-dev (collaborator)
coletdjnz/colethedj (collaborator)
Ashish0804 (collaborator)
nao20010128nao/Lesmiscore (collaborator)
bashonly (collaborator)
h-h-h-h
pauldubois98
nixxo
@@ -295,7 +296,6 @@ Mehavoid
winterbird-code
yashkc2025
aldoridhoni
bashonly
jacobtruman
masta79
palewire
@@ -331,3 +331,47 @@ tannertechnology
Timendum
tobi1805
TokyoBlackHole
ajayyy
Alienmaster
bsun0000
changren-wcr
ClosedPort22
CrankDatSouljaBoy
cruel-efficiency
endotronic
Generator
gibson042
How-Bout-No
invertico
jahway603
jwoglom
lksj
megapro17
mlampe
MrOctopus
nosoop
puc9
sashashura
schnusch
SG5
the-marenga
tkgmomosheep
vitkhab
glensc
synthpop123
tntmod54321
milkknife
Bnyro
CapacitorSet
stelcodes
skbeh
muddi900
digitall
chengzhicn
mexus
JChris246
redraskal
Spicadox
barsnick
docbender
KurtBestor
@@ -11,6 +11,256 @@
-->
## 2023.01.02
* **Improve plugin architecture** by [Grub4K](https://github.com/Grub4K), [coletdjnz](https://github.com/coletdjnz), [flashdagger](https://github.com/flashdagger), [pukkandan](https://github.com/pukkandan)
* Plugins can be loaded in any distribution of yt-dlp (binary, pip, source, etc.) and can be distributed and installed as packages. See [the readme](https://github.com/yt-dlp/yt-dlp/tree/05997b6e98e638d97d409c65bb5eb86da68f3b64#plugins) for more information
* Add `--compat-options 2021,2022`
* This allows devs to change defaults and make other potentially breaking changes more easily. If you need everything to work exactly as-is, put `--compat-options 2022` in your config to guard against future compat changes
* [downloader/aria2c] Native progress for aria2c via RPC by [Lesmiscore](https://github.com/Lesmiscore), [pukkandan](https://github.com/pukkandan)
* Merge youtube-dl: Upto [commit/195f22f](https://github.com/ytdl-org/youtube-dl/commit/195f22f6) by [Grub4k](https://github.com/Grub4k), [pukkandan](https://github.com/pukkandan)
* Add pre-processor stage `video`
* Let `--parse/replace-in-metadata` run at any post-processing stage
* Add `--enable-file-urls` by [coletdjnz](https://github.com/coletdjnz)
* Add new field `aspect_ratio`
* Add `ac4` to known codecs
* Add `weba` to known extensions
* [FFmpegVideoConvertor] Add `gif` to `--recode-video`
* Add message when there are no subtitles/thumbnails
* Deprioritize HEVC-over-FLV formats by [Lesmiscore](https://github.com/Lesmiscore)
* Make early reject of `--match-filter` stricter
* Fix `--cookies-from-browser` CLI parsing
* Fix `original_url` in playlists
* Fix bug in writing playlist info-json
* Fix bugs in `PlaylistEntries`
* [downloader/ffmpeg] Fix headers for video+audio formats by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
* [extractor] Add a way to distinguish IEs that returns only videos
* [extractor] Implement universal format sorting and deprecate `_sort_formats`
* [extractor] Let `_extract_format` functions obey `--ignore-no-formats`
* [extractor/generic] Add `fragment_query` extractor arg for DASH and HLS by [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
* [extractor/generic] Decode unicode-escaped embed URLs by [bashonly](https://github.com/bashonly)
* [extractor/generic] Don't report redirect to https
* [extractor/generic] Fix JSON LD manifest extraction by [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
* [extractor/generic] Use `Accept-Encoding: identity` for initial request by [coletdjnz](https://github.com/coletdjnz)
* [FormatSort] Add `mov` to `vext`
* [jsinterp] Escape regex that looks like nested set
* [webvtt] Handle premature EOF by [flashdagger](https://github.com/flashdagger)
* [utils] `classproperty`: Add cache support
* [utils] `get_exe_version`: Detect broken executables by [dirkf](https://github.com/dirkf), [pukkandan](https://github.com/pukkandan)
* [utils] `js_to_json`: Fix bug in [f55523c](https://github.com/yt-dlp/yt-dlp/commit/f55523c) by [ChillingPepper](https://github.com/ChillingPepper), [pukkandan](https://github.com/pukkandan)
* [utils] Make `ExtractorError` mutable
* [utils] Move `FileDownloader.parse_bytes` into utils
* [utils] Move format sorting code into `utils`
* [utils] `windows_enable_vt_mode`: Proper implementation by [Grub4K](https://github.com/Grub4K)
* [update] Workaround [#5632](https://github.com/yt-dlp/yt-dlp/issues/5632)
* [docs] Improvements
* [cleanup] Misc fixes and cleanup
* [cleanup] Use `random.choices` by [freezboltz](https://github.com/freezboltz)
* [extractor/airtv] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/amazonminitv] Add extractors by [GautamMKGarg](https://github.com/GautamMKGarg), [nyuszika7h](https://github.com/nyuszika7h)
* [extractor/beatbump] Add extractors by [Bobscorn](https://github.com/Bobscorn), [pukkandan](https://github.com/pukkandan)
* [extractor/europarl] Add EuroParlWebstream extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/kanal2] Add extractor by [bashonly](https://github.com/bashonly), [glensc](https://github.com/glensc), [pukkandan](https://github.com/pukkandan)
* [extractor/kankanews] Add extractor by [synthpop123](https://github.com/synthpop123)
* [extractor/kick] Add extractor by [bashonly](https://github.com/bashonly)
* [extractor/mediastream] Add extractor by [HobbyistDev](https://github.com/HobbyistDev), [elyse0](https://github.com/elyse0)
* [extractor/noice] Add NoicePodcast extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/oneplace] Add OnePlacePodcast extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/rumble] Add RumbleIE extractor by [flashdagger](https://github.com/flashdagger)
* [extractor/screencastify] Add extractor by [bashonly](https://github.com/bashonly)
* [extractor/trtcocuk] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/Veoh] Add user extractor by [tntmod54321](https://github.com/tntmod54321)
* [extractor/videoken] Add extractors by [bashonly](https://github.com/bashonly)
* [extractor/webcamerapl] Add extractor by [milkknife](https://github.com/milkknife)
* [extractor/amazon] Add `AmazonReviews` extractor by [bashonly](https://github.com/bashonly)
* [extractor/netverse] Add `NetverseSearch` extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/vimeo] Add `VimeoProIE` by [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
* [extractor/xiami] Remove extractors by [synthpop123](https://github.com/synthpop123)
* [extractor/youtube] Add `piped.video` by [Bnyro](https://github.com/Bnyro)
* [extractor/youtube] Consider language in format de-duplication
* [extractor/youtube] Extract DRC formats
* [extractor/youtube] Fix `ytuser:`
* [extractor/youtube] Fix bug in handling of music URLs
* [extractor/youtube] Subtitles cannot be translated to `und`
* [extractor/youtube:tab] Extract metadata from channel items by [coletdjnz](https://github.com/coletdjnz)
* [extractor/ARD] Add vtt subtitles by [CapacitorSet](https://github.com/CapacitorSet)
* [extractor/ArteTV] Extract chapters by [bashonly](https://github.com/bashonly), [iw0nderhow](https://github.com/iw0nderhow)
* [extractor/bandcamp] Add `album_artist` by [stelcodes](https://github.com/stelcodes)
* [extractor/bilibili] Fix `--no-playlist` for anthology
* [extractor/bilibili] Improve `_VALID_URL` by [skbeh](https://github.com/skbeh)
* [extractor/biliintl:series] Make partial download of series faster
* [extractor/BiliLive] Fix extractor
* [extractor/brightcove] Add `BrightcoveNewBaseIE` and fix embed extraction
* [extractor/cda] Support premium and misc improvements by [selfisekai](https://github.com/selfisekai)
* [extractor/ciscowebex] Support password-protected videos by [damianoamatruda](https://github.com/damianoamatruda)
* [extractor/curiositystream] Fix auth by [mnn](https://github.com/mnn)
* [extractor/embedly] Handle vimeo embeds
* [extractor/fifa] Fix Preplay extraction by [dirkf](https://github.com/dirkf)
* [extractor/foxsports] Fix extractor by [bashonly](https://github.com/bashonly)
* [extractor/gronkh] Fix `_VALID_URL` by [muddi900](https://github.com/muddi900)
* [extractor/hotstar] Improve format metadata
* [extractor/iqiyi] Fix `Iq` JS regex by [bashonly](https://github.com/bashonly)
* [extractor/la7] Improve extractor by [nixxo](https://github.com/nixxo)
* [extractor/mediaset] Better embed detection and error messages by [nixxo](https://github.com/nixxo)
* [extractor/mixch] Support `--wait-for-video`
* [extractor/naver] Improve `_VALID_URL` for `NaverNowIE` by [bashonly](https://github.com/bashonly)
* [extractor/naver] Treat fan subtitles as separate language
* [extractor/netverse] Extract comments by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/nosnl] Add support for /video by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/odnoklassniki] Extract subtitles by [bashonly](https://github.com/bashonly)
* [extractor/pinterest] Fix extractor by [bashonly](https://github.com/bashonly)
* [extractor/plutotv] Fix videos with non-zero start by [digitall](https://github.com/digitall)
* [extractor/polskieradio] Adapt to next.js redesigns by [selfisekai](https://github.com/selfisekai)
* [extractor/reddit] Add vcodec to fallback format by [chengzhicn](https://github.com/chengzhicn)
* [extractor/reddit] Extract crossposted media by [bashonly](https://github.com/bashonly)
* [extractor/reddit] Extract video embeds in text posts by [bashonly](https://github.com/bashonly)
* [extractor/rutube] Support private videos by [mexus](https://github.com/mexus)
* [extractor/sibnet] Separate from VKIE
* [extractor/slideslive] Fix extractor by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
* [extractor/slideslive] Support embeds and slides by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
* [extractor/soundcloud] Support user permalink by [nosoop](https://github.com/nosoop)
* [extractor/spankbang] Fix extractor by [JChris246](https://github.com/JChris246)
* [extractor/stv] Detect DRM
* [extractor/swearnet] Fix description bug
* [extractor/tencent] Fix geo-restricted video by [elyse0](https://github.com/elyse0)
* [extractor/tiktok] Fix subs, `DouyinIE`, improve `_VALID_URL` by [bashonly](https://github.com/bashonly)
* [extractor/tiktok] Update `_VALID_URL`, add `api_hostname` arg by [bashonly](https://github.com/bashonly)
* [extractor/tiktok] Update API hostname by [redraskal](https://github.com/redraskal)
* [extractor/twitcasting] Fix videos with password by [Spicadox](https://github.com/Spicadox), [bashonly](https://github.com/bashonly)
* [extractor/twitter] Heed `--no-playlist` for multi-video tweets by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
* [extractor/twitter] Refresh guest token when expired by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
* [extractor/twitter:spaces] Add `Referer` to m3u8 by [nixxo](https://github.com/nixxo)
* [extractor/udemy] Fix lectures that have no URL and detect DRM
* [extractor/unsupported] Add more URLs
* [extractor/urplay] Support for audio-only formats by [barsnick](https://github.com/barsnick)
* [extractor/wistia] Improve extension detection by [Grub4k](https://github.com/Grub4k), [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
* [extractor/yle_areena] Support restricted videos by [docbender](https://github.com/docbender)
* [extractor/youku] Fix extractor by [KurtBestor](https://github.com/KurtBestor)
* [extractor/youporn] Fix metadata by [marieell](https://github.com/marieell)
* [extractor/redgifs] Fix bug in [8c188d5](https://github.com/yt-dlp/yt-dlp/commit/8c188d5d09177ed213a05c900d3523867c5897fd)
### 2022.11.11
* Merge youtube-dl: Upto [commit/de39d12](https://github.com/ytdl-org/youtube-dl/commit/de39d128)
* Backport SSL configuration from Python 3.10 by [coletdjnz](https://github.com/coletdjnz)
* Do more processing in `--flat-playlist`
* Fix `--list` options not implying `-s` in some cases by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly)
* Fix end time of clips by [cruel-efficiency](https://github.com/cruel-efficiency)
* Fix for `formats=None`
* Write API params in debug head
* [outtmpl] Ensure ASCII in json and add option for Unicode
* [SponsorBlock] Add `type` field, obey `--retry-sleep extractor`, relax duration check for large segments
* [SponsorBlock] **Support `chapter` category** by [ajayyy](https://github.com/ajayyy), [pukkandan](https://github.com/pukkandan)
* [ThumbnailsConvertor] Fix filename escaping by [dirkf](https://github.com/dirkf), [pukkandan](https://github.com/pukkandan)
* [ModifyChapters] Handle the entire video being marked for removal
* [embedthumbnail] Fix thumbnail name in mp3 by [How-Bout-No](https://github.com/How-Bout-No)
* [downloader/fragment] HLS download can continue without first fragment
* [cookies] Improve `LenientSimpleCookie` by [Grub4K](https://github.com/Grub4K)
* [jsinterp] Improve separating regex
* [extractor/common] Fix `fatal=False` for `_search_nuxt_data`
* [extractor/common] Improve `_generic_title`
* [extractor/common] Fix `json_ld` type checks by [Grub4K](https://github.com/Grub4K)
* [extractor/generic] Separate embed extraction into own function
* [extractor/generic:quoted-html] Add extractor by [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
* [extractor/unsupported] Raise error on known DRM-only sites by [coletdjnz](https://github.com/coletdjnz)
* [utils] `js_to_json`: Improve escape handling by [Grub4K](https://github.com/Grub4K)
* [utils] `strftime_or_none`: Workaround Python bug on Windows
* [utils] `traverse_obj`: Always return list when branching, allow `re.Match` objects by [Grub4K](https://github.com/Grub4K)
* [build, test] Harden workflows' security by [sashashura](https://github.com/sashashura)
* [build] `py2exe`: Migrate to freeze API by [SG5](https://github.com/SG5), [pukkandan](https://github.com/pukkandan)
* [build] Create `armv7l` and `aarch64` releases by [MrOctopus](https://github.com/MrOctopus), [pukkandan](https://github.com/pukkandan)
* [build] Make linux binary truly standalone using `conda` by [mlampe](https://github.com/mlampe)
* [build] Replace `set-output` with `GITHUB_OUTPUT` by [Lesmiscore](https://github.com/Lesmiscore)
* [update] Use error code `100` for update errors
* [compat] Fix `shutils.move` in restricted ACL mode on BSD by [ClosedPort22](https://github.com/ClosedPort22), [pukkandan](https://github.com/pukkandan)
* [docs, devscripts] Document `pyinst`'s argument passthrough by [jahway603](https://github.com/jahway603)
* [test] Allow `extract_flat` in download tests by [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
* [cleanup] Misc fixes and cleanup by [pukkandan](https://github.com/pukkandan), [Alienmaster](https://github.com/Alienmaster)
* [extractor/aeon] Add extractor by [DoubleCouponDay](https://github.com/DoubleCouponDay)
* [extractor/agora] Add extractors by [selfisekai](https://github.com/selfisekai)
* [extractor/camsoda] Add extractor by [zulaport](https://github.com/zulaport)
* [extractor/cinetecamilano] Add extractor by [timendum](https://github.com/timendum)
* [extractor/deuxm] Add extractors by [CrankDatSouljaBoy](https://github.com/CrankDatSouljaBoy)
* [extractor/genius] Add extractors by [bashonly](https://github.com/bashonly)
* [extractor/japandiet] Add extractors by [Lesmiscore](https://github.com/Lesmiscore)
* [extractor/listennotes] Add extractor by [lksj](https://github.com/lksj), [pukkandan](https://github.com/pukkandan)
* [extractor/nos.nl] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/oftv] Add extractors by [DoubleCouponDay](https://github.com/DoubleCouponDay)
* [extractor/podbayfm] Add extractor by [schnusch](https://github.com/schnusch)
* [extractor/qingting] Add extractor by [bashonly](https://github.com/bashonly), [changren-wcr](https://github.com/changren-wcr)
* [extractor/screen9] Add extractor by [tpikonen](https://github.com/tpikonen)
* [extractor/swearnet] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/YleAreena] Add extractor by [pukkandan](https://github.com/pukkandan), [vitkhab](https://github.com/vitkhab)
* [extractor/zeenews] Add extractor by [m4tu4g](https://github.com/m4tu4g), [pukkandan](https://github.com/pukkandan)
* [extractor/youtube:tab] **Update tab handling for redesign** by [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
* Channel URLs download all uploads of the channel as multiple playlists, separated by tab
* [extractor/youtube] Differentiate between no comments and disabled comments by [coletdjnz](https://github.com/coletdjnz)
* [extractor/youtube] Extract `concurrent_view_count` for livestreams by [coletdjnz](https://github.com/coletdjnz)
* [extractor/youtube] Fix `duration` for premieres by [nosoop](https://github.com/nosoop)
* [extractor/youtube] Fix `live_status` by [coletdjnz](https://github.com/coletdjnz), [pukkandan](https://github.com/pukkandan)
* [extractor/youtube] Ignore incomplete data error for comment replies by [coletdjnz](https://github.com/coletdjnz)
* [extractor/youtube] Improve chapter parsing from description
* [extractor/youtube] Mark videos as fully watched by [bsun0000](https://github.com/bsun0000)
* [extractor/youtube] Update piped instances by [Generator](https://github.com/Generator)
* [extractor/youtube] Update playlist metadata extraction for new layout by [coletdjnz](https://github.com/coletdjnz)
* [extractor/youtube:tab] Fix video metadata from tabs by [coletdjnz](https://github.com/coletdjnz)
* [extractor/youtube:tab] Let `approximate_date` return timestamp
* [extractor/americastestkitchen] Fix extractor by [bashonly](https://github.com/bashonly)
* [extractor/bbc] Support onion domains by [DoubleCouponDay](https://github.com/DoubleCouponDay)
* [extractor/bilibili] Add chapters and misc cleanup by [lockmatrix](https://github.com/lockmatrix), [pukkandan](https://github.com/pukkandan)
* [extractor/bilibili] Fix BilibiliIE and Bangumi extractors by [lockmatrix](https://github.com/lockmatrix), [pukkandan](https://github.com/pukkandan)
* [extractor/bitchute] Better error for geo-restricted videos by [flashdagger](https://github.com/flashdagger)
* [extractor/bitchute] Improve `BitChuteChannelIE` by [flashdagger](https://github.com/flashdagger), [pukkandan](https://github.com/pukkandan)
* [extractor/bitchute] Simplify extractor by [flashdagger](https://github.com/flashdagger), [pukkandan](https://github.com/pukkandan)
* [extractor/cda] Support login through API by [selfisekai](https://github.com/selfisekai)
* [extractor/crunchyroll] Beta is now the only layout by [tejing1](https://github.com/tejing1)
* [extractor/detik] Avoid unnecessary extraction
* [extractor/doodstream] Remove extractor
* [extractor/dplay] Add MotorTrendOnDemand extractor by [bashonly](https://github.com/bashonly)
* [extractor/epoch] Support videos without data-trailer by [gibson042](https://github.com/gibson042), [pukkandan](https://github.com/pukkandan)
* [extractor/fox] Extract thumbnail by [vitkhab](https://github.com/vitkhab)
* [extractor/foxnews] Add `FoxNewsVideo` extractor
* [extractor/hotstar] Add season support by [m4tu4g](https://github.com/m4tu4g)
* [extractor/hotstar] Refactor v1 API calls
* [extractor/iprima] Make json+ld non-fatal by [bashonly](https://github.com/bashonly)
* [extractor/iq] Increase phantomjs timeout
* [extractor/kaltura] Support playlists by [jwoglom](https://github.com/jwoglom), [pukkandan](https://github.com/pukkandan)
* [extractor/lbry] Authenticate with cookies by [flashdagger](https://github.com/flashdagger)
* [extractor/livestreamfails] Support posts by [invertico](https://github.com/invertico)
* [extractor/mlb] Add `MLBArticle` extractor by [HobbyistDev](https://github.com/HobbyistDev)
* [extractor/mxplayer] Improve extractor by [m4tu4g](https://github.com/m4tu4g)
* [extractor/niconico] Always use HTTPS for requests
* [extractor/nzherald] Support new video embed by [coletdjnz](https://github.com/coletdjnz)
* [extractor/odnoklassniki] Support boosty.to embeds by [Lesmiscore](https://github.com/Lesmiscore), [megapro17](https://github.com/megapro17), [pukkandan](https://github.com/pukkandan)
* [extractor/paramountplus] Update API token by [bashonly](https://github.com/bashonly)
* [extractor/reddit] Add fallback format by [bashonly](https://github.com/bashonly)
* [extractor/redgifs] Fix extractors by [bashonly](https://github.com/bashonly), [pukkandan](https://github.com/pukkandan)
* [extractor/redgifs] Refresh auth token for 401 by [endotronic](https://github.com/endotronic), [pukkandan](https://github.com/pukkandan)
* [extractor/rumble] Add HLS formats and extract more metadata by [flashdagger](https://github.com/flashdagger)
* [extractor/sbs] Improve `_VALID_URL` by [bashonly](https://github.com/bashonly)
* [extractor/skyit] Fix extractors by [nixxo](https://github.com/nixxo)
* [extractor/stripchat] Fix hostname for HLS stream by [zulaport](https://github.com/zulaport)
* [extractor/stripchat] Improve error message by [freezboltz](https://github.com/freezboltz)
* [extractor/telegram] Add playlist support and more metadata by [bashonly](https://github.com/bashonly), [bsun0000](https://github.com/bsun0000)
* [extractor/Tnaflix] Fix for HTTP 500 by [SG5](https://github.com/SG5), [pukkandan](https://github.com/pukkandan)
* [extractor/tubitv] Better DRM detection by [bashonly](https://github.com/bashonly)
* [extractor/tvp] Update extractors by [selfisekai](https://github.com/selfisekai)
* [extractor/twitcasting] Fix `data-movie-playlist` extraction by [Lesmiscore](https://github.com/Lesmiscore)
* [extractor/twitter] Add onion site to `_VALID_URL` by [DoubleCouponDay](https://github.com/DoubleCouponDay)
* [extractor/twitter] Add Spaces extractor and GraphQL API by [Grub4K](https://github.com/Grub4K), [bashonly](https://github.com/bashonly), [nixxo](https://github.com/nixxo), [pukkandan](https://github.com/pukkandan)
* [extractor/twitter] Support multi-video posts by [Grub4K](https://github.com/Grub4K)
* [extractor/uktvplay] Fix `_VALID_URL`
* [extractor/viu] Support subtitles of on-screen text by [tkgmomosheep](https://github.com/tkgmomosheep)
* [extractor/VK] Fix playlist URLs by [the-marenga](https://github.com/the-marenga)
* [extractor/vlive] Extract `release_timestamp`
* [extractor/voot] Improve `_VALID_URL` by [freezboltz](https://github.com/freezboltz)
* [extractor/wordpress:mb.miniAudioPlayer] Add embed extractor by [coletdjnz](https://github.com/coletdjnz)
* [extractor/YoutubeWebArchive] Improve metadata extraction by [coletdjnz](https://github.com/coletdjnz)
* [extractor/zee5] Improve `_VALID_URL` by [m4tu4g](https://github.com/m4tu4g)
* [extractor/zenyandex] Fix extractors by [lksj](https://github.com/lksj), [puc9](https://github.com/puc9), [pukkandan](https://github.com/pukkandan)
### 2022.10.04
* Allow a `set` to be passed as `download_archive` by [pukkandan](https://github.com/pukkandan), [bashonly](https://github.com/bashonly)
@@ -42,7 +42,7 @@ You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [autho
* Improved/fixed support for HiDive, HotStar, Hungama, LBRY, LinkedInLearning, Mxplayer, SonyLiv, TV2, Vimeo, VLive etc
-## [Lesmiscore](https://github.com/Lesmiscore) (nao20010128nao)
+## [Lesmiscore](https://github.com/Lesmiscore) <sup><sub>(nao20010128nao)</sup></sub>
**Bitcoin**: bc1qfd02r007cutfdjwjmyy9w23rjvtls6ncve7r3s
**Monacoin**: mona1q3tf7dzvshrhfe3md379xtvt2n22duhglv5dskr
@@ -50,3 +50,10 @@ You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [autho
* Download live from start to end for YouTube
* Added support for new websites AbemaTV, mildom, PixivSketch, skeb, radiko, voicy, mirrativ, openrec, whowatch, damtomo, 17.live, mixch etc
* Improved/fixed support for fc2, YahooJapanNews, tver, iwara etc
## [bashonly](https://github.com/bashonly)
* `--cookies-from-browser` support for Firefox containers
* Added support for new websites Genius, Kick, NBCStations, Triller, VideoKen etc
* Improved/fixed support for Anvato, Brightcove, Instagram, ParamountPlus, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc
@@ -17,8 +17,8 @@ pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites \
clean-test:
rm -rf test/testdata/sigs/player-*.js tmp/ *.annotations.xml *.aria2 *.description *.dump *.frag \
*.frag.aria2 *.frag.urls *.info.json *.live_chat.json *.meta *.part* *.tmp *.temp *.unknown_video *.ytdl \
-*.3gp *.ape *.ass *.avi *.desktop *.f4v *.flac *.flv *.jpeg *.jpg *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 *.mp4 \
-*.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.swf *.swp *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp
+*.3gp *.ape *.ass *.avi *.desktop *.f4v *.flac *.flv *.gif *.jpeg *.jpg *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 \
+*.mp4 *.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.swf *.swp *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp
clean-dist:
rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ \
yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
README.md
@@ -10,9 +10,9 @@
[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)](https://discord.gg/H5MNcFW63r "Discord")
[![Supported Sites](https://img.shields.io/badge/-Supported_Sites-brightgreen.svg?style=for-the-badge)](supportedsites.md "Supported Sites")
[![License: Unlicense](https://img.shields.io/badge/-Unlicense-blue.svg?style=for-the-badge)](LICENSE "License")
-[![CI Status](https://img.shields.io/github/workflow/status/yt-dlp/yt-dlp/Core%20Tests/master?label=Tests&style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/actions "CI Status")
+[![CI Status](https://img.shields.io/github/actions/workflow/status/yt-dlp/yt-dlp/core.yml?branch=master&label=Tests&style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/actions "CI Status")
[![Commits](https://img.shields.io/github/commit-activity/m/yt-dlp/yt-dlp?label=commits&style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/commits "Commit History")
-[![Last Commit](https://img.shields.io/github/last-commit/yt-dlp/yt-dlp/master?label=&style=for-the-badge)](https://github.com/yt-dlp/yt-dlp/commits "Commit History")
+[![Last Commit](https://img.shields.io/github/last-commit/yt-dlp/yt-dlp/master?label=&style=for-the-badge&display_timestamp=committer)](https://github.com/yt-dlp/yt-dlp/commits "Commit History")
</div>
<!-- MANPAGE: END EXCLUDED SECTION -->
@@ -61,6 +61,8 @@ yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on t
* [Modifying metadata examples](#modifying-metadata-examples)
* [EXTRACTOR ARGUMENTS](#extractor-arguments)
* [PLUGINS](#plugins)
* [Installing Plugins](#installing-plugins)
* [Developing Plugins](#developing-plugins)
* [EMBEDDING YT-DLP](#embedding-yt-dlp)
* [Embedding examples](#embedding-examples)
* [DEPRECATED OPTIONS](#deprecated-options)
@@ -74,13 +76,13 @@ yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on t
# NEW FEATURES
-* Merged with **youtube-dl v2021.12.17+ [commit/ed5c44e](https://github.com/ytdl-org/youtube-dl/commit/ed5c44e7b74ac77f87ca5ed6cb5e964a0c6a0678)**<!--([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))--> and **youtube-dlc v2020.11.11-3+ [commit/f9401f2](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee)**: You get all the features and patches of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) in addition to the latest [youtube-dl](https://github.com/ytdl-org/youtube-dl)
+* Merged with **youtube-dl v2021.12.17+ [commit/195f22f](https://github.com/ytdl-org/youtube-dl/commit/195f22f)** <!--([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))--> and **youtube-dlc v2020.11.11-3+ [commit/f9401f2](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee)**: You get all the features and patches of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) in addition to the latest [youtube-dl](https://github.com/ytdl-org/youtube-dl)
* **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in YouTube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
* **[Format Sorting](#sorting-formats)**: The default format sorting options have been changed so that higher resolution and better codecs will be now preferred instead of simply using larger bitrate. Furthermore, you can now specify the sort order using `-S`. This allows for much easier format selection than what is possible by simply using `--format` ([examples](#format-selection-examples))
-* **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that the NicoNico livestreams are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.
+* **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that NicoNico livestreams are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.
* **YouTube improvements**:
* Supports Clips, Stories (`ytstories:<channel UCID>`), Search (including filters)**\***, YouTube Music Search, Channel-specific search, Search prefixes (`ytsearch:`, `ytsearchdate:`)**\***, Mixes, YouTube Music Albums/Channels ([except self-uploaded music](https://github.com/yt-dlp/yt-dlp/issues/723)), and Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`)
@@ -88,7 +90,7 @@ yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on t
* Supports some (but not all) age-gated content without cookies
* Download livestreams from the start using `--live-from-start` (*experimental*)
* `255kbps` audio is extracted (if available) from YouTube Music when premium cookies are given
-* Redirect channel's home URL automatically to `/video` to preserve the old behaviour
+* Channel URLs download all uploads of the channel, including shorts and live
* **Cookies from browser**: Cookies can be automatically extracted from all major web browsers using `--cookies-from-browser BROWSER[+KEYRING][:PROFILE][::CONTAINER]`
@@ -142,7 +144,7 @@ Some of yt-dlp's default options are different from that of youtube-dl and youtu
* `playlist_index` behaves differently when used with options like `--playlist-reverse` and `--playlist-items`. See [#302](https://github.com/yt-dlp/yt-dlp/issues/302) for details. You can use `--compat-options playlist-index` if you want to keep the earlier behavior
* The output of `-F` is listed in a new format. Use `--compat-options list-formats` to revert this
* Live chats (if available) are considered as subtitles. Use `--sub-langs all,-live_chat` to download all subtitles except live chat. You can also use `--compat-options no-live-chat` to prevent any live chat/danmaku from downloading
-* YouTube channel URLs are automatically redirected to `/video`. Append a `/featured` to the URL to download only the videos in the home page. If the channel does not have a videos tab, we try to download the equivalent `UU` playlist instead. For all other tabs, if the channel does not show the requested tab, an error will be raised. Also, `/live` URLs raise an error if there are no live videos instead of silently downloading the entire channel. You may use `--compat-options no-youtube-channel-redirect` to revert all these redirections
+* YouTube channel URLs download all uploads of the channel. To download only the videos in a specific tab, pass the tab's URL. If the channel does not show the requested tab, an error will be raised. Also, `/live` URLs raise an error if there are no live videos instead of silently downloading the entire channel. You may use `--compat-options no-youtube-channel-redirect` to revert all these redirections
* Unavailable videos are also listed for YouTube playlists. Use `--compat-options no-youtube-unavailable-videos` to remove this
* The upload dates extracted from YouTube are in UTC [when available](https://github.com/yt-dlp/yt-dlp/blob/89e4d86171c7b7c997c77d4714542e0383bf0db0/yt_dlp/extractor/youtube.py#L3898-L3900). Use `--compat-options no-youtube-prefer-utc-upload-date` to prefer the non-UTC upload date.
* If `ffmpeg` is used as the downloader, the downloading and merging of formats happen in a single step when possible. Use `--compat-options no-direct-merge` to revert this
@@ -151,12 +153,15 @@ Some of yt-dlp's default options are different from that of youtube-dl and youtu
* When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the separate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this
* `certifi` will be used for SSL root certificates, if installed. If you want to use system certificates (e.g. self-signed), use `--compat-options no-certifi`
* yt-dlp's sanitization of invalid characters in filenames is different/smarter than in youtube-dl. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
* yt-dlp tries to parse the external downloader outputs into the standard progress output if possible (Currently implemented: `aria2c`). You can use `--compat-options no-external-downloader-progress` to get the downloader output as-is
For ease of use, a few more compat options are available:
* `--compat-options all`: Use all compat options (Do NOT use)
* `--compat-options youtube-dl`: Same as `--compat-options all,-multistreams`
* `--compat-options youtube-dlc`: Same as `--compat-options all,-no-live-chat,-no-youtube-channel-redirect`
* `--compat-options 2021`: Same as `--compat-options 2022,no-certifi,filename-sanitization,no-youtube-prefer-utc-upload-date`
* `--compat-options 2022`: Same as `--compat-options no-external-downloader-progress`. Use this to enable all future compat options
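As a concrete sketch of that guard (assuming `~/.config/yt-dlp/config`, one of the default config file locations), the config would simply contain the option on a line of its own:
```
# Lock in the 2022 defaults to guard against future breaking changes
--compat-options 2022
```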
# INSTALLATION
@@ -179,7 +184,7 @@ You can use `yt-dlp -U` to update if you are [using the release binaries](#relea
If you [installed with PIP](https://github.com/yt-dlp/yt-dlp/wiki/Installation#with-pip), simply re-run the same command that was used to install the program
-For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation) or refer their documentation
+For other third-party package managers, see [the wiki](https://github.com/yt-dlp/yt-dlp/wiki/Installation#third-party-package-managers) or refer their documentation
<!-- MANPAGE: BEGIN EXCLUDED SECTION -->
@@ -201,6 +206,8 @@ File|Description
[yt-dlp_min.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_min.exe)|Windows (Win7 SP1+) standalone x64 binary built with `py2exe`<br/> ([Not recommended](#standalone-py2exe-builds-windows))
[yt-dlp_linux](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux)|Linux standalone x64 binary
[yt-dlp_linux.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux.zip)|Unpackaged Linux executable (no auto-update)
[yt-dlp_linux_armv7l](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_armv7l)|Linux standalone armv7l (32-bit) binary
[yt-dlp_linux_aarch64](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_aarch64)|Linux standalone aarch64 (64-bit) binary
[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows executable (no auto-update)
[yt-dlp_macos.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos.zip)|Unpackaged MacOS (10.15+) executable (no auto-update)
[yt-dlp_macos_legacy](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos_legacy)|MacOS (10.9+) standalone x64 executable
@@ -215,7 +222,7 @@ File|Description
<!-- MANPAGE: END EXCLUDED SECTION -->
-Note: The manpages, shell completion files etc. are available in the [source tarball](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
+**Note**: The manpages, shell completion files etc. are available in the [source tarball](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.tar.gz)
## DEPENDENCIES
Python versions 3.7+ (CPython and PyPy) are supported. Other versions and implementations may or may not work correctly.
@@ -231,8 +238,9 @@ While all the other dependencies are optional, `ffmpeg` and `ffprobe` are highly
* [**ffmpeg** and **ffprobe**](https://www.ffmpeg.org) - Required for [merging separate video and audio files](#format-selection) as well as for various [post-processing](#post-processing-options) tasks. License [depends on the build](https://www.ffmpeg.org/legal.html)
-<!-- TODO: ffmpeg has merged this patch. Remove this note once there is new release -->
-**Note**: There are some regressions in newer ffmpeg versions that causes various issues when used alongside yt-dlp. Since ffmpeg is such an important dependency, we provide [custom builds](https://github.com/yt-dlp/FFmpeg-Builds#ffmpeg-static-auto-builds) with patches for these issues at [yt-dlp/FFmpeg-Builds](https://github.com/yt-dlp/FFmpeg-Builds). See [the readme](https://github.com/yt-dlp/FFmpeg-Builds#patches-applied) for details on the specific issues solved by these builds
+There are bugs in ffmpeg that causes various issues when used alongside yt-dlp. Since ffmpeg is such an important dependency, we provide [custom builds](https://github.com/yt-dlp/FFmpeg-Builds#ffmpeg-static-auto-builds) with patches for some of these issues at [yt-dlp/FFmpeg-Builds](https://github.com/yt-dlp/FFmpeg-Builds). See [the readme](https://github.com/yt-dlp/FFmpeg-Builds#patches-applied) for details on the specific issues solved by these builds
+**Important**: What you need is ffmpeg *binary*, **NOT** [the python package of the same name](https://pypi.org/project/ffmpeg)
### Networking
* [**certifi**](https://github.com/certifi/python-certifi)\* - Provides Mozilla's root certificate bundle. Licensed under [MPLv2](https://github.com/certifi/python-certifi/blob/master/LICENSE)
@@ -277,7 +285,9 @@ To build the standalone executable, you must have Python and `pyinstaller` (plus
On some systems, you may need to use `py` or `python` instead of `python3`.

`pyinst.py` accepts any arguments that can be passed to `pyinstaller`, such as `--onefile/-F` or `--onedir/-D`, which is further [documented here](https://pyinstaller.org/en/stable/usage.html#what-to-generate).

**Note**: Pyinstaller versions below 4.4 [do not support](https://github.com/pyinstaller/pyinstaller#requirements-and-tested-platforms) Python installed from the Windows store without using a virtual environment.

**Important**: Running `pyinstaller` directly **without** using `pyinst.py` is **not** officially supported. This may or may not work correctly.
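For example, a couple of typical invocations from the repository root (illustrative; any other `pyinstaller` arguments can be appended in the same way):

```
$ python3 pyinst.py            # build a standalone executable with the default options
$ python3 pyinst.py --onedir   # pass --onedir through to pyinstaller
```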
@@ -410,6 +420,8 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
    --source-address IP             Client-side IP address to bind to
    -4, --force-ipv4                Make all connections via IPv4
    -6, --force-ipv6                Make all connections via IPv6
    --enable-file-urls              Enable file:// URLs. This is disabled by
                                    default for security reasons.
## Geo-restriction:
    --geo-verification-proxy URL    Use this proxy to verify the IP address for
@@ -428,23 +440,25 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
                                    explicitly provided IP block in CIDR notation

## Video Selection:
    -I, --playlist-items ITEM_SPEC  Comma separated playlist_index of the items
                                    to download. You can specify a range using
                                    "[START]:[STOP][:STEP]". For backward
                                    compatibility, START-STOP is also supported.
                                    Use negative indices to count from the right
                                    and negative STEP to download in reverse
                                    order. E.g. "-I 1:3,7,-5::2" used on a
                                    playlist of size 15 will download the items
                                    at index 1,2,3,7,11,13,15
    --min-filesize SIZE             Abort download if filesize is smaller than
                                    SIZE, e.g. 50k or 44.6M
    --max-filesize SIZE             Abort download if filesize is larger than
                                    SIZE, e.g. 50k or 44.6M
    --date DATE                     Download only videos uploaded on this date.
                                    The date can be "YYYYMMDD" or in the format
                                    [now|today|yesterday][-N[day|week|month|year]].
                                    E.g. "--date today-2weeks" downloads only
                                    videos uploaded on the same day two weeks ago
    --datebefore DATE               Download only videos uploaded on or before
                                    this date. The date formats accepted are the
                                    same as --date
@@ -487,9 +501,9 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
                                    a file that is in the archive
    --break-on-reject               Stop the download process when encountering
                                    a file that has been filtered out
    --break-per-input               Alters --max-downloads, --break-on-existing,
                                    --break-on-reject, and autonumber to reset
                                    per input URL
    --no-break-per-input            --break-on-existing and similar options
                                    terminate the entire download queue
    --skip-playlist-after-errors N  Number of allowed failures until the rest of
@@ -521,8 +535,8 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
                                    linear=1::2 --retry-sleep fragment:exp=1:20
    --skip-unavailable-fragments    Skip unavailable fragments for DASH,
                                    hlsnative and ISM downloads (default)
                                    (Alias: --no-abort-on-unavailable-fragments)
    --abort-on-unavailable-fragments
                                    Abort download if a fragment is unavailable
                                    (Alias: --no-skip-unavailable-fragments)
    --keep-fragments                Keep downloaded fragments on disk after
@@ -721,7 +735,7 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
                                    screen, optionally prefixed with when to
                                    print it, separated by a ":". Supported
                                    values of "WHEN" are the same as that of
                                    --use-postprocessor (default: video).
                                    Implies --quiet. Implies --simulate unless
                                    --no-simulate or later stages of WHEN are
                                    used. This option can be used multiple times
@@ -889,11 +903,11 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
                                    specific bitrate like 128K (default 5)
    --remux-video FORMAT            Remux the video into another container if
                                    necessary (currently supported: avi, flv,
                                    gif, mkv, mov, mp4, webm, aac, aiff, alac,
                                    flac, m4a, mka, mp3, ogg, opus, vorbis,
                                    wav). If target container does not support
                                    the video/audio codec, remuxing will fail.
                                    You can specify multiple rules; e.g.
                                    "aac>m4a/mov>mp4/mkv" will remux aac to m4a,
                                    mov to mp4 and anything else to mkv
    --recode-video FORMAT           Re-encode the video into another format if
@@ -948,13 +962,18 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
                                    mkv/mka video files
    --no-embed-info-json            Do not embed the infojson as an attachment
                                    to the video file
    --parse-metadata [WHEN:]FROM:TO
                                    Parse additional metadata like title/artist
                                    from other fields; see "MODIFYING METADATA"
                                    for details. Supported values of "WHEN" are
                                    the same as that of --use-postprocessor
                                    (default: pre_process)
    --replace-in-metadata [WHEN:]FIELDS REGEX REPLACE
                                    Replace text in a metadata field using the
                                    given regex. This option can be used
                                    multiple times. Supported values of "WHEN"
                                    are the same as that of --use-postprocessor
                                    (default: pre_process)
    --xattrs                        Write metadata to the video file's xattrs
                                    (using dublin core and xdg standards)
    --concat-playlist POLICY        Concatenate videos in a playlist. One of
@@ -975,18 +994,18 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
    --ffmpeg-location PATH          Location of the ffmpeg binary; either the
                                    path to the binary or its containing directory
    --exec [WHEN:]CMD               Execute a command, optionally prefixed with
                                    when to execute it, separated by a ":".
                                    Supported values of "WHEN" are the same as
                                    that of --use-postprocessor (default:
                                    after_move). Same syntax as the output
                                    template can be used to pass any field as
                                    arguments to the command. After download, an
                                    additional field "filepath" that contains
                                    the final path of the downloaded file is
                                    also available, and if no fields are passed,
                                    %(filepath,_filename|)q is appended to the
                                    end of the command. This option can be used
                                    multiple times
    --no-exec                       Remove any previously defined --exec
    --convert-subs FORMAT           Convert the subtitles to another format
                                    (currently supported: ass, lrc, srt, vtt)
@@ -1024,14 +1043,16 @@ You can also fork the project on GitHub and run your fork's [build workflow](.gi
                                    postprocessor is invoked. It can be one of
                                    "pre_process" (after video extraction),
                                    "after_filter" (after video passes filter),
                                    "video" (after --format; before
                                    --print/--output), "before_dl" (before each
                                    video download), "post_process" (after each
                                    video download; default), "after_move"
                                    (after moving the video file to its final
                                    location), "after_video" (after downloading
                                    and processing all formats of a video), or
                                    "playlist" (at end of playlist). This option
                                    can be used multiple times to add different
                                    postprocessors

## SponsorBlock Options:
Make chapter entries for, or remove various segments (sponsor,
@@ -1042,10 +1063,10 @@ Make chapter entries for, or remove various segments (sponsor,
                                    for, separated by commas. Available
                                    categories are sponsor, intro, outro,
                                    selfpromo, preview, filler, interaction,
                                    music_offtopic, poi_highlight, chapter, all
                                    and default (=all). You can prefix the
                                    category with a "-" to exclude it. See [1]
                                    for description of the categories. E.g.
                                    --sponsorblock-mark all,-preview
                                    [1] https://wiki.sponsor.ajay.app/w/Segment_Categories
    --sponsorblock-remove CATS      SponsorBlock categories to be removed from
@@ -1054,8 +1075,8 @@ Make chapter entries for, or remove various segments (sponsor,
                                    remove takes precedence. The syntax and
                                    available categories are the same as for
                                    --sponsorblock-mark except that "default"
                                    refers to "all,-filler" and poi_highlight
                                    and chapter are not available
    --sponsorblock-chapter-title TEMPLATE
                                    An output template for the title of the
                                    SponsorBlock chapters created by
@@ -1099,15 +1120,20 @@ You can configure yt-dlp by placing any supported command line option to a confi
    * If `-P` is not given, the current directory is searched
1. **User Configuration**:
    * `${XDG_CONFIG_HOME}/yt-dlp/config` (recommended on Linux/macOS)
    * `${XDG_CONFIG_HOME}/yt-dlp/config.txt`
    * `${XDG_CONFIG_HOME}/yt-dlp.conf`
    * `${APPDATA}/yt-dlp/config` (recommended on Windows)
    * `${APPDATA}/yt-dlp/config.txt`
    * `~/yt-dlp.conf`
    * `~/yt-dlp.conf.txt`
    * `~/.yt-dlp/config`
    * `~/.yt-dlp/config.txt`

    See also: [Notes about environment variables](#notes-about-environment-variables)
1. **System Configuration**:
    * `/etc/yt-dlp.conf`
    * `/etc/yt-dlp/config`
    * `/etc/yt-dlp/config.txt`
E.g. with the following configuration file yt-dlp will always extract the audio, not copy the mtime, use a proxy and save all videos under `YouTube` directory in your home directory:
```
@@ -1126,7 +1152,7 @@ E.g. with the following configuration file yt-dlp will always extract the audio,
-o ~/YouTube/%(title)s.%(ext)s
```
**Note**: Options in configuration file are just the same options aka switches used in regular command line calls; thus there **must be no whitespace** after `-` or `--`, e.g. `-o` or `--proxy` but not `- o` or `-- proxy`. They must also be quoted when necessary, as if it were a UNIX shell.
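As a minimal illustration of this rule (the proxy address is a placeholder):

```
# Correct: no whitespace after "--", value quoted as in a shell
--proxy "127.0.0.1:3128"

# Wrong: whitespace after "--"
-- proxy 127.0.0.1:3128
```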
You can use `--ignore-config` if you want to disable all configuration files for a particular yt-dlp run. If `--ignore-config` is found inside any configuration file, no further configuration will be loaded. For example, having the option in the portable configuration file prevents loading of home, user, and system configurations. Additionally, (for backward compatibility) if `--ignore-config` is found inside the system configuration file, the user configuration is not loaded.
@@ -1189,9 +1215,9 @@ The field names themselves (the part inside the parenthesis) can also have some
1. **Default**: A literal default value can be specified for when the field is empty using a `|` separator. This overrides `--output-na-placeholder`. E.g. `%(uploader|Unknown)s`
1. **More Conversions**: In addition to the normal format types `diouxXeEfFgGcrs`, yt-dlp additionally supports converting to `B` = **B**ytes, `j` = **j**son (flag `#` for pretty-printing, `+` for Unicode), `h` = HTML escaping, `l` = a comma separated **l**ist (flag `#` for `\n` newline-separated), `q` = a string **q**uoted for the terminal (flag `#` to split a list into different arguments), `D` = add **D**ecimal suffixes (e.g. 10M) (flag `#` to use 1024 as factor), and `S` = **S**anitize as filename (flag `#` for restricted)
1. **Unicode normalization**: The format type `U` can be used for NFC [Unicode normalization](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize). The alternate form flag (`#`) changes the normalization to NFD and the conversion flag `+` can be used for NFKC/NFKD compatibility equivalence normalization. E.g. `%(title)+.100U` is NFKC
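For instance, a couple of illustrative commands using these conversions (`URL` is a placeholder):

```
# NFKC-normalize the title and truncate it to 100 characters
$ yt-dlp -o "%(title)+.100U [%(id)s].%(ext)s" URL

# Print the view count with a decimal suffix, e.g. "1.23M views"
$ yt-dlp --print "%(view_count)D views" URL
```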
To summarize, the general syntax for a field is:
```
@@ -1200,6 +1226,10 @@ To summarize, the general syntax for a field is:
Additionally, you can set different output templates for the various metadata files separately from the general output template by specifying the type of file followed by the template separated by a colon `:`. The different file types supported are `subtitle`, `thumbnail`, `description`, `annotation` (deprecated), `infojson`, `link`, `pl_thumbnail`, `pl_description`, `pl_infojson`, `chapter`, `pl_video`. E.g. `-o "%(title)s.%(ext)s" -o "thumbnail:%(title)s\%(title)s.%(ext)s"` will put the thumbnails in a folder with the same name as the video. If any of the templates is empty, that type of file will not be written. E.g. `--write-thumbnail -o "thumbnail:"` will write thumbnails only for playlists and not for video.
<a id="outtmpl-postprocess-note"></a>
**Note**: Due to post-processing (i.e. merging etc.), the actual output filename might differ. Use `--print after_move:filepath` to get the name after all post-processing is complete.
The available fields are:

- `id` (string): Video identifier
@@ -1226,6 +1256,7 @@ The available fields are:
- `duration` (numeric): Length of the video in seconds
- `duration_string` (string): Length of the video (HH:mm:ss)
- `view_count` (numeric): How many users have watched the video on the platform
- `concurrent_view_count` (numeric): How many users are currently watching the video on the platform
- `like_count` (numeric): Number of positive ratings of the video
- `dislike_count` (numeric): Number of negative ratings of the video
- `repost_count` (numeric): Number of reposts of the video
@@ -1299,7 +1330,7 @@ Available only when using `--download-sections` and for `chapter:` prefix when u
Available only when used in `--print`:

- `urls` (string): The URLs of all requested formats, one in each line
- `filename` (string): Name of the video file. Note that the [actual filename may differ](#outtmpl-postprocess-note)
- `formats_table` (table): The video format table as printed by `--list-formats`
- `thumbnails_table` (table): The thumbnail format table as printed by `--list-thumbnails`
- `subtitles_table` (table): The subtitle format table as printed by `--list-subs`
@@ -1310,14 +1341,15 @@ Available only in `--sponsorblock-chapter-title`:
- `start_time` (numeric): Start time of the chapter in seconds
- `end_time` (numeric): End time of the chapter in seconds
- `categories` (list): The [SponsorBlock categories](https://wiki.sponsor.ajay.app/w/Types#Category) the chapter belongs to
- `category` (string): The smallest SponsorBlock category the chapter belongs to
- `category_names` (list): Friendly names of the categories
- `name` (string): Friendly name of the smallest category
- `type` (string): The [SponsorBlock action type](https://wiki.sponsor.ajay.app/w/Types#Action_Type) of the chapter
Each aforementioned sequence when referenced in an output template will be replaced by the actual value corresponding to the sequence name. E.g. for `-o %(title)s-%(id)s.%(ext)s` and an mp4 video with title `yt-dlp test video` and id `BaW_jenozKc`, this will result in a `yt-dlp test video-BaW_jenozKc.mp4` file created in the current directory.

**Note**: Some of the sequences are not guaranteed to be present since they depend on the metadata obtained by a particular extractor. Such sequences will be replaced with placeholder value provided with `--output-na-placeholder` (`NA` by default).

**Tip**: Look at the `-j` output to identify which fields are available for the particular URL
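For instance, one quick way to inspect the available fields (assuming a Python interpreter is on your PATH; `URL` is a placeholder):

```
$ yt-dlp -j URL | python -m json.tool | less
```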
@@ -1432,6 +1464,7 @@ The following numeric meta fields can be used with comparisons `<`, `<=`, `>`, `
- `filesize_approx`: An estimate for the number of bytes
- `width`: Width of the video, if known
- `height`: Height of the video, if known
- `aspect_ratio`: Aspect ratio of the video, if known
- `tbr`: Average bitrate of audio and video in KBit/s
- `abr`: Average audio bitrate in KBit/s
- `vbr`: Average video bitrate in KBit/s
@@ -1457,7 +1490,7 @@ Also filtering work for comparisons `=` (equals), `^=` (starts with), `$=` (ends
Any string comparison may be prefixed with negation `!` in order to produce an opposite comparison, e.g. `!*=` (does not contain). The comparand of a string comparison needs to be quoted with either double or single quotes if it contains spaces or special characters other than `._-`.

**Note**: None of the aforementioned meta fields are guaranteed to be present since this solely depends on the metadata obtained by particular extractor, i.e. the metadata offered by the website. Any other field made available by the extractor can also be used for filtering.

Formats for which the value is not known are excluded unless you put a question mark (`?`) after the operator. You can combine format filters, so `-f "[height<=?720][tbr>500]"` selects up to 720p videos (or videos where the height is not known) with a bitrate of at least 500 KBit/s. You can also use the filters with `all` to download all formats that satisfy the filter, e.g. `-f "all[vcodec=none]"` selects all audio-only formats.
@@ -1477,9 +1510,9 @@ The available fields are:
- `source`: The preference of the source
- `proto`: Protocol used for download (`https`/`ftps` > `http`/`ftp` > `m3u8_native`/`m3u8` > `http_dash_segments` > `websocket_frag` > `mms`/`rtsp` > `f4f`/`f4m`)
- `vcodec`: Video Codec (`av01` > `vp9.2` > `vp9` > `h265` > `h264` > `vp8` > `h263` > `theora` > other)
- `acodec`: Audio Codec (`flac`/`alac` > `wav`/`aiff` > `opus` > `vorbis` > `aac` > `mp4a` > `mp3` > `ac4` > `eac3` > `ac3` > `dts` > other)
- `codec`: Equivalent to `vcodec,acodec`
- `vext`: Video Extension (`mp4` > `mov` > `webm` > `flv` > other). If `--prefer-free-formats` is used, `webm` is preferred.
- `aext`: Audio Extension (`m4a` > `aac` > `mp3` > `ogg` > `opus` > `webm` > other). If `--prefer-free-formats` is used, the order changes to `ogg` > `opus` > `webm` > `mp3` > `m4a` > `aac`
- `ext`: Equivalent to `vext,aext`
- `filesize`: Exact filesize, if known in advance
@@ -1555,7 +1588,7 @@ $ yt-dlp -S "+size,+br"
$ yt-dlp -f "bv*[ext=mp4]+ba[ext=m4a]/b[ext=mp4] / bv*+ba/b" $ yt-dlp -f "bv*[ext=mp4]+ba[ext=m4a]/b[ext=mp4] / bv*+ba/b"
# Download the best video with the best extension # Download the best video with the best extension
# (For video, mp4 > webm > flv. For audio, m4a > aac > mp3 ...) # (For video, mp4 > mov > webm > flv. For audio, m4a > aac > mp3 ...)
$ yt-dlp -S "ext" $ yt-dlp -S "ext"
@@ -1638,9 +1671,9 @@ The metadata obtained by the extractors can be modified by using `--parse-metada
`--replace-in-metadata FIELDS REGEX REPLACE` is used to replace text in any metadata field using [python regular expression](https://docs.python.org/3/library/re.html#regular-expression-syntax). [Backreferences](https://docs.python.org/3/library/re.html?highlight=backreferences#re.sub) can be used in the replace string for advanced use.

The general syntax of `--parse-metadata FROM:TO` is to give the name of a field or an [output template](#output-template) to extract data from, and the format to interpret it as, separated by a colon `:`. Either a [python regular expression](https://docs.python.org/3/library/re.html#regular-expression-syntax) with named capture groups, a single field name, or a similar syntax to the [output template](#output-template) (only `%(field)s` formatting is supported) can be used for `TO`. The option can be used multiple times to parse and modify various fields.

Note that these options preserve their relative order, allowing replacements to be made in parsed fields and vice versa. Also, any field thus created can be used in the [output template](#output-template) and will also affect the media file's metadata added when using `--embed-metadata`.
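For instance, a couple of illustrative commands (`URL` is a placeholder):

```
# Interpret the title as "Artist - Title" and set the "artist" field from it
$ yt-dlp --parse-metadata "title:%(artist)s - %(title)s" URL

# Replace spaces and underscores in the title and uploader fields with dashes
$ yt-dlp --replace-in-metadata "title,uploader" "[ _]" "-" URL
```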
This option also has a few special uses:
@@ -1710,7 +1743,7 @@ Some extractors accept additional arguments which can be passed using `--extract
The following extractors use this feature:

#### youtube
* `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube.py](https://github.com/yt-dlp/yt-dlp/blob/c26f9b991a0681fd3ea548d535919cec1fbbd430/yt_dlp/extractor/youtube.py#L381-L390) for list of supported content language codes
* `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
* `player_client`: Clients to extract video data from. The main clients are `web`, `android` and `ios` with variants `_music`, `_embedded`, `_embedscreen`, `_creator` (e.g. `web_embedded`); and `mweb` and `tv_embedded` (agegate bypass) with no variants. By default, `android,web` is used, but `tv_embedded` and `creator` variants are added as required for age-gated videos. Similarly, the music variants are added for `music.youtube.com` urls. You can use `all` to use all the clients, and `default` for the default clients.
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
@@ -1723,17 +1756,16 @@ The following extractors use this feature:
#### youtubetab (YouTube playlists, channels, feeds, etc.)
* `skip`: One or more of `webpage` (skip initial webpage download), `authcheck` (allow the download of playlists requiring authentication when no initial webpage is downloaded. This may cause unwanted behavior, see [#1122](https://github.com/yt-dlp/yt-dlp/pull/1122) for more details)
* `approximate_date`: Extract approximate `upload_date` and `timestamp` in flat-playlist. This may cause date-based filters to be slightly off

#### generic
* `fragment_query`: Passthrough any query in mpd/m3u8 manifest URLs to their fragments. Does not apply to ffmpeg

#### funimation
* `language`: Audio languages to extract, e.g. `funimation:language=english,japanese`
* `version`: The video version to extract - `uncut` or `simulcast`

#### crunchyrollbeta (Crunchyroll)
* `format`: Which stream type(s) to extract (default: `adaptive_hls`). Potentially useful values include `adaptive_hls`, `adaptive_dash`, `vo_adaptive_hls`, `vo_adaptive_dash`, `download_hls`, `download_dash`, `multitrack_adaptive_hls_v2`
* `hardsub`: Preference order for which hardsub versions to extract, or `all` (default: `None` = no hardsubs), e.g. `crunchyrollbeta:hardsub=en-US,None`
@@ -1755,33 +1787,87 @@ The following extractors use this feature:
* `dr`: dynamic range to ignore - one or more of `sdr`, `hdr10`, `dv`

#### tiktok
* `api_hostname`: Hostname to use for mobile API requests, e.g. `api-h2.tiktokv.com`
* `app_version`: App version to call mobile APIs with - should be set along with `manifest_app_version`, e.g. `20.2.1`
* `manifest_app_version`: Numeric app version to call mobile APIs with, e.g. `221`

#### rokfinchannel
* `tab`: Which tab to download - one of `new`, `top`, `videos`, `podcasts`, `streams`, `stacks`
#### twitter
* `force_graphql`: Force usage of the GraphQL API. By default it will only be used if login cookies are provided
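For instance, an illustrative invocation passing arguments to two different extractors (the values are examples only; `URL` is a placeholder):

```
$ yt-dlp --extractor-args "youtube:player_client=android;lang=ja" --extractor-args "funimation:language=english,japanese" URL
```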
**Note**: These options may be changed/removed in the future without concern for backward compatibility

<!-- MANPAGE: MOVE "INSTALLATION" SECTION HERE -->

# PLUGINS
Note that **all** plugins are imported even if not invoked, and that **there are no checks** performed on plugin code. **Use plugins at your own risk and only if you trust the code!**

Plugins can be of `<type>`s `extractor` or `postprocessor`.
- Extractor plugins do not need to be enabled from the CLI and are automatically invoked when the input URL is suitable for it.
- Extractor plugins take priority over builtin extractors.
- Postprocessor plugins can be invoked using `--use-postprocessor NAME`.

Plugins are loaded from the namespace packages `yt_dlp_plugins.extractor` and `yt_dlp_plugins.postprocessor`.

In other words, the file structure on the disk looks something like:
```
yt_dlp_plugins/
    extractor/
        myplugin.py
    postprocessor/
        myplugin.py
```
yt-dlp looks for these `yt_dlp_plugins` namespace folders in many locations (see below) and loads in plugins from **all** of them.
See the [wiki for some known plugins](https://github.com/yt-dlp/yt-dlp/wiki/Plugins)
## Installing Plugins
Plugins can be installed using various methods and locations.
1. **Configuration directories**:
Plugin packages (containing a `yt_dlp_plugins` namespace folder) can be dropped into the following standard [configuration locations](#configuration):
* **User Plugins**
* `${XDG_CONFIG_HOME}/yt-dlp/plugins/<package name>/yt_dlp_plugins/` (recommended on Linux/macOS)
* `${XDG_CONFIG_HOME}/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
* `${APPDATA}/yt-dlp/plugins/<package name>/yt_dlp_plugins/` (recommended on Windows)
* `~/.yt-dlp/plugins/<package name>/yt_dlp_plugins/`
* `~/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
* **System Plugins**
* `/etc/yt-dlp/plugins/<package name>/yt_dlp_plugins/`
* `/etc/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
2. **Executable location**: Plugin packages can similarly be installed in a `yt-dlp-plugins` directory under the executable location:
* Binary: where `<root-dir>/yt-dlp.exe`, `<root-dir>/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
* Source: where `<root-dir>/yt_dlp/__main__.py`, `<root-dir>/yt-dlp-plugins/<package name>/yt_dlp_plugins/`
3. **pip and other locations in `PYTHONPATH`**
* Plugin packages can be installed and managed using `pip`. See [yt-dlp-sample-plugins](https://github.com/yt-dlp/yt-dlp-sample-plugins) for an example.
* Note: plugin files between plugin packages installed with pip must have unique filenames.
* Any path in `PYTHONPATH` is searched for the `yt_dlp_plugins` namespace folder.
* Note: This does not apply for Pyinstaller/py2exe builds.
`.zip`, `.egg` and `.whl` archives containing a `yt_dlp_plugins` namespace folder in their root are also supported as plugin packages.
* e.g. `${XDG_CONFIG_HOME}/yt-dlp/plugins/mypluginpkg.zip` where `mypluginpkg.zip` contains `yt_dlp_plugins/<type>/myplugin.py`
Run yt-dlp with `--verbose` to check if the plugin has been loaded.
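For instance, a minimal sketch of a manual installation on Linux/macOS, assuming a hypothetical plugin file `myplugin.py` and the default `${XDG_CONFIG_HOME}` of `~/.config`:

```
$ mkdir -p ~/.config/yt-dlp/plugins/sample/yt_dlp_plugins/extractor
$ cp myplugin.py ~/.config/yt-dlp/plugins/sample/yt_dlp_plugins/extractor/
$ yt-dlp --verbose   # the loaded plugin should appear in the debug output
```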
## Developing Plugins
See the [yt-dlp-sample-plugins](https://github.com/yt-dlp/yt-dlp-sample-plugins) repo for a template plugin package and the [Plugin Development](https://github.com/yt-dlp/yt-dlp/wiki/Plugin-Development) section of the wiki for a plugin development guide.
All public classes with a name ending in `IE`/`PP` are imported from each file for extractors and postprocessors respectively. This respects underscore prefix (e.g. `_MyBasePluginIE` is private) and `__all__`. Modules can similarly be excluded by prefixing the module name with an underscore (e.g. `_myplugin.py`).
To replace an existing extractor with a subclass of one, set the `plugin_name` class keyword argument (e.g. `MyPluginIE(ABuiltInIE, plugin_name='myplugin')` will replace `ABuiltInIE` with `MyPluginIE`). Since the extractor replaces the parent, you should exclude the subclass extractor from being imported separately by making it private using one of the methods described above.
If you are a plugin author, add [yt-dlp-plugins](https://github.com/topics/yt-dlp-plugins) as a topic to your repository for discoverability.
See the [Developer Instructions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) on how to write and test an extractor.
# EMBEDDING YT-DLP
View File
@@ -10,7 +10,7 @@ from ..utils import (
)

# These bloat the lazy_extractors, so allow them to passthrough silently
ALLOWED_CLASSMETHODS = {'extract_from_webpage', 'get_testcases', 'get_webpage_testcases'}
_WARNED = False
View File
@@ -14,10 +14,17 @@ from devscripts.utils import get_filename_args, read_file, write_file
NO_ATTR = object()
STATIC_CLASS_PROPERTIES = [
    'IE_NAME', '_ENABLED', '_VALID_URL',  # Used for URL matching
    '_WORKING', 'IE_DESC', '_NETRC_MACHINE', 'SEARCH_KEY',  # Used for --extractor-descriptions
    'age_limit',  # Used for --age-limit (evaluated)
    '_RETURN_TYPE',  # Accessed in CLI only with instance (evaluated)
]
CLASS_METHODS = [
    'ie_key', 'suitable', '_match_valid_url',  # Used for URL matching
    'working', 'get_temp_id', '_match_id',  # Accessed just before instance creation
    'description',  # Used for --extractor-descriptions
    'is_suitable',  # Used for --age-limit
    'supports_login', 'is_single_video',  # Accessed in CLI only with instance
]
IE_TEMPLATE = '''
class {name}({bases}):
@@ -33,8 +40,12 @@ def main():
    _ALL_CLASSES = get_all_ies()  # Must be before import

    import yt_dlp.plugins
    from yt_dlp.extractor.common import InfoExtractor, SearchInfoExtractor

    # Filter out plugins
    _ALL_CLASSES = [cls for cls in _ALL_CLASSES if not cls.__module__.startswith(f'{yt_dlp.plugins.PACKAGE_NAME}.')]

    DummyInfoExtractor = type('InfoExtractor', (InfoExtractor,), {'IE_NAME': NO_ATTR})
    module_src = '\n'.join((
        MODULE_TEMPLATE,
View File
@@ -50,5 +50,7 @@ UPDATE_HINT = None
'''
write_file('yt_dlp/version.py', VERSION_FILE)
github_output = os.getenv('GITHUB_OUTPUT')
if github_output:
    write_file(github_output, f'ytdlp_version={VERSION}\n', 'a')
print(f'\nVersion = {VERSION}, Git HEAD = {GIT_HEAD}')
View File
@@ -7,8 +7,8 @@ def read_file(fname):
    return f.read()


def write_file(fname, content, mode='w'):
    with open(fname, mode, encoding='utf-8') as f:
        return f.write(content)
View File
@@ -12,9 +12,8 @@ from PyInstaller.__main__ import run as run_pyinstaller
from devscripts.utils import read_version

OS_NAME, MACHINE, ARCH = sys.platform, platform.machine().lower(), platform.architecture()[0][:2]
if MACHINE in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
    # NB: Windows x86 has MACHINE = AMD64 irrespective of bitness
    MACHINE = 'x86' if ARCH == '32' else ''
@@ -63,7 +62,7 @@ def exe(onedir):
    name = '_'.join(filter(None, (
        'yt-dlp',
        {'win32': '', 'darwin': 'macos'}.get(OS_NAME, OS_NAME),
        MACHINE,
    )))
    return name, ''.join(filter(None, (
        'dist/',
setup.py
View File
@@ -36,36 +36,34 @@ def packages():
def py2exe_params():
    warnings.warn(
        'py2exe builds do not support pycryptodomex and needs VC++14 to run. '
        'It is recommended to run "pyinst.py" to build using pyinstaller instead')

    return {
        'console': [{
            'script': './yt_dlp/__main__.py',
            'dest_base': 'yt-dlp',
            'icon_resources': [(1, 'devscripts/logo.ico')],
        }],
        'version_info': {
            'version': VERSION,
            'description': DESCRIPTION,
            'comments': LONG_DESCRIPTION.split('\n')[0],
            'product_name': 'yt-dlp',
            'product_version': VERSION,
        },
        'options': {
            'bundle_files': 0,
            'compressed': 1,
            'optimize': 2,
            'dist_dir': './dist',
            'excludes': ['Crypto', 'Cryptodome'],  # py2exe cannot import Crypto
            'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
            # Modules that are only imported dynamically must be added here
            'includes': ['yt_dlp.compat._legacy'],
        },
        'zipfile': None,
    }
@@ -113,41 +111,58 @@ class build_lazy_extractors(Command):
    subprocess.run([sys.executable, 'devscripts/make_lazy_extractors.py'])
def main():
    if sys.argv[1:2] == ['py2exe']:
        params = py2exe_params()
        try:
            from py2exe import freeze
        except ImportError:
            import py2exe  # noqa: F401
            warnings.warn('You are using an outdated version of py2exe. Support for this version will be removed in the future')
            params['console'][0].update(params.pop('version_info'))
            params['options'] = {'py2exe': params.pop('options')}
        else:
            return freeze(**params)
    else:
        params = build_params()

    setup(
        name='yt-dlp',
        version=VERSION,
        maintainer='pukkandan',
        maintainer_email='pukkandan.ytdlp@gmail.com',
        description=DESCRIPTION,
        long_description=LONG_DESCRIPTION,
        long_description_content_type='text/markdown',
        url='https://github.com/yt-dlp/yt-dlp',
        packages=packages(),
        install_requires=REQUIREMENTS,
        python_requires='>=3.7',
        project_urls={
            'Documentation': 'https://github.com/yt-dlp/yt-dlp#readme',
            'Source': 'https://github.com/yt-dlp/yt-dlp',
            'Tracker': 'https://github.com/yt-dlp/yt-dlp/issues',
            'Funding': 'https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators',
        },
        classifiers=[
            'Topic :: Multimedia :: Video',
            'Development Status :: 5 - Production/Stable',
            'Environment :: Console',
            'Programming Language :: Python',
            'Programming Language :: Python :: 3.7',
            'Programming Language :: Python :: 3.8',
            'Programming Language :: Python :: 3.9',
            'Programming Language :: Python :: 3.10',
            'Programming Language :: Python :: 3.11',
            'Programming Language :: Python :: Implementation',
            'Programming Language :: Python :: Implementation :: CPython',
            'Programming Language :: Python :: Implementation :: PyPy',
            'License :: Public Domain',
            'Operating System :: OS Independent',
        ],
        cmdclass={'build_lazy_extractors': build_lazy_extractors},
        **params
    )


main()
View File
@@ -23,7 +23,7 @@
- **9now.com.au**
- **abc.net.au**
- **abc.net.au:iview**
- **abc.net.au:iview:showseries**
- **abcnews**
- **abcnews:video**
- **abcotvs**: ABC Owned Television Stations
@@ -35,7 +35,7 @@
- **acast:channel**
- **AcFunBangumi**
- **AcFunVideo**
- **ADN**: [<abbr title="netrc machine"><em>animationdigitalnetwork</em></abbr>] Animation Digital Network
- **AdobeConnect**
- **adobetv**
- **adobetv:channel**
@@ -46,10 +46,12 @@
- **aenetworks**: A+E Networks: A&E, Lifetime, History.com, FYI Network and History Vault
- **aenetworks:collection**
- **aenetworks:show**
- **AeonCo**
- **afreecatv**: [<abbr title="netrc machine"><em>afreecatv</em></abbr>] afreecatv.com
- **afreecatv:live**: [<abbr title="netrc machine"><em>afreecatv</em></abbr>] afreecatv.com
- **afreecatv:user**
- **AirMozilla**
- **AirTV**
- **AliExpressLive**
- **AlJazeera**
- **Allocine**
@@ -59,6 +61,10 @@
- **Alura**: [<abbr title="netrc machine"><em>alura</em></abbr>]
- **AluraCourse**: [<abbr title="netrc machine"><em>aluracourse</em></abbr>]
- **Amara**
- **AmazonMiniTV**
- **amazonminitv:season**: Amazon MiniTV Series, "minitv:season:" prefix
- **amazonminitv:series**
- **AmazonReviews**
- **AmazonStore**
- **AMCNetworks**
- **AmericasTestKitchen**
@@ -119,17 +125,18 @@
- **Bandcamp:album**
- **Bandcamp:user**
- **Bandcamp:weekly**
- **BannedVideo**
- **bbc**: [<abbr title="netrc machine"><em>bbc</em></abbr>] BBC
- **bbc.co.uk**: [<abbr title="netrc machine"><em>bbc</em></abbr>] BBC iPlayer
- **bbc.co.uk:article**: BBC articles
- **bbc.co.uk:iplayer:episodes**
- **bbc.co.uk:iplayer:group**
- **bbc.co.uk:playlist**
- **BBVTV**: [<abbr title="netrc machine"><em>bbvtv</em></abbr>]
- **BBVTVLive**: [<abbr title="netrc machine"><em>bbvtv</em></abbr>]
- **BBVTVRecordings**: [<abbr title="netrc machine"><em>bbvtv</em></abbr>]
- **BeatBumpPlaylist**
- **BeatBumpVideo**
- **Beatport**
- **Beeg**
- **BehindKink**
@@ -149,13 +156,15 @@
- **Bilibili category extractor** - **Bilibili category extractor**
- **BilibiliAudio** - **BilibiliAudio**
- **BilibiliAudioAlbum** - **BilibiliAudioAlbum**
- **BiliBiliBangumi**
- **BiliBiliBangumiMedia**
- **BiliBiliPlayer** - **BiliBiliPlayer**
- **BiliBiliSearch**: Bilibili video search; "bilisearch:" prefix - **BiliBiliSearch**: Bilibili video search; "bilisearch:" prefix
- **BilibiliSpaceAudio** - **BilibiliSpaceAudio**
- **BilibiliSpacePlaylist** - **BilibiliSpacePlaylist**
- **BilibiliSpaceVideo** - **BilibiliSpaceVideo**
- **BiliIntl**: [<abbr title="netrc machine"><em>biliintl</em></abbr>] - **BiliIntl**: [<abbr title="netrc machine"><em>biliintl</em></abbr>]
- **BiliIntlSeries**: [<abbr title="netrc machine"><em>biliintl</em></abbr>] - **biliIntl:series**: [<abbr title="netrc machine"><em>biliintl</em></abbr>]
- **BiliLive** - **BiliLive**
- **BioBioChileTV** - **BioBioChileTV**
- **Biography** - **Biography**
@@ -195,6 +204,7 @@
- **Camdemy** - **Camdemy**
- **CamdemyFolder** - **CamdemyFolder**
- **CamModels** - **CamModels**
- **Camsoda**
- **CamtasiaEmbed** - **CamtasiaEmbed**
- **CamWithHer** - **CamWithHer**
- **CanalAlpha** - **CanalAlpha**
@@ -218,7 +228,7 @@
- **cbssports:embed** - **cbssports:embed**
- **CCMA** - **CCMA**
- **CCTV**: 央视网 - **CCTV**: 央视网
- **CDA** - **CDA**: [<abbr title="netrc machine"><em>cdapl</em></abbr>]
- **Cellebrite** - **Cellebrite**
- **CeskaTelevize** - **CeskaTelevize**
- **CGTN** - **CGTN**
@@ -233,6 +243,7 @@
- **cielotv.it** - **cielotv.it**
- **Cinchcast** - **Cinchcast**
- **Cinemax** - **Cinemax**
- **CinetecaMilano**
- **CiscoLiveSearch** - **CiscoLiveSearch**
- **CiscoLiveSession** - **CiscoLiveSession**
- **ciscowebex**: Cisco Webex - **ciscowebex**: Cisco Webex
@@ -272,9 +283,7 @@
- **CrowdBunker** - **CrowdBunker**
- **CrowdBunkerChannel** - **CrowdBunkerChannel**
- **crunchyroll**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>] - **crunchyroll**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>]
- **crunchyroll:beta**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>]
- **crunchyroll:playlist**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>] - **crunchyroll:playlist**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>]
- **crunchyroll:playlist:beta**: [<abbr title="netrc machine"><em>crunchyroll</em></abbr>]
- **CSpan**: C-SPAN - **CSpan**: C-SPAN
- **CSpanCongress** - **CSpanCongress**
- **CtsNews**: 華視新聞 - **CtsNews**: 華視新聞
@@ -311,6 +320,8 @@
- **democracynow** - **democracynow**
- **DestinationAmerica** - **DestinationAmerica**
- **DetikEmbed** - **DetikEmbed**
- **DeuxM**
- **DeuxMNews**
- **DHM**: Filmarchiv - Deutsches Historisches Museum - **DHM**: Filmarchiv - Deutsches Historisches Museum
- **Digg** - **Digg**
- **DigitalConcertHall**: [<abbr title="netrc machine"><em>digitalconcerthall</em></abbr>] DigitalConcertHall extractor - **DigitalConcertHall**: [<abbr title="netrc machine"><em>digitalconcerthall</em></abbr>] DigitalConcertHall extractor
@@ -328,7 +339,6 @@
- **DIYNetwork** - **DIYNetwork**
- **dlive:stream** - **dlive:stream**
- **dlive:vod** - **dlive:vod**
- **DoodStream**
- **Dotsub** - **Dotsub**
- **Douyin** - **Douyin**
- **DouyuShow** - **DouyuShow**
@@ -384,6 +394,7 @@
- **ESPNCricInfo** - **ESPNCricInfo**
- **EsriVideo** - **EsriVideo**
- **Europa** - **Europa**
- **EuroParlWebstream**
- **EuropeanTour** - **EuropeanTour**
- **Eurosport** - **Eurosport**
- **EUScreen** - **EUScreen**
@@ -422,6 +433,7 @@
- **Foxgay** - **Foxgay**
- **foxnews**: Fox News and Fox Business Video - **foxnews**: Fox News and Fox Business Video
- **foxnews:article** - **foxnews:article**
- **FoxNewsVideo**
- **FoxSports** - **FoxSports**
- **fptplay**: fptplay.vn - **fptplay**: fptplay.vn
- **FranceCulture** - **FranceCulture**
@@ -463,6 +475,8 @@
- **gem.cbc.ca**: [<abbr title="netrc machine"><em>cbcgem</em></abbr>] - **gem.cbc.ca**: [<abbr title="netrc machine"><em>cbcgem</em></abbr>]
- **gem.cbc.ca:live** - **gem.cbc.ca:live**
- **gem.cbc.ca:playlist** - **gem.cbc.ca:playlist**
- **Genius**
- **GeniusLyrics**
- **Gettr** - **Gettr**
- **GettrStreaming** - **GettrStreaming**
- **Gfycat** - **Gfycat**
@@ -483,7 +497,7 @@
- **Golem** - **Golem**
- **goodgame:stream** - **goodgame:stream**
- **google:podcasts** - **google:podcasts**
- **google:podcasts:feed** - **google:podcasts:feed**
- **GoogleDrive** - **GoogleDrive**
- **GoogleDrive:Folder** - **GoogleDrive:Folder**
- **GoPlay**: [<abbr title="netrc machine"><em>goplay</em></abbr>] - **GoPlay**: [<abbr title="netrc machine"><em>goplay</em></abbr>]
@@ -518,6 +532,7 @@
- **HotNewHipHop** - **HotNewHipHop**
- **hotstar** - **hotstar**
- **hotstar:playlist** - **hotstar:playlist**
- **hotstar:season**
- **hotstar:series** - **hotstar:series**
- **Howcast** - **Howcast**
- **HowStuffWorks** - **HowStuffWorks**
@@ -592,6 +607,8 @@
- **JWPlatform** - **JWPlatform**
- **Kakao** - **Kakao**
- **Kaltura** - **Kaltura**
- **Kanal2**
- **KankaNews**
- **Karaoketv** - **Karaoketv**
- **KarriereVideos** - **KarriereVideos**
- **Katsomo** - **Katsomo**
@@ -600,8 +617,10 @@
- **Ketnet** - **Ketnet**
- **khanacademy** - **khanacademy**
- **khanacademy:unit** - **khanacademy:unit**
- **Kick**
- **Kicker** - **Kicker**
- **KickStarter** - **KickStarter**
- **KickVOD**
- **KinjaEmbed** - **KinjaEmbed**
- **KinoPoisk** - **KinoPoisk**
- **KompasVideo** - **KompasVideo**
@@ -618,7 +637,7 @@
- **kuwo:singer**: 酷我音乐 - 歌手 - **kuwo:singer**: 酷我音乐 - 歌手
- **kuwo:song**: 酷我音乐 - **kuwo:song**: 酷我音乐
- **la7.it** - **la7.it**
- **la7.it:pod:episode** - **la7.it:pod:episode**
- **la7.it:podcast** - **la7.it:podcast**
- **laola1tv** - **laola1tv**
- **laola1tv:embed** - **laola1tv:embed**
@@ -652,9 +671,10 @@
- **LineLiveChannel** - **LineLiveChannel**
- **LinkedIn**: [<abbr title="netrc machine"><em>linkedin</em></abbr>] - **LinkedIn**: [<abbr title="netrc machine"><em>linkedin</em></abbr>]
- **linkedin:learning**: [<abbr title="netrc machine"><em>linkedin</em></abbr>] - **linkedin:learning**: [<abbr title="netrc machine"><em>linkedin</em></abbr>]
- **linkedin:learning:course**: [<abbr title="netrc machine"><em>linkedin</em></abbr>] - **linkedin:learning:course**: [<abbr title="netrc machine"><em>linkedin</em></abbr>]
- **LinuxAcademy**: [<abbr title="netrc machine"><em>linuxacademy</em></abbr>] - **LinuxAcademy**: [<abbr title="netrc machine"><em>linuxacademy</em></abbr>]
- **Liputan6** - **Liputan6**
- **ListenNotes**
- **LiTV** - **LiTV**
- **LiveJournal** - **LiveJournal**
- **livestream** - **livestream**
@@ -673,7 +693,7 @@
- **MagentaMusik360** - **MagentaMusik360**
- **mailru**: Видео@Mail.Ru - **mailru**: Видео@Mail.Ru
- **mailru:music**: Музыка@Mail.Ru - **mailru:music**: Музыка@Mail.Ru
- **mailru:music:search**: Музыка@Mail.Ru - **mailru:music:search**: Музыка@Mail.Ru
- **MainStreaming**: MainStreaming Player - **MainStreaming**: MainStreaming Player
- **MallTV** - **MallTV**
- **mangomolo:live** - **mangomolo:live**
@@ -701,6 +721,7 @@
- **Mediasite** - **Mediasite**
- **MediasiteCatalog** - **MediasiteCatalog**
- **MediasiteNamedCatalog** - **MediasiteNamedCatalog**
- **MediaStream**
- **MediaWorksNZVOD** - **MediaWorksNZVOD**
- **Medici** - **Medici**
- **megaphone.fm**: megaphone.fm embedded players - **megaphone.fm**: megaphone.fm embedded players
@@ -718,7 +739,7 @@
- **microsoftstream**: Microsoft Stream - **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom - **mildom**: Record ongoing live by specific user in Mildom
- **mildom:clip**: Clip in Mildom - **mildom:clip**: Clip in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom - **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: VOD in Mildom - **mildom:vod**: VOD in Mildom
- **minds** - **minds**
- **minds:channel** - **minds:channel**
@@ -736,6 +757,7 @@
- **mixcloud:playlist** - **mixcloud:playlist**
- **mixcloud:user** - **mixcloud:user**
- **MLB** - **MLB**
- **MLBArticle**
- **MLBTV**: [<abbr title="netrc machine"><em>mlb</em></abbr>] - **MLBTV**: [<abbr title="netrc machine"><em>mlb</em></abbr>]
- **MLBVideo** - **MLBVideo**
- **MLSSoccer** - **MLSSoccer**
@@ -753,6 +775,7 @@
- **MotherlessGroup** - **MotherlessGroup**
- **Motorsport**: motorsport.com - **Motorsport**: motorsport.com
- **MotorTrend** - **MotorTrend**
- **MotorTrendOnDemand**
- **MovieClips** - **MovieClips**
- **MovieFap** - **MovieFap**
- **Moviepilot** - **Moviepilot**
@@ -803,7 +826,7 @@
- **navernow** - **navernow**
- **NBA** - **NBA**
- **nba:watch** - **nba:watch**
- **nba:watch:collection** - **nba:watch:collection**
- **NBAChannel** - **NBAChannel**
- **NBAEmbed** - **NBAEmbed**
- **NBAWatchEmbed** - **NBAWatchEmbed**
@@ -817,7 +840,7 @@
- **NBCStations** - **NBCStations**
- **ndr**: NDR.de - Norddeutscher Rundfunk - **ndr**: NDR.de - Norddeutscher Rundfunk
- **ndr:embed** - **ndr:embed**
- **ndr:embed:base** - **ndr:embed:base**
- **NDTV** - **NDTV**
- **Nebula**: [<abbr title="netrc machine"><em>watchnebula</em></abbr>] - **Nebula**: [<abbr title="netrc machine"><em>watchnebula</em></abbr>]
- **nebula:channel**: [<abbr title="netrc machine"><em>watchnebula</em></abbr>] - **nebula:channel**: [<abbr title="netrc machine"><em>watchnebula</em></abbr>]
@@ -835,6 +858,7 @@
- **NetPlusTVRecordings**: [<abbr title="netrc machine"><em>netplus</em></abbr>] - **NetPlusTVRecordings**: [<abbr title="netrc machine"><em>netplus</em></abbr>]
- **Netverse** - **Netverse**
- **NetversePlaylist** - **NetversePlaylist**
- **NetverseSearch**: "netsearch:" prefix
- **Netzkino** - **Netzkino**
- **Newgrounds** - **Newgrounds**
- **Newgrounds:playlist** - **Newgrounds:playlist**
@@ -869,7 +893,7 @@
- **niconico:tag**: NicoNico video tag URLs - **niconico:tag**: NicoNico video tag URLs
- **NiconicoUser** - **NiconicoUser**
- **nicovideo:search**: Nico video search; "nicosearch:" prefix - **nicovideo:search**: Nico video search; "nicosearch:" prefix
- **nicovideo:search:date**: Nico video search, newest first; "nicosearchdate:" prefix - **nicovideo:search:date**: Nico video search, newest first; "nicosearchdate:" prefix
- **nicovideo:search_url**: Nico video search URLs - **nicovideo:search_url**: Nico video search URLs
- **Nintendo** - **Nintendo**
- **Nitter** - **Nitter**
@@ -877,10 +901,12 @@
- **njoy:embed** - **njoy:embed**
- **NJPWWorld**: [<abbr title="netrc machine"><em>njpwworld</em></abbr>] 新日本プロレスワールド - **NJPWWorld**: [<abbr title="netrc machine"><em>njpwworld</em></abbr>] 新日本プロレスワールド
- **NobelPrize** - **NobelPrize**
- **NoicePodcast**
- **NonkTube** - **NonkTube**
- **NoodleMagazine** - **NoodleMagazine**
- **Noovo** - **Noovo**
- **Normalboots** - **Normalboots**
- **NOSNLArticle**
- **NosVideo** - **NosVideo**
- **Nova**: TN.cz, Prásk.tv, Nova.cz, Novaplus.cz, FANDA.tv, Krásná.cz and Doma.cz - **Nova**: TN.cz, Prásk.tv, Nova.cz, Novaplus.cz, FANDA.tv, Krásná.cz and Doma.cz
- **NovaEmbed** - **NovaEmbed**
@@ -892,7 +918,7 @@
- **npo**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl - **npo**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **npo.nl:live** - **npo.nl:live**
- **npo.nl:radio** - **npo.nl:radio**
- **npo.nl:radio:fragment** - **npo.nl:radio:fragment**
- **Npr** - **Npr**
- **NRK** - **NRK**
- **NRKPlaylist** - **NRKPlaylist**
@@ -915,11 +941,14 @@
- **ocw.mit.edu** - **ocw.mit.edu**
- **OdaTV** - **OdaTV**
- **Odnoklassniki** - **Odnoklassniki**
- **OfTV**
- **OfTVPlaylist**
- **OktoberfestTV** - **OktoberfestTV**
- **OlympicsReplay** - **OlympicsReplay**
- **on24**: ON24 - **on24**: ON24
- **OnDemandKorea** - **OnDemandKorea**
- **OneFootball** - **OneFootball**
- **OnePlacePodcast**
- **onet.pl** - **onet.pl**
- **onet.tv** - **onet.tv**
- **onet.tv:channel** - **onet.tv:channel**
@@ -933,7 +962,7 @@
- **openrec:capture** - **openrec:capture**
- **openrec:movie** - **openrec:movie**
- **OraTV** - **OraTV**
- **orf:fm4:story**: fm4.orf.at stories - **orf:fm4:story**: fm4.orf.at stories
- **orf:iptv**: iptv.ORF.at - **orf:iptv**: iptv.ORF.at
- **orf:radio** - **orf:radio**
- **orf:tvthek**: ORF TVthek - **orf:tvthek**: ORF TVthek
@@ -981,7 +1010,7 @@
- **Pinterest** - **Pinterest**
- **PinterestCollection** - **PinterestCollection**
- **pixiv:sketch** - **pixiv:sketch**
- **pixiv:sketch:user** - **pixiv:sketch:user**
- **Pladform** - **Pladform**
- **PlanetMarathi** - **PlanetMarathi**
- **Platzi**: [<abbr title="netrc machine"><em>platzi</em></abbr>] - **Platzi**: [<abbr title="netrc machine"><em>platzi</em></abbr>]
@@ -999,6 +1028,8 @@
- **pluralsight**: [<abbr title="netrc machine"><em>pluralsight</em></abbr>] - **pluralsight**: [<abbr title="netrc machine"><em>pluralsight</em></abbr>]
- **pluralsight:course** - **pluralsight:course**
- **PlutoTV** - **PlutoTV**
- **PodbayFM**
- **PodbayFMChannel**
- **Podchaser** - **Podchaser**
- **podomatic** - **podomatic**
- **Pokemon** - **Pokemon**
@@ -1007,11 +1038,13 @@
- **PokerGoCollection**: [<abbr title="netrc machine"><em>pokergo</em></abbr>] - **PokerGoCollection**: [<abbr title="netrc machine"><em>pokergo</em></abbr>]
- **PolsatGo** - **PolsatGo**
- **PolskieRadio** - **PolskieRadio**
- **polskieradio:audition**
- **polskieradio:category**
- **polskieradio:kierowcow** - **polskieradio:kierowcow**
- **polskieradio:legacy**
- **polskieradio:player** - **polskieradio:player**
- **polskieradio:podcast** - **polskieradio:podcast**
- **polskieradio:podcast:list** - **polskieradio:podcast:list**
- **PolskieRadioCategory**
- **Popcorntimes** - **Popcorntimes**
- **PopcornTV** - **PopcornTV**
- **PornCom** - **PornCom**
@@ -1042,6 +1075,7 @@
- **puhutv:serie** - **puhutv:serie**
- **Puls4** - **Puls4**
- **Pyvideo** - **Pyvideo**
- **QingTing**
- **qqmusic**: QQ音乐 - **qqmusic**: QQ音乐
- **qqmusic:album**: QQ音乐 - 专辑 - **qqmusic:album**: QQ音乐 - 专辑
- **qqmusic:playlist**: QQ音乐 - 歌单 - **qqmusic:playlist**: QQ音乐 - 歌单
@@ -1122,7 +1156,7 @@
- **rtl.nl**: rtl.nl and rtlxl.nl - **rtl.nl**: rtl.nl and rtlxl.nl
- **rtl2** - **rtl2**
- **rtl2:you** - **rtl2:you**
- **rtl2:you:series** - **rtl2:you:series**
- **RTLLuLive** - **RTLLuLive**
- **RTLLuRadio** - **RTLLuRadio**
- **RTNews** - **RTNews**
@@ -1139,6 +1173,7 @@
- **rtvslo.si** - **rtvslo.si**
- **RUHD** - **RUHD**
- **Rule34Video** - **Rule34Video**
- **Rumble**
- **RumbleChannel** - **RumbleChannel**
- **RumbleEmbed** - **RumbleEmbed**
- **Ruptly** - **Ruptly**
@@ -1164,13 +1199,17 @@
- **SaltTVLive**: [<abbr title="netrc machine"><em>salttv</em></abbr>] - **SaltTVLive**: [<abbr title="netrc machine"><em>salttv</em></abbr>]
- **SaltTVRecordings**: [<abbr title="netrc machine"><em>salttv</em></abbr>] - **SaltTVRecordings**: [<abbr title="netrc machine"><em>salttv</em></abbr>]
- **SampleFocus** - **SampleFocus**
- **SamplePlugin**: (**Currently broken**)
- **Sangiin**: 参議院インターネット審議中継 (archive)
- **Sapo**: SAPO Vídeos - **Sapo**: SAPO Vídeos
- **savefrom.net** - **savefrom.net**
- **SBS**: sbs.com.au - **SBS**: sbs.com.au
- **schooltv** - **schooltv**
- **ScienceChannel** - **ScienceChannel**
- **screen.yahoo:search**: Yahoo screen search; "yvsearch:" prefix - **screen.yahoo:search**: Yahoo screen search; "yvsearch:" prefix
- **Screen9**
- **Screencast** - **Screencast**
- **Screencastify**
- **ScreencastOMatic** - **ScreencastOMatic**
- **ScrippsNetworks** - **ScrippsNetworks**
- **scrippsnetworks:watch** - **scrippsnetworks:watch**
@@ -1191,6 +1230,10 @@
- **ShareVideosEmbed** - **ShareVideosEmbed**
- **ShemarooMe** - **ShemarooMe**
- **ShowRoomLive** - **ShowRoomLive**
- **ShugiinItvLive**: 衆議院インターネット審議中継
- **ShugiinItvLiveRoom**: 衆議院インターネット審議中継 (中継)
- **ShugiinItvVod**: 衆議院インターネット審議中継 (ビデオライブラリ)
- **SibnetEmbed**
- **simplecast** - **simplecast**
- **simplecast:episode** - **simplecast:episode**
- **simplecast:podcast** - **simplecast:podcast**
@@ -1198,10 +1241,9 @@
- **Skeb** - **Skeb**
- **sky.it** - **sky.it**
- **sky:news** - **sky:news**
- **sky:news:story** - **sky:news:story**
- **sky:sports** - **sky:sports**
- **sky:sports:news** - **sky:sports:news**
- **skyacademy.it**
- **SkylineWebcams** - **SkylineWebcams**
- **skynewsarabia:article** - **skynewsarabia:article**
- **skynewsarabia:video** - **skynewsarabia:video**
@@ -1221,6 +1263,7 @@
- **soundcloud:set**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>] - **soundcloud:set**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
- **soundcloud:trackstation**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>] - **soundcloud:trackstation**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
- **soundcloud:user**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>] - **soundcloud:user**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
- **soundcloud:user:permalink**: [<abbr title="netrc machine"><em>soundcloud</em></abbr>]
- **SoundcloudEmbed** - **SoundcloudEmbed**
- **soundgasm** - **soundgasm**
- **soundgasm:profile** - **soundgasm:profile**
@@ -1277,6 +1320,7 @@
- **SVTPage** - **SVTPage**
- **SVTPlay**: SVT Play and Öppet arkiv - **SVTPlay**: SVT Play and Öppet arkiv
- **SVTSeries** - **SVTSeries**
- **SwearnetEpisode**
- **SWRMediathek** - **SWRMediathek**
- **Syfy** - **Syfy**
- **SYVDK** - **SYVDK**
@@ -1289,7 +1333,7 @@
- **Teachable**: [<abbr title="netrc machine"><em>teachable</em></abbr>] - **Teachable**: [<abbr title="netrc machine"><em>teachable</em></abbr>]
- **TeachableCourse**: [<abbr title="netrc machine"><em>teachable</em></abbr>] - **TeachableCourse**: [<abbr title="netrc machine"><em>teachable</em></abbr>]
- **teachertube**: teachertube.com videos - **teachertube**: teachertube.com videos
- **teachertube:user:collection**: teachertube.com user and collection videos - **teachertube:user:collection**: teachertube.com user and collection videos
- **TeachingChannel** - **TeachingChannel**
- **Teamcoco** - **Teamcoco**
- **TeamTreeHouse**: [<abbr title="netrc machine"><em>teamtreehouse</em></abbr>] - **TeamTreeHouse**: [<abbr title="netrc machine"><em>teamtreehouse</em></abbr>]
@@ -1347,6 +1391,8 @@
- **toggo** - **toggo**
- **Tokentube** - **Tokentube**
- **Tokentube:channel** - **Tokentube:channel**
- **tokfm:audition**
- **tokfm:podcast**
- **ToonGoggles** - **ToonGoggles**
- **tou.tv**: [<abbr title="netrc machine"><em>toutv</em></abbr>] - **tou.tv**: [<abbr title="netrc machine"><em>toutv</em></abbr>]
- **Toypics**: Toypics video - **Toypics**: Toypics video
@@ -1360,6 +1406,7 @@
- **TrovoChannelClip**: All Clips of a trovo.live channel; "trovoclip:" prefix - **TrovoChannelClip**: All Clips of a trovo.live channel; "trovoclip:" prefix
- **TrovoChannelVod**: All VODs of a trovo.live channel; "trovovod:" prefix - **TrovoChannelVod**: All VODs of a trovo.live channel; "trovovod:" prefix
- **TrovoVod** - **TrovoVod**
- **TrtCocukVideo**
- **TrueID** - **TrueID**
- **TruNews** - **TruNews**
- **Truth** - **Truth**
@@ -1378,7 +1425,6 @@
- **Turbo** - **Turbo**
- **tv.dfb.de** - **tv.dfb.de**
- **TV2** - **TV2**
- **TV24UAGenericPassthrough**
- **TV2Article** - **TV2Article**
- **TV2DK** - **TV2DK**
- **TV2DKBornholmPlay** - **TV2DKBornholmPlay**
@@ -1411,8 +1457,9 @@
- **tvopengr:watch**: tvopen.gr (and ethnos.gr) videos - **tvopengr:watch**: tvopen.gr (and ethnos.gr) videos
- **tvp**: Telewizja Polska - **tvp**: Telewizja Polska
- **tvp:embed**: Telewizja Polska - **tvp:embed**: Telewizja Polska
- **tvp:series**
- **tvp:stream** - **tvp:stream**
- **tvp:vod**
- **tvp:vod:series**
- **TVPlayer** - **TVPlayer**
- **TVPlayHome** - **TVPlayHome**
- **Tweakers** - **Tweakers**
@@ -1431,6 +1478,7 @@
- **twitter:broadcast** - **twitter:broadcast**
- **twitter:card** - **twitter:card**
- **twitter:shortener** - **twitter:shortener**
- **twitter:spaces**
- **udemy**: [<abbr title="netrc machine"><em>udemy</em></abbr>] - **udemy**: [<abbr title="netrc machine"><em>udemy</em></abbr>]
- **udemy:course**: [<abbr title="netrc machine"><em>udemy</em></abbr>] - **udemy:course**: [<abbr title="netrc machine"><em>udemy</em></abbr>]
- **UDNEmbed**: 聯合影音 - **UDNEmbed**: 聯合影音
@@ -1459,6 +1507,7 @@
- **VeeHD** - **VeeHD**
- **Veo** - **Veo**
- **Veoh** - **Veoh**
- **veoh:user**
- **Vesti**: Вести.Ru - **Vesti**: Вести.Ru
- **Vevo** - **Vevo**
- **VevoPlaylist** - **VevoPlaylist**
@@ -1478,6 +1527,11 @@
- **video.sky.it:live** - **video.sky.it:live**
- **VideoDetective** - **VideoDetective**
- **videofy.me** - **videofy.me**
- **VideoKen**
- **VideoKenCategory**
- **VideoKenPlayer**
- **VideoKenPlaylist**
- **VideoKenTopic**
- **videomore** - **videomore**
- **videomore:season** - **videomore:season**
- **videomore:video** - **videomore:video**
@@ -1497,6 +1551,7 @@
- **vimeo:group**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] - **vimeo:group**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
- **vimeo:likes**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Vimeo user likes - **vimeo:likes**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Vimeo user likes
- **vimeo:ondemand**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] - **vimeo:ondemand**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
- **vimeo:pro**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
- **vimeo:review**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Review pages on vimeo - **vimeo:review**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Review pages on vimeo
- **vimeo:user**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] - **vimeo:user**: [<abbr title="netrc machine"><em>vimeo</em></abbr>]
- **vimeo:watchlater**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Vimeo watch later list, ":vimeowatchlater" keyword (requires authentication) - **vimeo:watchlater**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Vimeo watch later list, ":vimeowatchlater" keyword (requires authentication)
@@ -1567,6 +1622,7 @@
- **WDRElefant** - **WDRElefant**
- **WDRPage** - **WDRPage**
- **web.archive:youtube**: web.archive.org saved youtube videos, "ytarchive:" prefix - **web.archive:youtube**: web.archive.org saved youtube videos, "ytarchive:" prefix
- **Webcamerapl**
- **Webcaster** - **Webcaster**
- **WebcasterFeed** - **WebcasterFeed**
- **WebOfStories** - **WebOfStories**
@@ -1580,10 +1636,12 @@
- **wikimedia.org** - **wikimedia.org**
- **Willow** - **Willow**
- **WimTV** - **WimTV**
- **WinSportsVideo**
- **Wistia** - **Wistia**
- **WistiaChannel** - **WistiaChannel**
- **WistiaPlaylist** - **WistiaPlaylist**
- **wnl**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl - **wnl**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **wordpress:mb.miniAudioPlayer**
- **wordpress:playlist** - **wordpress:playlist**
- **WorldStarHipHop** - **WorldStarHipHop**
- **wppilot** - **wppilot**
@@ -1591,16 +1649,14 @@
- **WSJ**: Wall Street Journal - **WSJ**: Wall Street Journal
- **WSJArticle** - **WSJArticle**
- **WWE** - **WWE**
- **wyborcza:video**
- **WyborczaPodcast**
- **XBef** - **XBef**
- **XboxClips** - **XboxClips**
- **XFileShare**: XFileShare based sites: Aparat, ClipWatching, GoUnlimited, GoVid, HolaVid, Streamty, TheVideoBee, Uqload, VidBom, vidlo, VidLocker, VidShare, VUp, WolfStream, XVideoSharing - **XFileShare**: XFileShare based sites: Aparat, ClipWatching, GoUnlimited, GoVid, HolaVid, Streamty, TheVideoBee, Uqload, VidBom, vidlo, VidLocker, VidShare, VUp, WolfStream, XVideoSharing
- **XHamster** - **XHamster**
- **XHamsterEmbed** - **XHamsterEmbed**
- **XHamsterUser** - **XHamsterUser**
- **xiami:album**: 虾米音乐 - 专辑
- **xiami:artist**: 虾米音乐 - 歌手
- **xiami:collection**: 虾米音乐 - 精选集
- **xiami:song**: 虾米音乐
- **ximalaya**: 喜马拉雅FM - **ximalaya**: 喜马拉雅FM
- **ximalaya:album**: 喜马拉雅FM 专辑 - **ximalaya:album**: 喜马拉雅FM 专辑
- **xinpianchang**: xinpianchang.com - **xinpianchang**: xinpianchang.com
@@ -1614,12 +1670,12 @@
- **XXXYMovies** - **XXXYMovies**
- **Yahoo**: Yahoo screen and movies - **Yahoo**: Yahoo screen and movies
- **yahoo:gyao** - **yahoo:gyao**
- **yahoo:gyao:player** - **yahoo:gyao:player**
- **yahoo:japannews**: Yahoo! Japan News - **yahoo:japannews**: Yahoo! Japan News
- **YandexDisk** - **YandexDisk**
- **yandexmusic:album**: Яндекс.Музыка - Альбом - **yandexmusic:album**: Яндекс.Музыка - Альбом
- **yandexmusic:artist:albums**: Яндекс.Музыка - Артист - Альбомы - **yandexmusic:artist:albums**: Яндекс.Музыка - Артист - Альбомы
- **yandexmusic:artist:tracks**: Яндекс.Музыка - Артист - Треки - **yandexmusic:artist:tracks**: Яндекс.Музыка - Артист - Треки
- **yandexmusic:playlist**: Яндекс.Музыка - Плейлист - **yandexmusic:playlist**: Яндекс.Музыка - Плейлист
- **yandexmusic:track**: Яндекс.Музыка - Трек - **yandexmusic:track**: Яндекс.Музыка - Трек
- **YandexVideo** - **YandexVideo**
@@ -1627,6 +1683,7 @@
- **YapFiles** - **YapFiles**
- **YesJapan** - **YesJapan**
- **yinyuetai:video**: 音悦Tai - **yinyuetai:video**: 音悦Tai
- **YleAreena**
- **Ynet** - **Ynet**
- **YouJizz** - **YouJizz**
- **youku**: 优酷 - **youku**: 优酷
@@ -1637,18 +1694,18 @@
- **YouPorn** - **YouPorn**
- **YourPorn** - **YourPorn**
- **YourUpload** - **YourUpload**
- **youtube**: YouTube - **youtube+sample+NSIG+AGB**: YouTube
- **youtube:clip** - **youtube:clip**
- **youtube:favorites**: YouTube liked videos; ":ytfav" keyword (requires cookies) - **youtube:favorites**: YouTube liked videos; ":ytfav" keyword (requires cookies)
- **youtube:history**: Youtube watch history; ":ythis" keyword (requires cookies) - **youtube:history**: Youtube watch history; ":ythis" keyword (requires cookies)
- **youtube:music:search_url**: YouTube music search URLs with selectable sections, e.g. #songs - **youtube:music:search_url**: YouTube music search URLs with selectable sections, e.g. #songs
- **youtube:notif**: YouTube notifications; ":ytnotif" keyword (requires cookies) - **youtube:notif**: YouTube notifications; ":ytnotif" keyword (requires cookies)
- **youtube:playlist**: YouTube playlists - **youtube:playlist**: YouTube playlists
- **youtube:recommended**: YouTube recommended videos; ":ytrec" keyword - **youtube:recommended**: YouTube recommended videos; ":ytrec" keyword
- **youtube:search**: YouTube search; "ytsearch:" prefix - **youtube:search**: YouTube search; "ytsearch:" prefix
- **youtube:search:date**: YouTube search, newest videos first; "ytsearchdate:" prefix - **youtube:search:date**: YouTube search, newest videos first; "ytsearchdate:" prefix
- **youtube:search_url**: YouTube search URLs with sorting and filter support - **youtube:search_url**: YouTube search URLs with sorting and filter support
- **youtube:shorts:pivot:audio**: YouTube Shorts audio pivot (Shorts using audio of a given video) - **youtube:shorts:pivot:audio**: YouTube Shorts audio pivot (Shorts using audio of a given video)
- **youtube:stories**: YouTube channel stories; "ytstories:" prefix - **youtube:stories**: YouTube channel stories; "ytstories:" prefix
- **youtube:subscriptions**: YouTube subscriptions feed; ":ytsubs" keyword (requires cookies) - **youtube:subscriptions**: YouTube subscriptions feed; ":ytsubs" keyword (requires cookies)
- **youtube:tab**: YouTube Tabs - **youtube:tab**: YouTube Tabs
@@ -1665,6 +1722,7 @@
- **ZDFChannel** - **ZDFChannel**
- **Zee5**: [<abbr title="netrc machine"><em>zee5</em></abbr>] - **Zee5**: [<abbr title="netrc machine"><em>zee5</em></abbr>]
- **zee5:series** - **zee5:series**
- **ZeeNews**
- **ZenYandex** - **ZenYandex**
- **ZenYandexChannel** - **ZenYandexChannel**
- **Zhihu** - **Zhihu**

test/helper.py

@@ -222,6 +222,10 @@ def sanitize_got_info_dict(got_dict):
     if test_info_dict.get('display_id') == test_info_dict.get('id'):
         test_info_dict.pop('display_id')
 
+    # Check url for flat entries
+    if got_dict.get('_type', 'video') != 'video' and got_dict.get('url'):
+        test_info_dict['url'] = got_dict['url']
+
     return test_info_dict
@@ -235,8 +239,9 @@ def expect_info_dict(self, got_dict, expected_dict):
     for key in mandatory_fields:
         self.assertTrue(got_dict.get(key), 'Missing mandatory field %s' % key)
     # Check for mandatory fields that are automatically set by YoutubeDL
-    for key in ['webpage_url', 'extractor', 'extractor_key']:
-        self.assertTrue(got_dict.get(key), 'Missing field: %s' % key)
+    if got_dict.get('_type', 'video') == 'video':
+        for key in ['webpage_url', 'extractor', 'extractor_key']:
+            self.assertTrue(got_dict.get(key), 'Missing field: %s' % key)
 
     test_info_dict = sanitize_got_info_dict(got_dict)
@@ -249,19 +254,16 @@ def expect_info_dict(self, got_dict, expected_dict):
             return v.__name__
         else:
             return repr(v)
-    info_dict_str = ''
-    if len(missing_keys) != len(expected_dict):
-        info_dict_str += ''.join(
-            f'    {_repr(k)}: {_repr(v)},\n'
-            for k, v in test_info_dict.items() if k not in missing_keys)
-
-        if info_dict_str:
-            info_dict_str += '\n'
+    info_dict_str = ''.join(
+        f'    {_repr(k)}: {_repr(v)},\n'
+        for k, v in test_info_dict.items() if k not in missing_keys)
+    if info_dict_str:
+        info_dict_str += '\n'
     info_dict_str += ''.join(
         f'    {_repr(k)}: {_repr(test_info_dict[k])},\n'
         for k in missing_keys)
-    write_string(
-        '\n\'info_dict\': {\n' + info_dict_str + '},\n', out=sys.stderr)
+    info_dict_str = '\n\'info_dict\': {\n' + info_dict_str + '},\n'
+    write_string(info_dict_str.replace('\n', '\n    '), out=sys.stderr)
     self.assertFalse(
         missing_keys,
         'Missing keys in test definition: %s' % (

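The helper changes above exist to support flat playlist tests: with `extract_flat`, entries are unresolved `url`-type dicts rather than videos, so the video-only mandatory fields (`webpage_url`, `extractor`, `extractor_key`) must be skipped and the entry's `url` kept for comparison. A minimal sketch of the kind of entry the new branches are written for (all field values here are invented for illustration):

```python
# Illustrative flat playlist entry; values are made up, not from a real test
flat_entry = {
    '_type': 'url',        # anything but 'video' takes the new code paths
    'id': 'abc123',        # 'id' stays mandatory for every test definition
    'url': 'https://example.com/watch/abc123',  # kept by sanitize_got_info_dict
    'title': 'Some title',
}

# Mirrors the branching added in the hunks above
if flat_entry.get('_type', 'video') != 'video' and flat_entry.get('url'):
    print('flat entry; comparing url:', flat_entry['url'])
```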
test/parameters.json

@@ -44,5 +44,6 @@
     "writesubtitles": false,
     "allsubtitles": false,
     "listsubtitles": false,
-    "fixup": "never"
+    "fixup": "never",
+    "allow_playlist_files": false
 }

test/test_InfoExtractor.py

@@ -41,7 +41,9 @@ class InfoExtractorTestRequestHandler(http.server.BaseHTTPRequestHandler):
 
 class DummyIE(InfoExtractor):
-    pass
+    def _sort_formats(self, formats, field_preference=[]):
+        self._downloader.sort_formats(
+            {'formats': formats, '_format_sort_fields': field_preference})
 
 
 class TestInfoExtractor(unittest.TestCase):

test/test_YoutubeDL.py

@@ -68,8 +68,7 @@ class TestFormatSelection(unittest.TestCase):
             {'ext': 'mp4', 'height': 460, 'url': TEST_URL},
         ]
         info_dict = _make_result(formats)
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['ext'], 'webm')
@@ -82,8 +81,7 @@ class TestFormatSelection(unittest.TestCase):
             {'ext': 'mp4', 'height': 1080, 'url': TEST_URL},
         ]
         info_dict['formats'] = formats
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['ext'], 'mp4')
@@ -97,8 +95,7 @@ class TestFormatSelection(unittest.TestCase):
             {'ext': 'flv', 'height': 720, 'url': TEST_URL},
         ]
         info_dict['formats'] = formats
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['ext'], 'mp4')
@@ -110,15 +107,14 @@ class TestFormatSelection(unittest.TestCase):
             {'ext': 'webm', 'height': 720, 'url': TEST_URL},
         ]
         info_dict['formats'] = formats
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['ext'], 'webm')
 
     def test_format_selection(self):
         formats = [
-            {'format_id': '35', 'ext': 'mp4', 'preference': 1, 'url': TEST_URL},
+            {'format_id': '35', 'ext': 'mp4', 'preference': 0, 'url': TEST_URL},
             {'format_id': 'example-with-dashes', 'ext': 'webm', 'preference': 1, 'url': TEST_URL},
             {'format_id': '45', 'ext': 'webm', 'preference': 2, 'url': TEST_URL},
             {'format_id': '47', 'ext': 'webm', 'preference': 3, 'url': TEST_URL},
@@ -186,22 +182,19 @@ class TestFormatSelection(unittest.TestCase):
         info_dict = _make_result(formats)
 
         ydl = YDL({'format': 'best'})
-        ie = YoutubeIE(ydl)
-        ie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(copy.deepcopy(info_dict))
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['format_id'], 'aac-64')
 
         ydl = YDL({'format': 'mp3'})
-        ie = YoutubeIE(ydl)
-        ie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(copy.deepcopy(info_dict))
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['format_id'], 'mp3-64')
 
         ydl = YDL({'prefer_free_formats': True})
-        ie = YoutubeIE(ydl)
-        ie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(copy.deepcopy(info_dict))
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['format_id'], 'ogg-64')
@@ -346,8 +339,7 @@ class TestFormatSelection(unittest.TestCase):
         info_dict = _make_result(list(formats_order), extractor='youtube')
         ydl = YDL({'format': 'bestvideo+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['format_id'], '248+172')
@@ -355,40 +347,35 @@ class TestFormatSelection(unittest.TestCase):
         info_dict = _make_result(list(formats_order), extractor='youtube')
         ydl = YDL({'format': 'bestvideo[height>=999999]+bestaudio/best'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded = ydl.downloaded_info_dicts[0]
         self.assertEqual(downloaded['format_id'], '38')
 
         info_dict = _make_result(list(formats_order), extractor='youtube')
         ydl = YDL({'format': 'bestvideo/best,bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
         self.assertEqual(downloaded_ids, ['137', '141'])
 
         info_dict = _make_result(list(formats_order), extractor='youtube')
         ydl = YDL({'format': '(bestvideo[ext=mp4],bestvideo[ext=webm])+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
         self.assertEqual(downloaded_ids, ['137+141', '248+141'])
 
         info_dict = _make_result(list(formats_order), extractor='youtube')
         ydl = YDL({'format': '(bestvideo[ext=mp4],bestvideo[ext=webm])[height<=720]+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
         self.assertEqual(downloaded_ids, ['136+141', '247+141'])
 
         info_dict = _make_result(list(formats_order), extractor='youtube')
         ydl = YDL({'format': '(bestvideo[ext=none]/bestvideo[ext=webm])+bestaudio'})
-        yie = YoutubeIE(ydl)
-        yie._sort_formats(info_dict['formats'])
+        ydl.sort_formats(info_dict)
         ydl.process_ie_result(info_dict)
         downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
         self.assertEqual(downloaded_ids, ['248+141'])
@@ -396,16 +383,14 @@ class TestFormatSelection(unittest.TestCase):
         for f1, f2 in zip(formats_order, formats_order[1:]):
             info_dict = _make_result([f1, f2], extractor='youtube')
             ydl = YDL({'format': 'best/bestvideo'})
-            yie = YoutubeIE(ydl)
-            yie._sort_formats(info_dict['formats'])
+            ydl.sort_formats(info_dict)
             ydl.process_ie_result(info_dict)
             downloaded = ydl.downloaded_info_dicts[0]
             self.assertEqual(downloaded['format_id'], f1['format_id'])
 
             info_dict = _make_result([f2, f1], extractor='youtube')
             ydl = YDL({'format': 'best/bestvideo'})
-            yie = YoutubeIE(ydl)
-            yie._sort_formats(info_dict['formats'])
+            ydl.sort_formats(info_dict)
             ydl.process_ie_result(info_dict)
             downloaded = ydl.downloaded_info_dicts[0]
             self.assertEqual(downloaded['format_id'], f1['format_id'])
@@ -480,7 +465,7 @@ class TestFormatSelection(unittest.TestCase):
         for f in formats:
             f['url'] = 'http://_/'
             f['ext'] = 'unknown'
-        info_dict = _make_result(formats)
+        info_dict = _make_result(formats, _format_sort_fields=('id', ))
         ydl = YDL({'format': 'best[filesize<3000]'})
         ydl.process_ie_result(info_dict)

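Every hunk in this file is the same mechanical migration: instead of instantiating `YoutubeIE` only to call the extractor-internal `_sort_formats`, the tests now call `YoutubeDL.sort_formats` with the whole info dict, which also honours an info-dict-level `_format_sort_fields` hint (see the last hunk). A sketch of the new call pattern with an invented formats list:

```python
from yt_dlp import YoutubeDL

# Invented minimal info dict; the real tests build theirs via _make_result()
info_dict = {
    'id': 'test', 'title': 'test',
    'formats': [
        {'format_id': 'low', 'ext': 'mp4', 'height': 480, 'url': 'http://_/'},
        {'format_id': 'high', 'ext': 'mp4', 'height': 1080, 'url': 'http://_/'},
    ],
    '_format_sort_fields': ('height',),  # optional constraint, as in the last hunk
}

ydl = YoutubeDL()
# old: YoutubeIE(ydl)._sort_formats(info_dict['formats'])
ydl.sort_formats(info_dict)  # new: sorting is a YoutubeDL responsibility
print([f['format_id'] for f in info_dict['formats']])
```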
test/test_aes.py

@@ -11,7 +11,6 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 import base64
 
 from yt_dlp.aes import (
-    BLOCK_SIZE_BYTES,
     aes_cbc_decrypt,
     aes_cbc_decrypt_bytes,
     aes_cbc_encrypt,
@@ -103,8 +102,7 @@ class TestAES(unittest.TestCase):
     def test_ecb_encrypt(self):
         data = bytes_to_intlist(self.secret_msg)
-        data += [0x08] * (BLOCK_SIZE_BYTES - len(data) % BLOCK_SIZE_BYTES)
-        encrypted = intlist_to_bytes(aes_ecb_encrypt(data, self.key, self.iv))
+        encrypted = intlist_to_bytes(aes_ecb_encrypt(data, self.key))
         self.assertEqual(
             encrypted,
             b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')

test/test_cookies.py

@@ -277,9 +277,24 @@ class TestLenientSimpleCookie(unittest.TestCase):
             "a=b; invalid; Version=1; c=d",
             {"a": "b", "c": "d"},
         ),
+        (
+            "Reset morsel after invalid to not capture attributes",
+            "a=b; $invalid; $Version=1; c=d",
+            {"a": "b", "c": "d"},
+        ),
         (
             "Continue after non-flag attribute without value",
             "a=b; path; Version=1; c=d",
             {"a": "b", "c": "d"},
         ),
+        (
+            "Allow cookie attributes with `$` prefix",
+            'Customer="WILE_E_COYOTE"; $Version=1; $Secure; $Path=/acme',
+            {"Customer": ("WILE_E_COYOTE", {"version": "1", "secure": True, "path": "/acme"})},
+        ),
+        (
+            "Invalid Morsel keys should not result in an error",
+            "Key=Value; [Invalid]=Value; Another=Value",
+            {"Key": "Value", "Another": "Value"},
+        ),
     )

test/test_download.py

@@ -106,7 +106,7 @@ def generator(test_case, tname):
         params = tc.get('params', {})
         if not info_dict.get('id'):
             raise Exception(f'Test {tname} definition incorrect - "id" key is not present')
-        elif not info_dict.get('ext'):
+        elif not info_dict.get('ext') and info_dict.get('_type', 'video') == 'video':
             if params.get('skip_download') and params.get('ignore_no_formats_error'):
                 continue
             raise Exception(f'Test {tname} definition incorrect - "ext" key must be present to define the output file')
@@ -122,7 +122,8 @@ def generator(test_case, tname):
             params['outtmpl'] = tname + '_' + params['outtmpl']
         if is_playlist and 'playlist' not in test_case:
             params.setdefault('extract_flat', 'in_playlist')
-            params.setdefault('playlistend', test_case.get('playlist_mincount'))
+            params.setdefault('playlistend', test_case.get(
+                'playlist_mincount', test_case.get('playlist_count', -2) + 1))
             params.setdefault('skip_download', True)
 
         ydl = YoutubeDL(params, auto_init=False)
@@ -212,6 +213,8 @@ def generator(test_case, tname):
                 tc_res_dict = res_dict['entries'][tc_num]
                 # First, check test cases' data against extracted data alone
                 expect_info_dict(self, tc_res_dict, tc.get('info_dict', {}))
+                if tc_res_dict.get('_type', 'video') != 'video':
+                    continue
                 # Now, check downloaded file consistency
                 tc_filename = get_tc_filename(tc)
                 if not test_case.get('params', {}).get('skip_download', False):

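The new `playlistend` default is a compact fallback chain worth unpacking: `playlist_mincount` wins when present; otherwise `playlist_count + 1` is used, presumably so one extra entry can be fetched to detect a playlist longer than declared; with neither key the expression bottoms out at `-2 + 1 == -1`, i.e. no end limit. A small sketch of just that arithmetic (the function name is mine, not the test suite's):

```python
def default_playlistend(test_case: dict) -> int:
    # playlist_mincount if given, else playlist_count + 1, else -1 (unlimited)
    return test_case.get(
        'playlist_mincount', test_case.get('playlist_count', -2) + 1)

assert default_playlistend({'playlist_mincount': 5}) == 5
assert default_playlistend({'playlist_count': 10}) == 11
assert default_playlistend({}) == -1
```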
test/test_jsinterp.py

@@ -392,6 +392,11 @@ class TestJSInterpreter(unittest.TestCase):
         ''')
         self.assertEqual(jsi.call_function('x').pattern, r',][}",],()}(\[)')
 
+        jsi = JSInterpreter(R'''
+        function x() { let a=[/[)\\]/]; return a[0]; }
+        ''')
+        self.assertEqual(jsi.call_function('x').pattern, r'[)\\]')
+
     def test_char_code_at(self):
         jsi = JSInterpreter('function x(i){return "test".charCodeAt(i)}')
         self.assertEqual(jsi.call_function('x', 0), 116)

test/test_plugins.py (new file, 73 lines)

@@ -0,0 +1,73 @@
import importlib
import os
import shutil
import sys
import unittest
from pathlib import Path

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

TEST_DATA_DIR = Path(os.path.dirname(os.path.abspath(__file__)), 'testdata')
sys.path.append(str(TEST_DATA_DIR))
importlib.invalidate_caches()

from yt_dlp.plugins import PACKAGE_NAME, directories, load_plugins


class TestPlugins(unittest.TestCase):

    TEST_PLUGIN_DIR = TEST_DATA_DIR / PACKAGE_NAME

    def test_directories_containing_plugins(self):
        self.assertIn(self.TEST_PLUGIN_DIR, map(Path, directories()))

    def test_extractor_classes(self):
        for module_name in tuple(sys.modules):
            if module_name.startswith(f'{PACKAGE_NAME}.extractor'):
                del sys.modules[module_name]
        plugins_ie = load_plugins('extractor', 'IE')

        self.assertIn(f'{PACKAGE_NAME}.extractor.normal', sys.modules.keys())
        self.assertIn('NormalPluginIE', plugins_ie.keys())

        # don't load modules with underscore prefix
        self.assertFalse(
            f'{PACKAGE_NAME}.extractor._ignore' in sys.modules.keys(),
            'loaded module beginning with underscore')
        self.assertNotIn('IgnorePluginIE', plugins_ie.keys())

        # Don't load extractors with underscore prefix
        self.assertNotIn('_IgnoreUnderscorePluginIE', plugins_ie.keys())

        # Don't load extractors not specified in __all__ (if supplied)
        self.assertNotIn('IgnoreNotInAllPluginIE', plugins_ie.keys())
        self.assertIn('InAllPluginIE', plugins_ie.keys())

    def test_postprocessor_classes(self):
        plugins_pp = load_plugins('postprocessor', 'PP')
        self.assertIn('NormalPluginPP', plugins_pp.keys())

    def test_importing_zipped_module(self):
        zip_path = TEST_DATA_DIR / 'zipped_plugins.zip'
        shutil.make_archive(str(zip_path)[:-4], 'zip', str(zip_path)[:-4])
        sys.path.append(str(zip_path))  # add zip to search paths
        importlib.invalidate_caches()  # reset the import caches

        try:
            for plugin_type in ('extractor', 'postprocessor'):
                package = importlib.import_module(f'{PACKAGE_NAME}.{plugin_type}')
                self.assertIn(zip_path / PACKAGE_NAME / plugin_type, map(Path, package.__path__))

            plugins_ie = load_plugins('extractor', 'IE')
            self.assertIn('ZippedPluginIE', plugins_ie.keys())

            plugins_pp = load_plugins('postprocessor', 'PP')
            self.assertIn('ZippedPluginPP', plugins_pp.keys())

        finally:
            sys.path.remove(str(zip_path))
            os.remove(zip_path)
            importlib.invalidate_caches()  # reset the import caches


if __name__ == '__main__':
    unittest.main()

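These tests pin down the namespace-package loading from the plugin rework: plugin code lives in a `yt_dlp_plugins` package (the `PACKAGE_NAME` imported above) anywhere on `sys.path`, including inside a zip, with one sub-package per plugin type. A sketch of the smallest extractor plugin such a layout could contain; the names mirror the test fixtures and the URL scheme is invented:

```python
# Layout (plain directory or zip on sys.path):
#   yt_dlp_plugins/
#       extractor/
#           normal.py          # '_'-prefixed modules are skipped
#       postprocessor/
#           normal.py
#
# yt_dlp_plugins/extractor/normal.py:
from yt_dlp.extractor.common import InfoExtractor


class NormalPluginIE(InfoExtractor):  # collected because the name ends in 'IE'
    _VALID_URL = r'normalplugin:(?P<id>.+)'  # invented scheme for this sketch

    def _real_extract(self, url):
        video_id = self._match_id(url)
        return {'id': video_id, 'title': video_id,
                'url': f'https://example.com/{video_id}.mp4'}


class _IgnoredPluginIE(InfoExtractor):  # '_' prefix: not collected, as tested
    pass
```

Defining `__all__` in the module hides everything not listed, which is what the `InAllPluginIE`/`IgnoreNotInAllPluginIE` assertions above check.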
test/test_postprocessors.py

@@ -16,6 +16,7 @@ from yt_dlp.postprocessor import (
     MetadataFromFieldPP,
     MetadataParserPP,
     ModifyChaptersPP,
+    SponsorBlockPP,
 )
@@ -76,11 +77,15 @@ class TestModifyChaptersPP(unittest.TestCase):
         self._pp = ModifyChaptersPP(YoutubeDL())
 
     @staticmethod
-    def _sponsor_chapter(start, end, cat, remove=False):
-        c = {'start_time': start, 'end_time': end, '_categories': [(cat, start, end)]}
-        if remove:
-            c['remove'] = True
-        return c
+    def _sponsor_chapter(start, end, cat, remove=False, title=None):
+        if title is None:
+            title = SponsorBlockPP.CATEGORIES[cat]
+        return {
+            'start_time': start,
+            'end_time': end,
+            '_categories': [(cat, start, end, title)],
+            **({'remove': True} if remove else {}),
+        }
 
     @staticmethod
     def _chapter(start, end, title=None, remove=False):
@@ -130,6 +135,19 @@ class TestModifyChaptersPP(unittest.TestCase):
             'c', '[SponsorBlock]: Filler Tangent', 'c'])
         self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
 
+    def test_remove_marked_arrange_sponsors_SponsorBlockChapters(self):
+        chapters = self._chapters([70], ['c']) + [
+            self._sponsor_chapter(10, 20, 'chapter', title='sb c1'),
+            self._sponsor_chapter(15, 16, 'chapter', title='sb c2'),
+            self._sponsor_chapter(30, 40, 'preview'),
+            self._sponsor_chapter(50, 60, 'filler')]
+        expected = self._chapters(
+            [10, 15, 16, 20, 30, 40, 50, 60, 70],
+            ['c', '[SponsorBlock]: sb c1', '[SponsorBlock]: sb c1, sb c2', '[SponsorBlock]: sb c1',
+             'c', '[SponsorBlock]: Preview/Recap',
+             'c', '[SponsorBlock]: Filler Tangent', 'c'])
+        self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
+
     def test_remove_marked_arrange_sponsors_UniqueNamesForOverlappingSponsors(self):
         chapters = self._chapters([120], ['c']) + [
             self._sponsor_chapter(10, 45, 'sponsor'), self._sponsor_chapter(20, 40, 'selfpromo'),
@@ -173,7 +191,7 @@ class TestModifyChaptersPP(unittest.TestCase):
         self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
 
     def test_remove_marked_arrange_sponsors_ChapterWithCutHidingSponsor(self):
-        cuts = [self._sponsor_chapter(20, 50, 'selpromo', remove=True)]
+        cuts = [self._sponsor_chapter(20, 50, 'selfpromo', remove=True)]
         chapters = self._chapters([60], ['c']) + [
             self._sponsor_chapter(10, 20, 'intro'),
             self._sponsor_chapter(30, 40, 'sponsor'),
@@ -199,7 +217,7 @@ class TestModifyChaptersPP(unittest.TestCase):
             self._sponsor_chapter(10, 20, 'sponsor'),
             self._sponsor_chapter(20, 30, 'interaction', remove=True),
             self._chapter(30, 40, remove=True),
-            self._sponsor_chapter(40, 50, 'selpromo', remove=True),
+            self._sponsor_chapter(40, 50, 'selfpromo', remove=True),
             self._sponsor_chapter(50, 60, 'interaction')]
         expected = self._chapters([10, 20, 30, 40],
                                   ['c', '[SponsorBlock]: Sponsor',
@@ -282,7 +300,7 @@ class TestModifyChaptersPP(unittest.TestCase):
         chapters = self._chapters([70], ['c']) + [
             self._sponsor_chapter(10, 30, 'sponsor'),
             self._sponsor_chapter(20, 50, 'interaction'),
-            self._sponsor_chapter(30, 50, 'selpromo', remove=True),
+            self._sponsor_chapter(30, 50, 'selfpromo', remove=True),
             self._sponsor_chapter(40, 60, 'sponsor'),
             self._sponsor_chapter(50, 60, 'interaction')]
         expected = self._chapters(

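The `_sponsor_chapter` rewrite reflects a data-model change: SponsorBlock segments are now `(category, start, end, title)` 4-tuples, so 'chapter' segments can carry the submitter-provided name (exercised by the new `SponsorBlockChapters` test) while other categories fall back to the display names in `SponsorBlockPP.CATEGORIES`. A usage sketch of the updated helper shape, with invented times and titles:

```python
from yt_dlp.postprocessor import SponsorBlockPP

def sponsor_chapter(start, end, cat, title=None, remove=False):
    # Same shape as the updated test helper above
    if title is None:
        title = SponsorBlockPP.CATEGORIES[cat]  # e.g. 'filler' -> 'Filler Tangent'
    return {
        'start_time': start,
        'end_time': end,
        '_categories': [(cat, start, end, title)],
        **({'remove': True} if remove else {}),
    }

print(sponsor_chapter(10, 20, 'chapter', title='sb c1'))  # named chapter segment
print(sponsor_chapter(50, 60, 'filler'))                  # falls back to category name
```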
test/test_utils.py

@@ -2,6 +2,7 @@
 # Allow direct execution
 import os
+import re
 import sys
 import unittest
@@ -953,6 +954,85 @@ class TestUtil(unittest.TestCase):
         )
         self.assertEqual(escape_url('http://vimeo.com/56015672#at=0'), 'http://vimeo.com/56015672#at=0')
 
+    def test_js_to_json_vars_strings(self):
+        self.assertDictEqual(
+            json.loads(js_to_json(
+                '''{
+                    'null': a,
+                    'nullStr': b,
+                    'true': c,
+                    'trueStr': d,
+                    'false': e,
+                    'falseStr': f,
+                    'unresolvedVar': g,
+                }''',
+                {
+                    'a': 'null',
+                    'b': '"null"',
+                    'c': 'true',
+                    'd': '"true"',
+                    'e': 'false',
+                    'f': '"false"',
+                    'g': 'var',
+                }
+            )),
+            {
+                'null': None,
+                'nullStr': 'null',
+                'true': True,
+                'trueStr': 'true',
+                'false': False,
+                'falseStr': 'false',
+                'unresolvedVar': 'var'
+            }
+        )
+
+        self.assertDictEqual(
+            json.loads(js_to_json(
+                '''{
+                    'int': a,
+                    'intStr': b,
+                    'float': c,
+                    'floatStr': d,
+                }''',
+                {
+                    'a': '123',
+                    'b': '"123"',
+                    'c': '1.23',
+                    'd': '"1.23"',
+                }
+            )),
+            {
+                'int': 123,
+                'intStr': '123',
+                'float': 1.23,
+                'floatStr': '1.23',
+            }
+        )
+
+        self.assertDictEqual(
+            json.loads(js_to_json(
+                '''{
+                    'object': a,
+                    'objectStr': b,
+                    'array': c,
+                    'arrayStr': d,
+                }''',
+                {
+                    'a': '{}',
+                    'b': '"{}"',
+                    'c': '[]',
+                    'd': '"[]"',
+                }
+            )),
+            {
+                'object': {},
+                'objectStr': '{}',
+                'array': [],
+                'arrayStr': '[]',
+            }
+        )
+
     def test_js_to_json_realworld(self):
         inp = '''{
             'clip':{'provider':'pseudo'}
@@ -1099,6 +1179,12 @@ class TestUtil(unittest.TestCase):
         on = js_to_json('[1,//{},\n2]')
         self.assertEqual(json.loads(on), [1, 2])
 
+        on = js_to_json(R'"\^\$\#"')
+        self.assertEqual(json.loads(on), R'^$#', msg='Unnecessary escapes should be stripped')
+
+        on = js_to_json('\'"\\""\'')
+        self.assertEqual(json.loads(on), '"""', msg='Unnecessary quote escape should be escaped')
+
     def test_js_to_json_malformed(self):
         self.assertEqual(js_to_json('42a1'), '42"a1"')
         self.assertEqual(js_to_json('42a-1'), '42"a"-1')
@@ -1678,6 +1764,9 @@ Line 1
         self.assertEqual(list(get_elements_text_and_html_by_attribute('class', 'foo', html)), [])
         self.assertEqual(list(get_elements_text_and_html_by_attribute('class', 'no-such-foo', html)), [])
 
+        self.assertEqual(list(get_elements_text_and_html_by_attribute(
+            'class', 'foo', '<a class="foo">nice</a><span class="foo">nice</span>', tag='a')), [('nice', '<a class="foo">nice</a>')])
+
     GET_ELEMENT_BY_TAG_TEST_STRING = '''
     random text lorem ipsum</p>
     <div>
@@ -1864,6 +1953,8 @@ Line 1
             vcodecs=[None], acodecs=[None], vexts=['webm'], aexts=['m4a']), 'mkv')
         self.assertEqual(get_compatible_ext(
             vcodecs=[None], acodecs=[None], vexts=['webm'], aexts=['webm']), 'webm')
+        self.assertEqual(get_compatible_ext(
+            vcodecs=[None], acodecs=[None], vexts=['webm'], aexts=['weba']), 'webm')
 
         self.assertEqual(get_compatible_ext(
             vcodecs=['h264'], acodecs=['mp4a'], vexts=['mov'], aexts=['m4a']), 'mp4')
@@ -1890,6 +1981,7 @@ Line 1
                 {'index': 2},
                 {'index': 3},
             ),
+            'dict': {},
         }
 
         # Test base functionality
@@ -1926,11 +2018,15 @@ Line 1
         # Test alternative paths
         self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'str'), 'str',
-                         msg='multiple `path_list` should be treated as alternative paths')
+                         msg='multiple `paths` should be treated as alternative paths')
         self.assertEqual(traverse_obj(_TEST_DATA, 'str', 100), 'str',
                          msg='alternatives should exit early')
         self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'fail'), None,
                          msg='alternatives should return `default` if exhausted')
+        self.assertEqual(traverse_obj(_TEST_DATA, (..., 'fail'), 100), 100,
+                         msg='alternatives should track their own branching return')
+        self.assertEqual(traverse_obj(_TEST_DATA, ('dict', ...), ('data', ...)), list(_TEST_DATA['data']),
+                         msg='alternatives on empty objects should search further')
 
         # Test branch and path nesting
         self.assertEqual(traverse_obj(_TEST_DATA, ('urls', (3, 0), 'url')), ['https://www.example.com/0'],
@@ -1963,8 +2059,16 @@ Line 1
         self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', ((1, ('fail', 'url')), (0, 'url')))}),
                          {0: ['https://www.example.com/1', 'https://www.example.com/0']},
                          msg='tripple nesting in dict path should be treated as branches')
-        self.assertEqual(traverse_obj({}, {0: 1}, default=...), {0: ...},
-                         msg='do not remove `None` values when dict key')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}), {},
+                         msg='remove `None` values when dict key')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}, default=...), {0: ...},
+                         msg='do not remove `None` values if `default`')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {0: {}},
+                         msg='do not remove empty values when dict key')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=...), {0: {}},
+                         msg='do not remove empty values when dict key and a default')
+        self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', ...)}), {0: []},
+                         msg='if branch in dict key not successful, return `[]`')
 
         # Testing default parameter behavior
         _DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
@@ -1981,7 +2085,13 @@ Line 1
         self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', 10)), None,
                          msg='`IndexError` should result in `default`')
         self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=1), 1,
-                         msg='if branched but not successfull return `default`, not `[]`')
+                         msg='if branched but not successful return `default` if defined, not `[]`')
+        self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=None), None,
+                         msg='if branched but not successful return `default` even if `default` is `None`')
+        self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail')), [],
+                         msg='if branched but not successful return `[]`, not `default`')
+        self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', ...)), [],
+                         msg='if branched but object is empty return `[]`, not `default`')
 
         # Testing expected_type behavior
         _EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
@@ -2061,6 +2171,25 @@ Line 1
         with self.assertRaises(TypeError, msg='too many params should result in error'):
             traverse_obj(_IS_USER_INPUT_DATA, ('range8', ':::'), is_user_input=True)
 
+        # Test re.Match as input obj
+        mobj = re.fullmatch(r'0(12)(?P<group>3)(4)?', '0123')
+        self.assertEqual(traverse_obj(mobj, ...), [x for x in mobj.groups() if x is not None],
+                         msg='`...` on a `re.Match` should give its `groups()`')
+        self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 2)), ['0123', '3'],
+                         msg='function on a `re.Match` should give groupno, value starting at 0')
+        self.assertEqual(traverse_obj(mobj, 'group'), '3',
+                         msg='str key on a `re.Match` should give group with that name')
self.assertEqual(traverse_obj(mobj, 2), '3',
msg='int key on a `re.Match` should give group with that name')
self.assertEqual(traverse_obj(mobj, 'gRoUp', casesense=False), '3',
msg='str key on a `re.Match` should respect casesense')
self.assertEqual(traverse_obj(mobj, 'fail'), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 'gRoUpS', casesense=False), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 8), None,
msg='failing int key on a `re.Match` should return `default`')
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()

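The new re.Match handling exercised above makes traverse_obj usable directly on regex matches. A minimal sketch mirroring those assertions (assumes yt-dlp 2023.01.02 or later):

import re
from yt_dlp.utils import traverse_obj

mobj = re.fullmatch(r'0(12)(?P<group>3)(4)?', '0123')
traverse_obj(mobj, ...)      # ['12', '3'] - all non-None groups
traverse_obj(mobj, 'group')  # '3' - lookup by group name
traverse_obj(mobj, 2)        # '3' - lookup by group number
traverse_obj(mobj, 8)        # None - missing groups fall back to `default`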
View File

@@ -10,6 +10,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import FakeYDL, is_download_test
from yt_dlp.extractor import YoutubeIE, YoutubeTabIE
+ from yt_dlp.utils import ExtractorError
@is_download_test
@@ -53,6 +54,18 @@ class TestYoutubeLists(unittest.TestCase):
self.assertEqual(video['duration'], 10)
self.assertEqual(video['uploader'], 'Philipp Hagemeister')
+ def test_youtube_channel_no_uploads(self):
+ dl = FakeYDL()
+ dl.params['extract_flat'] = True
+ ie = YoutubeTabIE(dl)
+ # no uploads
+ with self.assertRaisesRegex(ExtractorError, r'no uploads'):
+ ie.extract('https://www.youtube.com/channel/UC2yXPzFejc422buOIzn_0CA')
+ # no uploads and no UCID given
+ with self.assertRaisesRegex(ExtractorError, r'no uploads'):
+ ie.extract('https://www.youtube.com/news')
if __name__ == '__main__':
unittest.main()

View File

@@ -130,6 +130,10 @@ _NSIG_TESTS = [
'https://www.youtube.com/s/player/5a3b6271/player_ias.vflset/en_US/base.js',
'B2j7f_UPT4rfje85Lu_e', 'm5DmNymaGQ5RdQ',
),
+ (
+ 'https://www.youtube.com/s/player/7a062b77/player_ias.vflset/en_US/base.js',
+ 'NRcE3y3mVtm_cV-W', 'VbsCYUATvqlt5w',
+ ),
]

View File

@@ -0,0 +1,5 @@
from yt_dlp.extractor.common import InfoExtractor
class IgnorePluginIE(InfoExtractor):
pass

View File

@@ -0,0 +1,12 @@
from yt_dlp.extractor.common import InfoExtractor
class IgnoreNotInAllPluginIE(InfoExtractor):
pass
class InAllPluginIE(InfoExtractor):
pass
__all__ = ['InAllPluginIE']

View File

@@ -0,0 +1,9 @@
from yt_dlp.extractor.common import InfoExtractor
class NormalPluginIE(InfoExtractor):
pass
class _IgnoreUnderscorePluginIE(InfoExtractor):
pass

View File

@@ -0,0 +1,5 @@
from yt_dlp.postprocessor.common import PostProcessor
class NormalPluginPP(PostProcessor):
pass

View File

@@ -0,0 +1,5 @@
from yt_dlp.extractor.common import InfoExtractor
class ZippedPluginIE(InfoExtractor):
pass

View File

@@ -0,0 +1,5 @@
from yt_dlp.postprocessor.common import PostProcessor
class ZippedPluginPP(PostProcessor):
pass

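The stubs above double as a template for user plugins: underscore-prefixed classes and names left out of an explicit __all__ are ignored, exactly as these test fixtures check. A minimal sketch of a standalone plugin (module path and class are illustrative, not part of this diff; it would be discovered as a namespace package):

# yt_dlp_plugins/extractor/example.py (hypothetical path)
from yt_dlp.extractor.common import InfoExtractor

class ExamplePluginIE(InfoExtractor):
    _VALID_URL = r'https?://example\.com/video/(?P<id>\d+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        # a real plugin would extract actual formats/metadata here
        return {'id': video_id, 'title': f'Example {video_id}',
                'url': f'https://example.com/{video_id}.mp4'}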
View File

@@ -32,7 +32,8 @@ from .extractor import gen_extractor_classes, get_info_extractor
from .extractor.common import UnsupportedURLIE
from .extractor.openload import PhantomJSwrapper
from .minicurses import format_text
- from .postprocessor import _PLUGIN_CLASSES as plugin_postprocessors
+ from .plugins import directories as plugin_directories
+ from .postprocessor import _PLUGIN_CLASSES as plugin_pps
from .postprocessor import (
EmbedThumbnailPP,
FFmpegFixupDuplicateMoovPP,
@@ -67,6 +68,7 @@ from .utils import (
EntryNotInPlaylist,
ExistingVideoReached,
ExtractorError,
+ FormatSorter,
GeoRestrictedError,
HEADRequest,
ISO3166Utils,
@@ -316,6 +318,7 @@ class YoutubeDL:
If not provided and the key is encrypted, yt-dlp will ask interactively
prefer_insecure: Use HTTP instead of HTTPS to retrieve information.
(Only supported by some extractors)
+ enable_file_urls: Enable file:// URLs. This is disabled by default for security reasons.
http_headers: A dictionary of custom headers to be used for all requests
proxy: URL of the proxy server to use
geo_verification_proxy: URL of the proxy to use for IP address verification
@@ -547,8 +550,8 @@ class YoutubeDL:
_format_fields = {
# NB: Keep in sync with the docstring of extractor/common.py
'url', 'manifest_url', 'manifest_stream_number', 'ext', 'format', 'format_id', 'format_note',
- 'width', 'height', 'resolution', 'dynamic_range', 'tbr', 'abr', 'acodec', 'asr', 'audio_channels',
- 'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx',
+ 'width', 'height', 'aspect_ratio', 'resolution', 'dynamic_range', 'tbr', 'abr', 'acodec', 'asr', 'audio_channels',
+ 'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx', 'rows', 'columns',
'player_url', 'protocol', 'fragment_base_url', 'fragments', 'is_from_start',
'preference', 'language', 'language_preference', 'quality', 'source_preference',
'http_headers', 'stretched_ratio', 'no_resume', 'has_drm', 'downloader_options',
@@ -616,46 +619,6 @@ class YoutubeDL:
' If you experience any issues while using this option, '
f'{self._format_err("DO NOT", self.Styles.ERROR)} open a bug report')
- def check_deprecated(param, option, suggestion):
- if self.params.get(param) is not None:
- self.report_warning(f'{option} is deprecated. Use {suggestion} instead')
- return True
- return False
- if check_deprecated('cn_verification_proxy', '--cn-verification-proxy', '--geo-verification-proxy'):
- if self.params.get('geo_verification_proxy') is None:
- self.params['geo_verification_proxy'] = self.params['cn_verification_proxy']
- check_deprecated('autonumber', '--auto-number', '-o "%(autonumber)s-%(title)s.%(ext)s"')
- check_deprecated('usetitle', '--title', '-o "%(title)s-%(id)s.%(ext)s"')
- check_deprecated('useid', '--id', '-o "%(id)s.%(ext)s"')
- for msg in self.params.get('_warnings', []):
- self.report_warning(msg)
- for msg in self.params.get('_deprecation_warnings', []):
- self.deprecated_feature(msg)
- self.params['compat_opts'] = set(self.params.get('compat_opts', ()))
- if 'list-formats' in self.params['compat_opts']:
- self.params['listformats_table'] = False
- if 'overwrites' not in self.params and self.params.get('nooverwrites') is not None:
- # nooverwrites was unnecessarily changed to overwrites
- # in 0c3d0f51778b153f65c21906031c2e091fcfb641
- # This ensures compatibility with both keys
- self.params['overwrites'] = not self.params['nooverwrites']
- elif self.params.get('overwrites') is None:
- self.params.pop('overwrites', None)
- else:
- self.params['nooverwrites'] = not self.params['overwrites']
- self.params.setdefault('forceprint', {})
- self.params.setdefault('print_to_file', {})
- # Compatibility with older syntax
- if not isinstance(params['forceprint'], dict):
- self.params['forceprint'] = {'video': params['forceprint']}
if self.params.get('bidi_workaround', False):
try:
import pty
@@ -676,9 +639,57 @@ class YoutubeDL:
else:
raise
+ self.params['compat_opts'] = set(self.params.get('compat_opts', ()))
+ if auto_init and auto_init != 'no_verbose_header':
+ self.print_debug_header()
+ def check_deprecated(param, option, suggestion):
+ if self.params.get(param) is not None:
+ self.report_warning(f'{option} is deprecated. Use {suggestion} instead')
+ return True
+ return False
+ if check_deprecated('cn_verification_proxy', '--cn-verification-proxy', '--geo-verification-proxy'):
+ if self.params.get('geo_verification_proxy') is None:
+ self.params['geo_verification_proxy'] = self.params['cn_verification_proxy']
+ check_deprecated('autonumber', '--auto-number', '-o "%(autonumber)s-%(title)s.%(ext)s"')
+ check_deprecated('usetitle', '--title', '-o "%(title)s-%(id)s.%(ext)s"')
+ check_deprecated('useid', '--id', '-o "%(id)s.%(ext)s"')
+ for msg in self.params.get('_warnings', []):
+ self.report_warning(msg)
+ for msg in self.params.get('_deprecation_warnings', []):
+ self.deprecated_feature(msg)
+ if 'list-formats' in self.params['compat_opts']:
+ self.params['listformats_table'] = False
+ if 'overwrites' not in self.params and self.params.get('nooverwrites') is not None:
+ # nooverwrites was unnecessarily changed to overwrites
+ # in 0c3d0f51778b153f65c21906031c2e091fcfb641
+ # This ensures compatibility with both keys
+ self.params['overwrites'] = not self.params['nooverwrites']
+ elif self.params.get('overwrites') is None:
+ self.params.pop('overwrites', None)
+ else:
+ self.params['nooverwrites'] = not self.params['overwrites']
+ if self.params.get('simulate') is None and any((
+ self.params.get('list_thumbnails'),
+ self.params.get('listformats'),
+ self.params.get('listsubtitles'),
+ )):
+ self.params['simulate'] = 'list_only'
+ self.params.setdefault('forceprint', {})
+ self.params.setdefault('print_to_file', {})
+ # Compatibility with older syntax
+ if not isinstance(params['forceprint'], dict):
+ self.params['forceprint'] = {'video': params['forceprint']}
if auto_init:
- if auto_init != 'no_verbose_header':
- self.print_debug_header()
self.add_default_info_extractors()
if (sys.platform != 'win32'
@@ -1059,7 +1070,7 @@ class YoutubeDL:
# correspondingly that is not what we want since we need to keep
# '%%' intact for template dict substitution step. Working around
# with boundary-alike separator hack.
- sep = ''.join([random.choice(ascii_letters) for _ in range(32)])
+ sep = ''.join(random.choices(ascii_letters, k=32))
outtmpl = outtmpl.replace('%%', f'%{sep}%').replace('$$', f'${sep}$')
# outtmpl should be expand_path'ed before template dict substitution
@@ -1249,7 +1260,7 @@ class YoutubeDL:
elif fmt[-1] == 'j': # json
value, fmt = json.dumps(
value, default=_dumpjson_default,
- indent=4 if '#' in flags else None, ensure_ascii=False), str_fmt
+ indent=4 if '#' in flags else None, ensure_ascii='+' not in flags), str_fmt
elif fmt[-1] == 'h': # html
value, fmt = escapeHTML(str(value)), str_fmt
elif fmt[-1] == 'q': # quoted
@@ -1349,11 +1360,19 @@ class YoutubeDL:
return self.get_output_path(dir_type, filename)
def _match_entry(self, info_dict, incomplete=False, silent=False):
- """ Returns None if the file should be downloaded """
+ """Returns None if the file should be downloaded"""
+ _type = info_dict.get('_type', 'video')
+ assert incomplete or _type == 'video', 'Only video result can be considered complete'
video_title = info_dict.get('title', info_dict.get('id', 'entry'))
def check_filter():
+ if _type in ('playlist', 'multi_video'):
+ return
+ elif _type in ('url', 'url_transparent') and not try_call(
+ lambda: self.get_info_extractor(info_dict['ie_key']).is_single_video(info_dict['url'])):
+ return
if 'title' in info_dict:
# This can happen when we're just evaluating the playlist
title = info_dict['title']
@@ -1365,6 +1384,7 @@ class YoutubeDL:
if rejecttitle:
if re.search(rejecttitle, title, re.IGNORECASE):
return '"' + title + '" title matched reject pattern "' + rejecttitle + '"'
date = info_dict.get('upload_date')
if date is not None:
dateRange = self.params.get('daterange', DateRange())
@@ -1608,8 +1628,8 @@ class YoutubeDL:
if result_type in ('url', 'url_transparent'):
ie_result['url'] = sanitize_url(
ie_result['url'], scheme='http' if self.params.get('prefer_insecure') else 'https')
- if ie_result.get('original_url'):
- extra_info.setdefault('original_url', ie_result['original_url'])
+ if ie_result.get('original_url') and not extra_info.get('original_url'):
+ extra_info = {'original_url': ie_result['original_url'], **extra_info}
extract_flat = self.params.get('extract_flat', False)
if ((extract_flat == 'in_playlist' and 'playlist' in extra_info)
@@ -1621,6 +1641,7 @@ class YoutubeDL:
self.add_default_extra_info(info_copy, ie, ie_result['url'])
self.add_extra_info(info_copy, extra_info)
info_copy, _ = self.pre_process(info_copy)
+ self._fill_common_fields(info_copy, False)
self.__forced_printings(info_copy, self.prepare_filename(info_copy), incomplete=True)
self._raise_pending_errors(info_copy)
if self.params.get('force_write_download_archive', False):
@@ -1807,7 +1828,7 @@ class YoutubeDL:
elif self.params.get('playlistrandom'):
random.shuffle(entries)
- self.to_screen(f'[{ie_result["extractor"]}] Playlist {title}: Downloading {n_entries} videos'
+ self.to_screen(f'[{ie_result["extractor"]}] Playlist {title}: Downloading {n_entries} items'
f'{format_field(ie_result, "playlist_count", " of %s")}')
keep_resolved_entries = self.params.get('extract_flat') != 'discard'
@@ -1840,14 +1861,13 @@ class YoutubeDL:
resolved_entries[i] = (playlist_index, NO_DEFAULT)
continue
- self.to_screen('[download] Downloading video %s of %s' % (
+ self.to_screen('[download] Downloading item %s of %s' % (
self._format_screen(i + 1, self.Styles.ID), self._format_screen(n_entries, self.Styles.EMPHASIS)))
- extra.update({
+ entry_result = self.__process_iterable_entry(entry, download, collections.ChainMap({
'playlist_index': playlist_index,
'playlist_autonumber': i + 1,
- })
- entry_result = self.__process_iterable_entry(entry, download, extra)
+ }, extra))
if not entry_result:
failures += 1
if failures >= max_failures:
@@ -1858,8 +1878,11 @@ class YoutubeDL:
resolved_entries[i] = (playlist_index, entry_result)
# Update with processed data
- ie_result['requested_entries'] = [i for i, e in resolved_entries if e is not NO_DEFAULT]
ie_result['entries'] = [e for _, e in resolved_entries if e is not NO_DEFAULT]
+ ie_result['requested_entries'] = [i for i, e in resolved_entries if e is not NO_DEFAULT]
+ if ie_result['requested_entries'] == try_call(lambda: list(range(1, ie_result['playlist_count'] + 1))):
+ # Do not set for full playlist
+ ie_result.pop('requested_entries')
# Write the updated info to json
if _infojson_written is True and self._write_info_json(
@@ -2165,6 +2188,7 @@ class YoutubeDL:
'vcodec': the_only_video.get('vcodec'),
'vbr': the_only_video.get('vbr'),
'stretched_ratio': the_only_video.get('stretched_ratio'),
+ 'aspect_ratio': the_only_video.get('aspect_ratio'),
})
if the_only_audio:
@@ -2379,10 +2403,9 @@ class YoutubeDL:
else:
info_dict['thumbnails'] = thumbnails
- def _fill_common_fields(self, info_dict, is_video=True):
+ def _fill_common_fields(self, info_dict, final=True):
# TODO: move sanitization here
- if is_video:
- # playlists are allowed to lack "title"
+ if final:
title = info_dict.get('title', NO_DEFAULT)
if title is NO_DEFAULT:
raise ExtractorError('Missing "title" field in extractor result',
@@ -2432,7 +2455,7 @@ class YoutubeDL:
# Auto generate title fields corresponding to the *_number fields when missing
# in order to always have clean titles. This is very common for TV series.
for field in ('chapter', 'season', 'episode'):
- if info_dict.get('%s_number' % field) is not None and not info_dict.get(field):
+ if final and info_dict.get('%s_number' % field) is not None and not info_dict.get(field):
info_dict[field] = '%s %d' % (field.capitalize(), info_dict['%s_number' % field])
def _raise_pending_errors(self, info):
@@ -2440,6 +2463,18 @@ class YoutubeDL:
if err:
self.report_error(err, tb=False)
+ def sort_formats(self, info_dict):
+ formats = self._get_formats(info_dict)
+ if not formats:
+ return
+ # Backward compatibility with InfoExtractor._sort_formats
+ field_preference = formats[0].pop('__sort_fields', None)
+ if field_preference:
+ info_dict['_format_sort_fields'] = field_preference
+ formats.sort(key=FormatSorter(
+ self, info_dict.get('_format_sort_fields', [])).calculate_preference)
def process_video_result(self, info_dict, download=True):
assert info_dict.get('_type', 'video') == 'video'
self._num_videos += 1
@@ -2525,11 +2560,8 @@ class YoutubeDL:
info_dict['requested_subtitles'] = self.process_subtitles(
info_dict['id'], subtitles, automatic_captions)
- if info_dict.get('formats') is None:
- # There's only one format available
- formats = [info_dict]
- else:
- formats = info_dict['formats']
+ self.sort_formats(info_dict)
+ formats = self._get_formats(info_dict)
# or None ensures --clean-infojson removes it
info_dict['_has_drm'] = any(f.get('has_drm') for f in formats) or None
@@ -2612,6 +2644,8 @@ class YoutubeDL:
format['resolution'] = self.format_resolution(format, default=None)
if format.get('dynamic_range') is None and format.get('vcodec') != 'none':
format['dynamic_range'] = 'SDR'
+ if format.get('aspect_ratio') is None:
+ format['aspect_ratio'] = try_call(lambda: round(format['width'] / format['height'], 2))
if (info_dict.get('duration') and format.get('tbr')
and not format.get('filesize') and not format.get('filesize_approx')):
format['filesize_approx'] = int(info_dict['duration'] * format['tbr'] * (1024 / 8))
@@ -2644,10 +2678,9 @@ class YoutubeDL:
info_dict, _ = self.pre_process(info_dict, 'after_filter')
# The pre-processors may have modified the formats
- formats = info_dict.get('formats', [info_dict])
+ formats = self._get_formats(info_dict)
- list_only = self.params.get('simulate') is None and (
- self.params.get('list_thumbnails') or self.params.get('listformats') or self.params.get('listsubtitles'))
+ list_only = self.params.get('simulate') == 'list_only'
interactive_format_selection = not list_only and self.format_selector == '-'
if self.params.get('list_thumbnails'):
self.list_thumbnails(info_dict)
@@ -2724,7 +2757,8 @@ class YoutubeDL:
if chapter or offset:
new_info.update({
'section_start': offset + chapter.get('start_time', 0),
- 'section_end': end_time if end_time < offset + duration else None,
+ # duration may not be accurate. So allow deviations <1sec
+ 'section_end': end_time if end_time <= offset + duration + 1 else None,
'section_title': chapter.get('title'),
'section_number': chapter.get('index'),
})
@@ -2938,14 +2972,22 @@ class YoutubeDL:
if 'format' not in info_dict and 'ext' in info_dict:
info_dict['format'] = info_dict['ext']
+ # This is mostly just for backward compatibility of process_info
+ # As a side-effect, this allows for format-specific filters
if self._match_entry(info_dict) is not None:
info_dict['__write_download_archive'] = 'ignore'
return
# Does nothing under normal operation - for backward compatibility of process_info
self.post_extract(info_dict)
+ def replace_info_dict(new_info):
+ nonlocal info_dict
+ if new_info == info_dict:
+ return
+ info_dict.clear()
+ info_dict.update(new_info)
+ new_info, _ = self.pre_process(info_dict, 'video')
+ replace_info_dict(new_info)
self._num_downloads += 1
# info_dict['_filename'] needs to be set for backward compatibility
@@ -3059,13 +3101,6 @@ class YoutubeDL:
for link_type, should_write in write_links.items()):
return
- def replace_info_dict(new_info):
- nonlocal info_dict
- if new_info == info_dict:
- return
- info_dict.clear()
- info_dict.update(new_info)
new_info, files_to_move = self.pre_process(info_dict, 'before_dl', files_to_move)
replace_info_dict(new_info)
@@ -3092,7 +3127,7 @@ class YoutubeDL:
fd, success = None, True
if info_dict.get('protocol') or info_dict.get('url'):
fd = get_suitable_downloader(info_dict, self.params, to_stdout=temp_filename == '-')
- if fd is not FFmpegFD and (
+ if fd is not FFmpegFD and 'no-direct-merge' not in self.params['compat_opts'] and (
info_dict.get('section_start') or info_dict.get('section_end')):
msg = ('This format cannot be partially downloaded' if FFmpegFD.available()
else 'You have requested downloading the video partially, but ffmpeg is not installed')
@@ -3357,6 +3392,7 @@ class YoutubeDL:
reject = lambda k, v: v is None or k.startswith('__') or k in {
'requested_downloads', 'requested_formats', 'requested_subtitles', 'requested_entries',
'entries', 'filepath', '_filename', 'infojson_filename', 'original_url', 'playlist_autonumber',
+ '_format_sort_fields',
}
else:
reject = lambda k, v: False
@@ -3426,7 +3462,8 @@ class YoutubeDL:
return infodict
def run_all_pps(self, key, info, *, additional_pps=None):
- self._forceprint(key, info)
+ if key != 'video':
+ self._forceprint(key, info)
for pp in (additional_pps or []) + self._pps[key]:
info = self.run_pp(pp, info)
return info
@@ -3571,11 +3608,17 @@ class YoutubeDL:
res += '~' + format_bytes(fdict['filesize_approx'])
return res
- def render_formats_table(self, info_dict):
- if not info_dict.get('formats') and not info_dict.get('url'):
- return None
+ def _get_formats(self, info_dict):
+ if info_dict.get('formats') is None:
+ if info_dict.get('url') and info_dict.get('_type', 'video') == 'video':
+ return [info_dict]
+ return []
+ return info_dict['formats']
- formats = info_dict.get('formats', [info_dict])
+ def render_formats_table(self, info_dict):
+ formats = self._get_formats(info_dict)
+ if not formats:
+ return
if not self.params.get('listformats_table', True) is not False:
table = [
[
@@ -3583,7 +3626,7 @@ class YoutubeDL:
format_field(f, 'ext'),
self.format_resolution(f),
self._format_note(f)
- ] for f in formats if f.get('preference') is None or f['preference'] >= -1000]
+ ] for f in formats if (f.get('preference') or 0) >= -1000]
return render_table(['format code', 'extension', 'resolution', 'note'], table, extra_gap=1)
def simplified_codec(f, field):
@@ -3689,7 +3732,10 @@ class YoutubeDL:
# These imports can be slow. So import them only as needed
from .extractor.extractors import _LAZY_LOADER
- from .extractor.extractors import _PLUGIN_CLASSES as plugin_extractors
+ from .extractor.extractors import (
+ _PLUGIN_CLASSES as plugin_ies,
+ _PLUGIN_OVERRIDES as plugin_ie_overrides
+ )
def get_encoding(stream):
ret = str(getattr(stream, 'encoding', 'missing (%s)' % type(stream).__name__))
@@ -3725,15 +3771,15 @@ class YoutubeDL:
'' if source == 'unknown' else f'({source})',
'' if _IN_CLI else 'API',
delim=' '))
+ if not _IN_CLI:
+ write_debug(f'params: {self.params}')
if not _LAZY_LOADER:
if os.environ.get('YTDLP_NO_LAZY_EXTRACTORS'):
write_debug('Lazy loading extractors is forcibly disabled')
else:
write_debug('Lazy loading extractors is disabled')
- if plugin_extractors or plugin_postprocessors:
- write_debug('Plugins: %s' % [
- '%s%s' % (klass.__name__, '' if klass.__name__ == name else f' as {name}')
- for name, klass in itertools.chain(plugin_extractors.items(), plugin_postprocessors.items())])
if self.params['compat_opts']:
write_debug('Compatibility options: %s' % ', '.join(self.params['compat_opts']))
@@ -3767,6 +3813,21 @@ class YoutubeDL:
proxy_map.update(handler.proxies)
write_debug(f'Proxy map: {proxy_map}')
+ for plugin_type, plugins in {'Extractor': plugin_ies, 'Post-Processor': plugin_pps}.items():
+ display_list = ['%s%s' % (
+ klass.__name__, '' if klass.__name__ == name else f' as {name}')
+ for name, klass in plugins.items()]
+ if plugin_type == 'Extractor':
+ display_list.extend(f'{plugins[-1].IE_NAME.partition("+")[2]} ({parent.__name__})'
+ for parent, plugins in plugin_ie_overrides.items())
+ if not display_list:
+ continue
+ write_debug(f'{plugin_type} Plugins: {", ".join(sorted(display_list))}')
+ plugin_dirs = plugin_directories()
+ if plugin_dirs:
+ write_debug(f'Plugin directories: {plugin_dirs}')
# Not implemented
if False and self.params.get('call_home'):
ipaddr = self.urlopen('https://yt-dl.org/ip').read().decode()
@@ -3816,9 +3877,12 @@ class YoutubeDL:
# https://github.com/ytdl-org/youtube-dl/issues/8227)
file_handler = urllib.request.FileHandler()
- def file_open(*args, **kwargs):
- raise urllib.error.URLError('file:// scheme is explicitly disabled in yt-dlp for security reasons')
- file_handler.file_open = file_open
+ if not self.params.get('enable_file_urls'):
+ def file_open(*args, **kwargs):
+ raise urllib.error.URLError(
+ 'file:// URLs are explicitly disabled in yt-dlp for security reasons. '
+ 'Use --enable-file-urls to enable at your own risk.')
+ file_handler.file_open = file_open
opener = urllib.request.build_opener(
proxy_handler, https_handler, cookie_processor, ydlh, redirect_handler, data_handler, file_handler)
@@ -3880,7 +3944,7 @@ class YoutubeDL:
elif not self.params.get('overwrites', True) and os.path.exists(descfn):
self.to_screen(f'[info] {label.title()} description is already present')
elif ie_result.get('description') is None:
- self.report_warning(f'There\'s no {label} description to write')
+ self.to_screen(f'[info] There\'s no {label} description to write')
return False
else:
try:
@@ -3896,15 +3960,18 @@ class YoutubeDL:
''' Write subtitles to file and return list of (sub_filename, final_sub_filename); or None if error'''
ret = []
subtitles = info_dict.get('requested_subtitles')
- if not subtitles or not (self.params.get('writesubtitles') or self.params.get('writeautomaticsub')):
+ if not (self.params.get('writesubtitles') or self.params.get('writeautomaticsub')):
# subtitles download errors are already managed as troubles in relevant IE
# that way it will silently go on when used with unsupporting IE
return ret
+ elif not subtitles:
+ self.to_screen('[info] There\'s no subtitles for the requested languages')
+ return ret
sub_filename_base = self.prepare_filename(info_dict, 'subtitle')
if not sub_filename_base:
self.to_screen('[info] Skipping writing video subtitles')
return ret
for sub_lang, sub_info in subtitles.items():
sub_format = sub_info['ext']
sub_filename = subtitles_filename(filename, sub_lang, sub_format, info_dict.get('ext'))
@@ -3951,6 +4018,9 @@ class YoutubeDL:
thumbnails, ret = [], []
if write_all or self.params.get('writethumbnail', False):
thumbnails = info_dict.get('thumbnails') or []
+ if not thumbnails:
+ self.to_screen(f'[info] There\'s no {label} thumbnails to download')
+ return ret
multiple = write_all and len(thumbnails) > 1
if thumb_filename_base is None:

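For API users, the enable_file_urls parameter added above is the programmatic twin of --enable-file-urls. A hedged sketch (the local path is made up):

from yt_dlp import YoutubeDL

# file:// downloads are refused by default; opting in is at your own risk
with YoutubeDL({'enable_file_urls': True}) as ydl:
    ydl.download(['file:///tmp/sample.mp4'])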
View File

@@ -16,11 +16,9 @@ import sys
from .compat import compat_shlex_quote
from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS
- from .downloader import FileDownloader
from .downloader.external import get_external_downloader
from .extractor import list_extractor_classes
from .extractor.adobepass import MSO_INFO
- from .extractor.common import InfoExtractor
from .options import parseOpts
from .postprocessor import (
FFmpegExtractAudioPP,
@@ -40,6 +38,7 @@ from .utils import (
DateRange,
DownloadCancelled,
DownloadError,
+ FormatSorter,
GeoUtils,
PlaylistEntries,
SameFileError,
@@ -50,6 +49,7 @@ from .utils import (
format_field,
int_or_none,
match_filter_func,
+ parse_bytes,
parse_duration,
preferredencoding,
read_batch_urls,
@@ -91,12 +91,11 @@ def get_urls(urls, batchfile, verbose):
def print_extractor_information(opts, urls):
- # Importing GenericIE is currently slow since it imports other extractors
- # TODO: Move this back to module level after generalization of embed detection
- from .extractor.generic import GenericIE
out = ''
if opts.list_extractors:
+ # Importing GenericIE is currently slow since it imports YoutubeIE
+ from .extractor.generic import GenericIE
urls = dict.fromkeys(urls, False)
for ie in list_extractor_classes(opts.age_limit):
out += ie.IE_NAME + (' (CURRENTLY BROKEN)' if not ie.working() else '') + '\n'
@@ -152,7 +151,7 @@ def set_compat_opts(opts):
else:
opts.embed_infojson = False
if 'format-sort' in opts.compat_opts:
- opts.format_sort.extend(InfoExtractor.FormatSort.ytdl_default)
+ opts.format_sort.extend(FormatSorter.ytdl_default)
_video_multistreams_set = set_default_compat('multistreams', 'allow_multiple_video_streams', False, remove_compat=False)
_audio_multistreams_set = set_default_compat('multistreams', 'allow_multiple_audio_streams', False, remove_compat=False)
if _video_multistreams_set is False and _audio_multistreams_set is False:
@@ -227,7 +226,7 @@ def validate_options(opts):
# Format sort
for f in opts.format_sort:
- validate_regex('format sorting', f, InfoExtractor.FormatSort.regex)
+ validate_regex('format sorting', f, FormatSorter.regex)
# Postprocessor formats
validate_regex('merge output format', opts.merge_output_format,
@@ -281,19 +280,19 @@ def validate_options(opts):
raise ValueError(f'invalid {key} retry sleep expression {expr!r}')
# Bytes
- def parse_bytes(name, value):
+ def validate_bytes(name, value):
if value is None:
return None
- numeric_limit = FileDownloader.parse_bytes(value)
+ numeric_limit = parse_bytes(value)
validate(numeric_limit is not None, 'rate limit', value)
return numeric_limit
- opts.ratelimit = parse_bytes('rate limit', opts.ratelimit)
- opts.throttledratelimit = parse_bytes('throttled rate limit', opts.throttledratelimit)
- opts.min_filesize = parse_bytes('min filesize', opts.min_filesize)
- opts.max_filesize = parse_bytes('max filesize', opts.max_filesize)
- opts.buffersize = parse_bytes('buffer size', opts.buffersize)
- opts.http_chunk_size = parse_bytes('http chunk size', opts.http_chunk_size)
+ opts.ratelimit = validate_bytes('rate limit', opts.ratelimit)
+ opts.throttledratelimit = validate_bytes('throttled rate limit', opts.throttledratelimit)
+ opts.min_filesize = validate_bytes('min filesize', opts.min_filesize)
+ opts.max_filesize = validate_bytes('max filesize', opts.max_filesize)
+ opts.buffersize = validate_bytes('buffer size', opts.buffersize)
+ opts.http_chunk_size = validate_bytes('http chunk size', opts.http_chunk_size)
# Output templates
def validate_outtmpl(tmpl, msg):
@@ -333,7 +332,7 @@ def validate_options(opts):
mobj = range_ != '-' and re.fullmatch(r'([^-]+)?\s*-\s*([^-]+)?', range_)
dur = mobj and (parse_timestamp(mobj.group(1) or '0'), parse_timestamp(mobj.group(2) or 'inf'))
if None in (dur or [None]):
- raise ValueError(f'invalid {name} time range "{regex}". Must be of the form *start-end')
+ raise ValueError(f'invalid {name} time range "{regex}". Must be of the form "*start-end"')
ranges.append(dur)
continue
try:
@@ -351,7 +350,7 @@ def validate_options(opts):
mobj = re.fullmatch(r'''(?x)
(?P<name>[^+:]+)
(?:\s*\+\s*(?P<keyring>[^:]+))?
- (?:\s*:\s*(?P<profile>.+?))?
+ (?:\s*:\s*(?!:)(?P<profile>.+?))?
(?:\s*::\s*(?P<container>.+))?
''', opts.cookiesfrombrowser)
if mobj is None:
@@ -387,10 +386,12 @@ def validate_options(opts):
raise ValueError(f'{cmd} is invalid; {err}')
yield action
- parse_metadata = opts.parse_metadata or []
if opts.metafromtitle is not None:
- parse_metadata.append('title:%s' % opts.metafromtitle)
+ opts.parse_metadata.setdefault('pre_process', []).append('title:%s' % opts.metafromtitle)
- opts.parse_metadata = list(itertools.chain(*map(metadataparser_actions, parse_metadata)))
+ opts.parse_metadata = {
+ k: list(itertools.chain(*map(metadataparser_actions, v)))
+ for k, v in opts.parse_metadata.items()
+ }
# Other options
if opts.playlist_items is not None:
@@ -562,11 +563,11 @@ def validate_options(opts):
def get_postprocessors(opts):
yield from opts.add_postprocessors
- if opts.parse_metadata:
+ for when, actions in opts.parse_metadata.items():
yield {
'key': 'MetadataParser',
- 'actions': opts.parse_metadata,
- 'when': 'pre_process'
+ 'actions': actions,
+ 'when': when
}
sponsorblock_query = opts.sponsorblock_mark | opts.sponsorblock_remove
if sponsorblock_query:
@@ -702,7 +703,7 @@ def parse_options(argv=None):
postprocessors = list(get_postprocessors(opts))
- print_only = bool(opts.forceprint) and all(k not in opts.forceprint for k in POSTPROCESS_WHEN[2:])
+ print_only = bool(opts.forceprint) and all(k not in opts.forceprint for k in POSTPROCESS_WHEN[3:])
any_getting = any(getattr(opts, k) for k in (
'dumpjson', 'dump_single_json', 'getdescription', 'getduration', 'getfilename',
'getformat', 'getid', 'getthumbnail', 'gettitle', 'geturl'
@@ -854,6 +855,7 @@ def parse_options(argv=None):
'legacyserverconnect': opts.legacy_server_connect,
'nocheckcertificate': opts.no_check_certificate,
'prefer_insecure': opts.prefer_insecure,
+ 'enable_file_urls': opts.enable_file_urls,
'http_headers': opts.headers,
'proxy': opts.proxy,
'socket_timeout': opts.socket_timeout,
@@ -962,6 +964,8 @@ def _real_main(argv=None):
def main(argv=None):
+ global _IN_CLI
+ _IN_CLI = True
try:
_exit(*variadic(_real_main(argv)))
except DownloadError:

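Since opts.parse_metadata is now a dict keyed by WHEN, get_postprocessors above emits one MetadataParser per stage instead of a single pre_process one. A sketch of the resulting postprocessor specs (action strings shown raw for brevity; the real code first maps them through metadataparser_actions):

parse_metadata = {
    'pre_process': ['title:%(artist)s - %(title)s'],
    'post_process': ['description:(?s)(?P<meta_comment>.+)'],
}
postprocessors = [
    {'key': 'MetadataParser', 'actions': actions, 'when': when}
    for when, actions in parse_metadata.items()
]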
View File

@@ -5,7 +5,7 @@
import sys
- if __package__ is None and not hasattr(sys, 'frozen'):
+ if __package__ is None and not getattr(sys, 'frozen', False):
# direct call of __main__.py
import os.path
path = os.path.realpath(os.path.abspath(__file__))
@@ -14,5 +14,4 @@ if __package__ is None and not hasattr(sys, 'frozen'):
import yt_dlp
if __name__ == '__main__':
- yt_dlp._IN_CLI = True
yt_dlp.main()

View File

@@ -28,11 +28,23 @@ def aes_cbc_encrypt_bytes(data, key, iv, **kwargs):
return intlist_to_bytes(aes_cbc_encrypt(*map(bytes_to_intlist, (data, key, iv)), **kwargs))
+ BLOCK_SIZE_BYTES = 16
def unpad_pkcs7(data):
return data[:-compat_ord(data[-1])]
- BLOCK_SIZE_BYTES = 16
+ def pkcs7_padding(data):
+ """
+ PKCS#7 padding
+ @param {int[]} data cleartext
+ @returns {int[]} padding data
+ """
+ remaining_length = BLOCK_SIZE_BYTES - len(data) % BLOCK_SIZE_BYTES
+ return data + [remaining_length] * remaining_length
def pad_block(block, padding_mode):
@@ -64,7 +76,7 @@ def pad_block(block, padding_mode):
def aes_ecb_encrypt(data, key, iv=None):
"""
- Encrypt with aes in ECB mode
+ Encrypt with aes in ECB mode. Using PKCS#7 padding
@param {int[]} data cleartext
@param {int[]} key 16/24/32-Byte cipher key
@@ -77,8 +89,7 @@ def aes_ecb_encrypt(data, key, iv=None):
encrypted_data = []
for i in range(block_count):
block = data[i * BLOCK_SIZE_BYTES: (i + 1) * BLOCK_SIZE_BYTES]
- encrypted_data += aes_encrypt(block, expanded_key)
+ encrypted_data += aes_encrypt(pkcs7_padding(block), expanded_key)
- encrypted_data = encrypted_data[:len(data)]
return encrypted_data
@@ -551,5 +562,6 @@ __all__ = [
'key_expansion',
'pad_block',
+ 'pkcs7_padding',
'unpad_pkcs7',
]

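A quick sanity check of the padding helper introduced above (int-list API, as used throughout yt_dlp.aes):

from yt_dlp.aes import BLOCK_SIZE_BYTES, pkcs7_padding

padded = pkcs7_padding(list(b'hello'))  # 5 data bytes -> 11 pad bytes, each of value 11
assert len(padded) == BLOCK_SIZE_BYTES and padded[-1] == 11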
View File

@@ -5,6 +5,7 @@ import os
import re
import shutil
import traceback
+ import urllib.parse
from .utils import expand_path, traverse_obj, version_tuple, write_json_file
from .version import __version__
@@ -22,11 +23,9 @@ class Cache:
return expand_path(res)
def _get_cache_fn(self, section, key, dtype):
- assert re.match(r'^[a-zA-Z0-9_.-]+$', section), \
- 'invalid section %r' % section
- assert re.match(r'^[a-zA-Z0-9_.-]+$', key), 'invalid key %r' % key
- return os.path.join(
- self._get_root_dir(), section, f'{key}.{dtype}')
+ assert re.match(r'^[\w.-]+$', section), f'invalid section {section!r}'
+ key = urllib.parse.quote(key, safe='').replace('%', ',')  # encode non-ascii characters
+ return os.path.join(self._get_root_dir(), section, f'{key}.{dtype}')
@property
def enabled(self):

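With the relaxed key handling above, any cache key now maps to a safe filename. The transformation in isolation (the example key is made up):

import urllib.parse

key = 'player:v1/§'  # arbitrary key, including non-ASCII
safe = urllib.parse.quote(key, safe='').replace('%', ',')
# reserved and non-ASCII characters are percent-quoted, then '%' is
# swapped for ',' so the result stays usable as a plain file name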
View File

@@ -14,7 +14,7 @@ passthrough_module(__name__, '._legacy', callback=lambda attr: warnings.warn(
# HTMLParseError has been deprecated in Python 3.3 and removed in
# Python 3.5. Introducing dummy exception for Python >3.5 for compatible
# and uniform cross-version exception handling
- class compat_HTMLParseError(Exception):
+ class compat_HTMLParseError(ValueError):
pass

View File

@@ -48,6 +48,7 @@ def compat_setenv(key, value, env=os.environ):
compat_basestring = str
+ compat_casefold = str.casefold
compat_chr = chr
compat_collections_abc = collections.abc
compat_cookiejar = http.cookiejar

yt_dlp/compat/shutil.py Normal file
View File

@@ -0,0 +1,30 @@
# flake8: noqa: F405
from shutil import * # noqa: F403
from .compat_utils import passthrough_module
passthrough_module(__name__, 'shutil')
del passthrough_module
import sys
if sys.platform.startswith('freebsd'):
import errno
import os
import shutil
# Workaround for PermissionError when using restricted ACL mode on FreeBSD
def copy2(src, dst, *args, **kwargs):
if os.path.isdir(dst):
dst = os.path.join(dst, os.path.basename(src))
shutil.copyfile(src, dst, *args, **kwargs)
try:
shutil.copystat(src, dst, *args, **kwargs)
except PermissionError as e:
if e.errno != getattr(errno, 'EPERM', None):
raise
return dst
def move(*args, copy_function=copy2, **kwargs):
return shutil.move(*args, copy_function=copy_function, **kwargs)

View File

@@ -999,8 +999,9 @@ def _parse_browser_specification(browser_name, profile=None, keyring=None, conta
class LenientSimpleCookie(http.cookies.SimpleCookie):
"""More lenient version of http.cookies.SimpleCookie"""
# From https://github.com/python/cpython/blob/v3.10.7/Lib/http/cookies.py
- _LEGAL_KEY_CHARS = r"\w\d!#%&'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\="
- _LEGAL_VALUE_CHARS = _LEGAL_KEY_CHARS + r"\[\]"
+ # We use Morsel's legal key chars to avoid errors on setting values
+ _LEGAL_KEY_CHARS = r'\w\d' + re.escape('!#$%&\'*+-.:^_`|~')
+ _LEGAL_VALUE_CHARS = _LEGAL_KEY_CHARS + re.escape('(),/<=>?@[]{}')
_RESERVED = {
"expires",
@@ -1046,25 +1047,17 @@ class LenientSimpleCookie(http.cookies.SimpleCookie):
return super().load(data)
morsel = None
- index = 0
- length = len(data)
- while 0 <= index < length:
- match = self._COOKIE_PATTERN.search(data, index)
- if not match:
- break
- index = match.end(0)
- if match.group("bad"):
+ for match in self._COOKIE_PATTERN.finditer(data):
+ if match.group('bad'):
morsel = None
continue
- key, value = match.group("key", "val")
- if key[0] == "$":
- if morsel is not None:
- morsel[key[1:]] = True
- continue
+ key, value = match.group('key', 'val')
+ is_attribute = False
+ if key.startswith('$'):
+ key = key[1:]
+ is_attribute = True
lower_key = key.lower()
if lower_key in self._RESERVED:
@@ -1081,6 +1074,9 @@ class LenientSimpleCookie(http.cookies.SimpleCookie):
morsel[key] = value
+ elif is_attribute:
+ morsel = None
elif value is not None:
morsel = self.get(key, http.cookies.Morsel())
real_value, coded_value = self.value_decode(value)

View File

@@ -15,15 +15,16 @@ from ..minicurses import (
 from ..utils import (
     IDENTITY,
     NO_DEFAULT,
-    NUMBER_RE,
     LockingUnsupportedError,
     Namespace,
     RetryManager,
     classproperty,
     decodeArgument,
+    deprecation_warning,
     encodeFilename,
     format_bytes,
     join_nonempty,
+    parse_bytes,
     remove_start,
     sanitize_open,
     shell_quote,
@@ -180,12 +181,9 @@ class FileDownloader:
     @staticmethod
     def parse_bytes(bytestr):
         """Parse a string indicating a byte quantity into an integer."""
-        matchobj = re.match(rf'(?i)^({NUMBER_RE})([kMGTPEZY]?)$', bytestr)
-        if matchobj is None:
-            return None
-        number = float(matchobj.group(1))
-        multiplier = 1024.0 ** 'bkmgtpezy'.index(matchobj.group(2).lower())
-        return int(round(number * multiplier))
+        deprecation_warning('yt_dlp.FileDownloader.parse_bytes is deprecated and '
+                            'may be removed in the future. Use yt_dlp.utils.parse_bytes instead')
+        return parse_bytes(bytestr)

     def slow_down(self, start_time, now, byte_counter):
         """Sleep if the download speed is over the rate limit."""
@@ -333,7 +331,7 @@ class FileDownloader:
                 return tmpl
             return default

-        _formats_bytes = lambda k: f'{format_bytes(s.get(k)):>10s}'
+        _format_bytes = lambda k: f'{format_bytes(s.get(k)):>10s}'

         if s['status'] == 'finished':
             if self.params.get('noprogress'):
@@ -342,7 +340,7 @@ class FileDownloader:
             s.update({
                 'speed': speed,
                 '_speed_str': self.format_speed(speed).strip(),
-                '_total_bytes_str': _formats_bytes('total_bytes'),
+                '_total_bytes_str': _format_bytes('total_bytes'),
                 '_elapsed_str': self.format_seconds(s.get('elapsed')),
                 '_percent_str': self.format_percent(100),
             })
@@ -363,9 +361,9 @@ class FileDownloader:
                 lambda: 100 * s['downloaded_bytes'] / s['total_bytes'],
                 lambda: 100 * s['downloaded_bytes'] / s['total_bytes_estimate'],
                 lambda: s['downloaded_bytes'] == 0 and 0)),
-            '_total_bytes_str': _formats_bytes('total_bytes'),
-            '_total_bytes_estimate_str': _formats_bytes('total_bytes_estimate'),
-            '_downloaded_bytes_str': _formats_bytes('downloaded_bytes'),
+            '_total_bytes_str': _format_bytes('total_bytes'),
+            '_total_bytes_estimate_str': _format_bytes('total_bytes_estimate'),
+            '_downloaded_bytes_str': _format_bytes('downloaded_bytes'),
             '_elapsed_str': self.format_seconds(s.get('elapsed')),
         })
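
Call sites should now use the utils version; its semantics match the removed implementation shown above (a sketch):

from yt_dlp.utils import parse_bytes

assert parse_bytes('10K') == 10 * 1024
assert parse_bytes('0.5M') == int(0.5 * 1024 ** 2)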


@@ -1,8 +1,9 @@
 import time
+import urllib.parse

 from . import get_suitable_downloader
 from .fragment import FragmentFD
-from ..utils import urljoin
+from ..utils import update_url_query, urljoin


 class DashSegmentsFD(FragmentFD):
@@ -40,7 +41,12 @@ class DashSegmentsFD(FragmentFD):
             self._prepare_and_start_frag_download(ctx, fmt)
             ctx['start'] = real_start

-            fragments_to_download = self._get_fragments(fmt, ctx)
+            extra_query = None
+            extra_param_to_segment_url = info_dict.get('extra_param_to_segment_url')
+            if extra_param_to_segment_url:
+                extra_query = urllib.parse.parse_qs(extra_param_to_segment_url)
+
+            fragments_to_download = self._get_fragments(fmt, ctx, extra_query)

             if real_downloader:
                 self.to_screen(
@@ -51,13 +57,13 @@ class DashSegmentsFD(FragmentFD):
             args.append([ctx, fragments_to_download, fmt])

-        return self.download_and_append_fragments_multiple(*args)
+        return self.download_and_append_fragments_multiple(*args, is_fatal=lambda idx: idx == 0)

     def _resolve_fragments(self, fragments, ctx):
         fragments = fragments(ctx) if callable(fragments) else fragments
         return [next(iter(fragments))] if self.params.get('test') else fragments

-    def _get_fragments(self, fmt, ctx):
+    def _get_fragments(self, fmt, ctx, extra_query):
         fragment_base_url = fmt.get('fragment_base_url')
         fragments = self._resolve_fragments(fmt['fragments'], ctx)
@@ -70,6 +76,8 @@ class DashSegmentsFD(FragmentFD):
             if not fragment_url:
                 assert fragment_base_url
                 fragment_url = urljoin(fragment_base_url, fragment['path'])
+                if extra_query:
+                    fragment_url = update_url_query(fragment_url, extra_query)

             yield {
                 'frag_index': frag_index,
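
How the new plumbing applies `extra_param_to_segment_url` to each DASH fragment (a sketch using the same helpers; the URL is hypothetical):

import urllib.parse
from yt_dlp.utils import update_url_query

extra_query = urllib.parse.parse_qs('token=abc&expiry=123')
# what _get_fragments now does for every joined fragment URL:
print(update_url_query('https://cdn.example.com/seg-1.m4s', extra_query))
# -> https://cdn.example.com/seg-1.m4s?token=abc&expiry=123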


@@ -1,9 +1,11 @@
 import enum
+import json
 import os.path
 import re
 import subprocess
 import sys
 import time
+import uuid

 from .fragment import FragmentFD
 from ..compat import functools
@@ -20,8 +22,10 @@ from ..utils import (
     determine_ext,
     encodeArgument,
     encodeFilename,
+    find_available_port,
     handle_youtubedl_headers,
     remove_end,
+    sanitized_Request,
     traverse_obj,
 )
@@ -60,7 +64,6 @@ class ExternalFD(FragmentFD):
             }
             if filename != '-':
                 fsize = os.path.getsize(encodeFilename(tmpfilename))
-                self.to_screen(f'\r[{self.get_basename()}] Downloaded {fsize} bytes')
                 self.try_rename(tmpfilename, filename)
                 status.update({
                     'downloaded_bytes': fsize,
@@ -129,8 +132,7 @@ class ExternalFD(FragmentFD):
         self._debug_cmd(cmd)

         if 'fragments' not in info_dict:
-            _, stderr, returncode = Popen.run(
-                cmd, text=True, stderr=subprocess.PIPE if self._CAPTURE_STDERR else None)
+            _, stderr, returncode = self._call_process(cmd, info_dict)
             if returncode and stderr:
                 self.to_stderr(stderr)
             return returncode
@@ -140,7 +142,7 @@ class ExternalFD(FragmentFD):
         retry_manager = RetryManager(self.params.get('fragment_retries'), self.report_retry,
                                      frag_index=None, fatal=not skip_unavailable_fragments)
         for retry in retry_manager:
-            _, stderr, returncode = Popen.run(cmd, text=True, stderr=subprocess.PIPE)
+            _, stderr, returncode = self._call_process(cmd, info_dict)
             if not returncode:
                 break
             # TODO: Decide whether to retry based on error code
@@ -172,6 +174,9 @@ class ExternalFD(FragmentFD):
             self.try_remove(encodeFilename('%s.frag.urls' % tmpfilename))
         return 0

+    def _call_process(self, cmd, info_dict):
+        return Popen.run(cmd, text=True, stderr=subprocess.PIPE)
+

 class CurlFD(ExternalFD):
     AVAILABLE_OPT = '-V'
@@ -256,6 +261,14 @@ class Aria2cFD(ExternalFD):
     def _aria2c_filename(fn):
         return fn if os.path.isabs(fn) else f'.{os.path.sep}{fn}'

+    def _call_downloader(self, tmpfilename, info_dict):
+        if 'no-external-downloader-progress' not in self.params.get('compat_opts', []):
+            info_dict['__rpc'] = {
+                'port': find_available_port() or 19190,
+                'secret': str(uuid.uuid4()),
+            }
+        return super()._call_downloader(tmpfilename, info_dict)
+
     def _make_cmd(self, tmpfilename, info_dict):
         cmd = [self.exe, '-c',
                '--console-log-level=warn', '--summary-interval=0', '--download-result=hide',
@@ -276,6 +289,12 @@ class Aria2cFD(ExternalFD):
         cmd += self._bool_option('--show-console-readout', 'noprogress', 'false', 'true', '=')
         cmd += self._configuration_args()

+        if '__rpc' in info_dict:
+            cmd += [
+                '--enable-rpc',
+                f'--rpc-listen-port={info_dict["__rpc"]["port"]}',
+                f'--rpc-secret={info_dict["__rpc"]["secret"]}']
+
         # aria2c strips out spaces from the beginning/end of filenames and paths.
         # We work around this issue by adding a "./" to the beginning of the
         # filename and relative path, and adding a "/" at the end of the path.
@@ -304,6 +323,88 @@ class Aria2cFD(ExternalFD):
         cmd += ['--', info_dict['url']]
         return cmd

+    def aria2c_rpc(self, rpc_port, rpc_secret, method, params=()):
+        # Does not actually need to be UUID, just unique
+        sanitycheck = str(uuid.uuid4())
+        d = json.dumps({
+            'jsonrpc': '2.0',
+            'id': sanitycheck,
+            'method': method,
+            'params': [f'token:{rpc_secret}', *params],
+        }).encode('utf-8')
+        request = sanitized_Request(
+            f'http://localhost:{rpc_port}/jsonrpc',
+            data=d, headers={
+                'Content-Type': 'application/json',
+                'Content-Length': f'{len(d)}',
+                'Ytdl-request-proxy': '__noproxy__',
+            })
+        with self.ydl.urlopen(request) as r:
+            resp = json.load(r)
+        assert resp.get('id') == sanitycheck, 'Something went wrong with RPC server'
+        return resp['result']
+
+    def _call_process(self, cmd, info_dict):
+        if '__rpc' not in info_dict:
+            return super()._call_process(cmd, info_dict)
+
+        send_rpc = functools.partial(self.aria2c_rpc, info_dict['__rpc']['port'], info_dict['__rpc']['secret'])
+        started = time.time()
+
+        fragmented = 'fragments' in info_dict
+        frag_count = len(info_dict['fragments']) if fragmented else 1
+        status = {
+            'filename': info_dict.get('_filename'),
+            'status': 'downloading',
+            'elapsed': 0,
+            'downloaded_bytes': 0,
+            'fragment_count': frag_count if fragmented else None,
+            'fragment_index': 0 if fragmented else None,
+        }
+        self._hook_progress(status, info_dict)
+
+        def get_stat(key, *obj, average=False):
+            val = tuple(filter(None, map(float, traverse_obj(obj, (..., ..., key))))) or [0]
+            return sum(val) / (len(val) if average else 1)
+
+        with Popen(cmd, text=True, stdout=subprocess.DEVNULL, stderr=subprocess.PIPE) as p:
+            # Add a small sleep so that RPC client can receive response,
+            # or the connection stalls infinitely
+            time.sleep(0.2)
+            retval = p.poll()
+            while retval is None:
+                # We don't use tellStatus as we won't know the GID without reading stdout
+                # Ref: https://aria2.github.io/manual/en/html/aria2c.html#aria2.tellActive
+                active = send_rpc('aria2.tellActive')
+                completed = send_rpc('aria2.tellStopped', [0, frag_count])
+
+                downloaded = get_stat('totalLength', completed) + get_stat('completedLength', active)
+                speed = get_stat('downloadSpeed', active)
+                total = frag_count * get_stat('totalLength', active, completed, average=True)
+                if total < downloaded:
+                    total = None
+
+                status.update({
+                    'downloaded_bytes': int(downloaded),
+                    'speed': speed,
+                    'total_bytes': None if fragmented else total,
+                    'total_bytes_estimate': total,
+                    'eta': (total - downloaded) / (speed or 1),
+                    'fragment_index': min(frag_count, len(completed) + 1) if fragmented else None,
+                    'elapsed': time.time() - started
+                })
+                self._hook_progress(status, info_dict)
+
+                if not active and len(completed) >= frag_count:
+                    send_rpc('aria2.shutdown')
+                    retval = p.wait()
+                    break
+
+                time.sleep(0.1)
+                retval = p.poll()
+
+            return '', p.stderr.read(), retval
+

 class HttpieFD(ExternalFD):
     AVAILABLE_OPT = '--version'
@@ -342,7 +443,6 @@ class FFmpegFD(ExternalFD):
                 and cls.can_download(info_dict))

     def _call_downloader(self, tmpfilename, info_dict):
-        urls = [f['url'] for f in info_dict.get('requested_formats', [])] or [info_dict['url']]
         ffpp = FFmpegPostProcessor(downloader=self)
         if not ffpp.available:
             self.report_error('m3u8 download detected but ffmpeg could not be found. Please install')
@@ -372,16 +472,6 @@ class FFmpegFD(ExternalFD):
             # http://trac.ffmpeg.org/ticket/6125#comment:10
             args += ['-seekable', '1' if seekable else '0']

-        http_headers = None
-        if info_dict.get('http_headers'):
-            youtubedl_headers = handle_youtubedl_headers(info_dict['http_headers'])
-            http_headers = [
-                # Trailing \r\n after each HTTP header is important to prevent warning from ffmpeg/avconv:
-                # [http @ 00000000003d2fa0] No trailing CRLF found in HTTP header.
-                '-headers',
-                ''.join(f'{key}: {val}\r\n' for key, val in youtubedl_headers.items())
-            ]
-
         env = None
         proxy = self.params.get('proxy')
         if proxy:
@@ -434,21 +524,26 @@ class FFmpegFD(ExternalFD):
         start_time, end_time = info_dict.get('section_start') or 0, info_dict.get('section_end')

-        for i, url in enumerate(urls):
-            if http_headers is not None and re.match(r'^https?://', url):
-                args += http_headers
+        selected_formats = info_dict.get('requested_formats') or [info_dict]
+        for i, fmt in enumerate(selected_formats):
+            if fmt.get('http_headers') and re.match(r'^https?://', fmt['url']):
+                headers_dict = handle_youtubedl_headers(fmt['http_headers'])
+                # Trailing \r\n after each HTTP header is important to prevent warning from ffmpeg/avconv:
+                # [http @ 00000000003d2fa0] No trailing CRLF found in HTTP header.
+                args.extend(['-headers', ''.join(f'{key}: {val}\r\n' for key, val in headers_dict.items())])
             if start_time:
                 args += ['-ss', str(start_time)]
             if end_time:
                 args += ['-t', str(end_time - start_time)]

-            args += self._configuration_args((f'_i{i + 1}', '_i')) + ['-i', url]
+            args += self._configuration_args((f'_i{i + 1}', '_i')) + ['-i', fmt['url']]

         if not (start_time or end_time) or not self.params.get('force_keyframes_at_cuts'):
             args += ['-c', 'copy']

         if info_dict.get('requested_formats') or protocol == 'http_dash_segments':
-            for (i, fmt) in enumerate(info_dict.get('requested_formats') or [info_dict]):
+            for i, fmt in enumerate(selected_formats):
                 stream_number = fmt.get('manifest_stream_number', 0)
                 args.extend(['-map', f'{i}:{stream_number}'])
@@ -488,8 +583,9 @@ class FFmpegFD(ExternalFD):
         args.append(encodeFilename(ffpp._ffmpeg_filename_argument(tmpfilename), True))
         self._debug_cmd(args)

+        piped = any(fmt['url'] in ('-', 'pipe:') for fmt in selected_formats)
         with Popen(args, stdin=subprocess.PIPE, env=env) as proc:
-            if url in ('-', 'pipe:'):
+            if piped:
                 self.on_process_started(proc, proc.stdin)
             try:
                 retval = proc.wait()
@@ -499,7 +595,7 @@ class FFmpegFD(ExternalFD):
                 # produces a file that is playable (this is mostly useful for live
                 # streams). Note that Windows is not affected and produces playable
                 # files (see https://github.com/ytdl-org/youtube-dl/issues/8300).
-                if isinstance(e, KeyboardInterrupt) and sys.platform != 'win32' and url not in ('-', 'pipe:'):
+                if isinstance(e, KeyboardInterrupt) and sys.platform != 'win32' and not piped:
                     proc.communicate_or_kill(b'q')
                 else:
                     proc.kill(timeout=None)
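
For context, the RPC polling above boils down to a plain JSON-RPC POST. A standalone sketch, not yt-dlp API, assuming an aria2c started with --enable-rpc --rpc-listen-port=19190 --rpc-secret=SECRET:

import json
import urllib.request

def aria2c_rpc(method, params=(), port=19190, secret='SECRET'):
    payload = json.dumps({
        'jsonrpc': '2.0',
        'id': 'probe',  # any unique id works
        'method': method,
        'params': [f'token:{secret}', *params],
    }).encode()
    req = urllib.request.Request(
        f'http://localhost:{port}/jsonrpc', data=payload,
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as r:
        return json.load(r)['result']

# poll progress the same way _call_process does
for download in aria2c_rpc('aria2.tellActive'):
    print(download['completedLength'], '/', download['totalLength'])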


@@ -424,6 +424,4 @@ class F4mFD(FragmentFD):
                 msg = 'Missed %d fragments' % (fragments_list[0][1] - (frag_i + 1))
                 self.report_warning(msg)

-        self._finish_frag_download(ctx, info_dict)
-
-        return True
+        return self._finish_frag_download(ctx, info_dict)
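
The same one-line change recurs in the ISM, MHTML and YouTube live-chat downloaders below: together with the fragment.py hunk that follows, `_finish_frag_download` can now report failure (e.g. an empty file) and that result is propagated instead of an unconditional True. A sketch of the resulting contract:

def real_download(self, filename, info_dict):
    ...
    # may now be False instead of always True
    return self._finish_frag_download(ctx, info_dict)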


@@ -295,16 +295,23 @@ class FragmentFD(FileDownloader):
         self.try_remove(ytdl_filename)
         elapsed = time.time() - ctx['started']

-        if ctx['tmpfilename'] == '-':
-            downloaded_bytes = ctx['complete_frags_downloaded_bytes']
+        to_file = ctx['tmpfilename'] != '-'
+        if to_file:
+            downloaded_bytes = os.path.getsize(encodeFilename(ctx['tmpfilename']))
         else:
+            downloaded_bytes = ctx['complete_frags_downloaded_bytes']
+
+        if not downloaded_bytes:
+            if to_file:
+                self.try_remove(ctx['tmpfilename'])
+            self.report_error('The downloaded file is empty')
+            return False
+        elif to_file:
             self.try_rename(ctx['tmpfilename'], ctx['filename'])
-            if self.params.get('updatetime', True):
-                filetime = ctx.get('fragment_filetime')
-                if filetime:
-                    with contextlib.suppress(Exception):
-                        os.utime(ctx['filename'], (time.time(), filetime))
-            downloaded_bytes = os.path.getsize(encodeFilename(ctx['filename']))
+            filetime = ctx.get('fragment_filetime')
+            if self.params.get('updatetime', True) and filetime:
+                with contextlib.suppress(Exception):
+                    os.utime(ctx['filename'], (time.time(), filetime))

         self._hook_progress({
             'downloaded_bytes': downloaded_bytes,
@@ -316,6 +323,7 @@ class FragmentFD(FileDownloader):
             'max_progress': ctx.get('max_progress'),
             'progress_idx': ctx.get('progress_idx'),
         }, info_dict)
+        return True

     def _prepare_external_frag_download(self, ctx):
         if 'live' not in ctx:
@@ -362,7 +370,7 @@ class FragmentFD(FileDownloader):
         return decrypt_fragment

-    def download_and_append_fragments_multiple(self, *args, pack_func=None, finish_func=None):
+    def download_and_append_fragments_multiple(self, *args, **kwargs):
         '''
         @params (ctx1, fragments1, info_dict1), (ctx2, fragments2, info_dict2), ...
                 all args must be either tuple or list
@@ -370,7 +378,7 @@ class FragmentFD(FileDownloader):
         interrupt_trigger = [True]
         max_progress = len(args)
         if max_progress == 1:
-            return self.download_and_append_fragments(*args[0], pack_func=pack_func, finish_func=finish_func)
+            return self.download_and_append_fragments(*args[0], **kwargs)
         max_workers = self.params.get('concurrent_fragment_downloads', 1)
         if max_progress > 1:
             self._prepare_multiline_status(max_progress)
@@ -380,8 +388,7 @@ class FragmentFD(FileDownloader):
                 ctx['max_progress'] = max_progress
                 ctx['progress_idx'] = idx
                 return self.download_and_append_fragments(
-                    ctx, fragments, info_dict, pack_func=pack_func, finish_func=finish_func,
-                    tpe=tpe, interrupt_trigger=interrupt_trigger)
+                    ctx, fragments, info_dict, **kwargs, tpe=tpe, interrupt_trigger=interrupt_trigger)

     class FTPE(concurrent.futures.ThreadPoolExecutor):
         # has to stop this or it's going to wait on the worker thread itself
@@ -428,17 +435,12 @@ class FragmentFD(FileDownloader):
         return result

     def download_and_append_fragments(
-            self, ctx, fragments, info_dict, *, pack_func=None, finish_func=None,
-            tpe=None, interrupt_trigger=None):
-        if not interrupt_trigger:
-            interrupt_trigger = (True, )
+            self, ctx, fragments, info_dict, *, is_fatal=(lambda idx: False),
+            pack_func=(lambda content, idx: content), finish_func=None,
+            tpe=None, interrupt_trigger=(True, )):

-        is_fatal = (
-            ((lambda _: False) if info_dict.get('is_live') else (lambda idx: idx == 0))
-            if self.params.get('skip_unavailable_fragments', True) else (lambda _: True))
-
-        if not pack_func:
-            pack_func = lambda frag_content, _: frag_content
+        if not self.params.get('skip_unavailable_fragments', True):
+            is_fatal = lambda _: True

         def download_fragment(fragment, ctx):
             if not interrupt_trigger[0]:
@@ -527,5 +529,4 @@ class FragmentFD(FileDownloader):
         if finish_func is not None:
             ctx['dest_stream'].write(finish_func())
             ctx['dest_stream'].flush()
-        self._finish_frag_download(ctx, info_dict)
-        return True
+        return self._finish_frag_download(ctx, info_dict)
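
With these keyword-only defaults, callers can now decide per fragment index whether a failure is fatal; the dash.py hunk earlier does exactly this (shown again as a usage note):

# DashSegmentsFD: only a failure of the very first fragment aborts the job
return self.download_and_append_fragments_multiple(
    *args, is_fatal=lambda idx: idx == 0)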


@@ -280,5 +280,4 @@ class IsmFD(FragmentFD):
                 return False
             self.report_skip_fragment(frag_index)

-        self._finish_frag_download(ctx, info_dict)
-        return True
+        return self._finish_frag_download(ctx, info_dict)


@@ -186,5 +186,4 @@ body > figure > img {
         ctx['dest_stream'].write(
             b'--%b--\r\n\r\n' % frag_boundary.encode('us-ascii'))

-        self._finish_frag_download(ctx, info_dict)
-        return True
+        return self._finish_frag_download(ctx, info_dict)


@@ -191,8 +191,7 @@ class YoutubeLiveChatFD(FragmentFD):
             if test:
                 break

-        self._finish_frag_download(ctx, info_dict)
-        return True
+        return self._finish_frag_download(ctx, info_dict)

     @staticmethod
     def parse_live_timestamp(action):


@@ -65,12 +65,20 @@ from .aenetworks import (
     HistoryPlayerIE,
     BiographyIE,
 )
+from .aeonco import AeonCoIE
 from .afreecatv import (
     AfreecaTVIE,
     AfreecaTVLiveIE,
     AfreecaTVUserIE,
 )
+from .agora import (
+    TokFMAuditionIE,
+    TokFMPodcastIE,
+    WyborczaPodcastIE,
+    WyborczaVideoIE,
+)
 from .airmozilla import AirMozillaIE
+from .airtv import AirTVIE
 from .aljazeera import AlJazeeraIE
 from .alphaporno import AlphaPornoIE
 from .amara import AmaraIE
@@ -79,7 +87,15 @@ from .alura import (
     AluraCourseIE
 )
 from .amcnetworks import AMCNetworksIE
-from .amazon import AmazonStoreIE
+from .amazon import (
+    AmazonStoreIE,
+    AmazonReviewsIE,
+)
+from .amazonminitv import (
+    AmazonMiniTVIE,
+    AmazonMiniTVSeasonIE,
+    AmazonMiniTVSeriesIE,
+)
 from .americastestkitchen import (
     AmericasTestKitchenIE,
     AmericasTestKitchenSeasonIE,
@@ -171,6 +187,10 @@ from .bbc import (
 from .beeg import BeegIE
 from .behindkink import BehindKinkIE
 from .bellmedia import BellMediaIE
+from .beatbump import (
+    BeatBumpVideoIE,
+    BeatBumpPlaylistIE,
+)
 from .beatport import BeatportIE
 from .berufetv import BerufeTVIE
 from .bet import BetIE
@@ -186,9 +206,10 @@ from .bigo import BigoIE
 from .bild import BildIE
 from .bilibili import (
     BiliBiliIE,
+    BiliBiliBangumiIE,
+    BiliBiliBangumiMediaIE,
     BiliBiliSearchIE,
     BilibiliCategoryIE,
-    BiliBiliBangumiIE,
     BilibiliAudioIE,
     BilibiliAudioAlbumIE,
     BiliBiliPlayerIE,
@@ -247,6 +268,7 @@ from .camdemy import (
     CamdemyFolderIE
 )
 from .cammodels import CamModelsIE
+from .camsoda import CamsodaIE
 from .camtasia import CamtasiaEmbedIE
 from .camwithher import CamWithHerIE
 from .canalalpha import CanalAlphaIE
@@ -310,6 +332,7 @@ from .chirbit import (
 )
 from .cinchcast import CinchcastIE
 from .cinemax import CinemaxIE
+from .cinetecamilano import CinetecaMilanoIE
 from .ciscolive import (
     CiscoLiveSessionIE,
     CiscoLiveSearchIE,
@@ -364,8 +387,6 @@ from .crowdbunker import (
     CrowdBunkerChannelIE,
 )
 from .crunchyroll import (
-    CrunchyrollIE,
-    CrunchyrollShowPlaylistIE,
     CrunchyrollBetaIE,
     CrunchyrollBetaShowIE,
 )
@@ -440,6 +461,7 @@ from .dplay import (
     AnimalPlanetIE,
     TLCIE,
     MotorTrendIE,
+    MotorTrendOnDemandIE,
     DiscoveryPlusIndiaIE,
     DiscoveryNetworksDeIE,
     DiscoveryPlusItalyIE,
@@ -461,11 +483,14 @@ from .duboku import (
 )
 from .dumpert import DumpertIE
 from .defense import DefenseGouvFrIE
+from .deuxm import (
+    DeuxMIE,
+    DeuxMNewsIE
+)
 from .digitalconcerthall import DigitalConcertHallIE
 from .discovery import DiscoveryIE
 from .disney import DisneyIE
 from .dispeak import DigitallySpeakingIE
-from .doodstream import DoodStreamIE
 from .dropbox import DropboxIE
 from .dropout import (
     DropoutSeasonIE,
@@ -519,7 +544,7 @@ from .espn import (
     ESPNCricInfoIE,
 )
 from .esri import EsriVideoIE
-from .europa import EuropaIE
+from .europa import EuropaIE, EuroParlWebstreamIE
 from .europeantour import EuropeanTourIE
 from .eurosport import EurosportIE
 from .euscreen import EUScreenIE
@@ -577,6 +602,7 @@ from .foxgay import FoxgayIE
 from .foxnews import (
     FoxNewsIE,
     FoxNewsArticleIE,
+    FoxNewsVideoIE,
 )
 from .foxsports import FoxSportsIE
 from .fptplay import FptplayIE
@@ -627,6 +653,10 @@ from .gazeta import GazetaIE
 from .gdcvault import GDCVaultIE
 from .gedidigital import GediDigitalIE
 from .generic import GenericIE
+from .genius import (
+    GeniusIE,
+    GeniusLyricsIE,
+)
 from .gettr import (
     GettrIE,
     GettrStreamingIE,
@@ -683,6 +713,7 @@ from .hotstar import (
     HotStarIE,
     HotStarPrefixIE,
     HotStarPlaylistIE,
+    HotStarSeasonIE,
     HotStarSeriesIE,
 )
 from .howcast import HowcastIE
@@ -696,7 +727,10 @@ from .hse import (
     HSEShowIE,
     HSEProductIE,
 )
-from .genericembeds import HTML5MediaEmbedIE
+from .genericembeds import (
+    HTML5MediaEmbedIE,
+    QuotedHTMLIE,
+)
 from .huajiao import HuajiaoIE
 from .huya import HuyaLiveIE
 from .huffpost import HuffPostIE
@@ -786,12 +820,21 @@ from .jamendo import (
     JamendoIE,
     JamendoAlbumIE,
 )
+from .japandiet import (
+    ShugiinItvLiveIE,
+    ShugiinItvLiveRoomIE,
+    ShugiinItvVodIE,
+    SangiinInstructionIE,
+    SangiinIE,
+)
 from .jeuxvideo import JeuxVideoIE
 from .jove import JoveIE
 from .joj import JojIE
 from .jwplatform import JWPlatformIE
 from .kakao import KakaoIE
 from .kaltura import KalturaIE
+from .kanal2 import Kanal2IE
+from .kankanews import KankaNewsIE
 from .karaoketv import KaraoketvIE
 from .karrierevideos import KarriereVideosIE
 from .keezmovies import KeezMoviesIE
@@ -801,6 +844,10 @@ from .khanacademy import (
     KhanAcademyIE,
     KhanAcademyUnitIE,
 )
+from .kick import (
+    KickIE,
+    KickVODIE,
+)
 from .kicker import KickerIE
 from .kickstarter import KickStarterIE
 from .kinja import KinjaEmbedIE
@@ -885,6 +932,7 @@ from .linkedin import (
 )
 from .linuxacademy import LinuxAcademyIE
 from .liputan6 import Liputan6IE
+from .listennotes import ListenNotesIE
 from .litv import LiTVIE
 from .livejournal import LiveJournalIE
 from .livestream import (
@@ -947,6 +995,10 @@ from .mediasite import (
     MediasiteCatalogIE,
     MediasiteNamedCatalogIE,
 )
+from .mediastream import (
+    MediaStreamIE,
+    WinSportsVideoIE,
+)
 from .mediaworksnz import MediaWorksNZVODIE
 from .medici import MediciIE
 from .megaphone import MegaphoneIE
@@ -998,6 +1050,7 @@ from .mlb import (
     MLBIE,
     MLBVideoIE,
     MLBTVIE,
+    MLBArticleIE,
 )
 from .mlssoccer import MLSSoccerIE
 from .mnet import MnetIE
@@ -1114,6 +1167,7 @@ from .neteasemusic import (
 from .netverse import (
     NetverseIE,
     NetversePlaylistIE,
+    NetverseSearchIE,
 )
 from .newgrounds import (
     NewgroundsIE,
@@ -1175,11 +1229,13 @@ from .nintendo import NintendoIE
 from .nitter import NitterIE
 from .njpwworld import NJPWWorldIE
 from .nobelprize import NobelPrizeIE
+from .noice import NoicePodcastIE
 from .nonktube import NonkTubeIE
 from .noodlemagazine import NoodleMagazineIE
 from .noovo import NoovoIE
 from .normalboots import NormalbootsIE
 from .nosvideo import NosVideoIE
+from .nosnl import NOSNLArticleIE
 from .nova import (
     NovaEmbedIE,
     NovaIE,
@@ -1229,12 +1285,17 @@ from .nzherald import NZHeraldIE
 from .nzz import NZZIE
 from .odatv import OdaTVIE
 from .odnoklassniki import OdnoklassnikiIE
+from .oftv import (
+    OfTVIE,
+    OfTVPlaylistIE
+)
 from .oktoberfesttv import OktoberfestTVIE
 from .olympics import OlympicsReplayIE
 from .on24 import On24IE
 from .ondemandkorea import OnDemandKoreaIE
 from .onefootball import OneFootballIE
 from .onenewsnz import OneNewsNZIE
+from .oneplace import OnePlacePodcastIE
 from .onet import (
     OnetIE,
     OnetChannelIE,
@@ -1343,6 +1404,7 @@ from .pluralsight import (
     PluralsightIE,
     PluralsightCourseIE,
 )
+from .podbayfm import PodbayFMIE, PodbayFMChannelIE
 from .podchaser import PodchaserIE
 from .podomatic import PodomaticIE
 from .pokemon import (
@@ -1356,6 +1418,8 @@ from .pokergo import (
 from .polsatgo import PolsatGoIE
 from .polskieradio import (
     PolskieRadioIE,
+    PolskieRadioLegacyIE,
+    PolskieRadioAuditionIE,
     PolskieRadioCategoryIE,
     PolskieRadioPlayerIE,
     PolskieRadioPodcastIE,
@@ -1397,6 +1461,7 @@ from .prx import (
 )
 from .puls4 import Puls4IE
 from .pyvideo import PyvideoIE
+from .qingting import QingTingIE
 from .qqmusic import (
     QQMusicIE,
     QQMusicSingerIE,
@@ -1524,6 +1589,7 @@ from .ruhd import RUHDIE
 from .rule34video import Rule34VideoIE
 from .rumble import (
     RumbleEmbedIE,
+    RumbleIE,
     RumbleChannelIE,
 )
 from .rutube import (
@@ -1564,7 +1630,9 @@ from .samplefocus import SampleFocusIE
 from .sapo import SapoIE
 from .savefrom import SaveFromIE
 from .sbs import SBSIE
+from .screen9 import Screen9IE
 from .screencast import ScreencastIE
+from .screencastify import ScreencastifyIE
 from .screencastomatic import ScreencastOMaticIE
 from .scrippsnetworks import (
     ScrippsNetworksWatchIE,
@@ -1594,6 +1662,7 @@ from .shared import (
     VivoIE,
 )
 from .sharevideos import ShareVideosEmbedIE
+from .sibnet import SibnetEmbedIE
 from .shemaroome import ShemarooMeIE
 from .showroomlive import ShowRoomLiveIE
 from .simplecast import (
@@ -1609,7 +1678,6 @@ from .skyit import (
     SkyItVideoIE,
     SkyItVideoLiveIE,
     SkyItIE,
-    SkyItAcademyIE,
     SkyItArteIE,
     CieloTVItIE,
     TV8ItIE,
@@ -1642,6 +1710,7 @@ from .soundcloud import (
     SoundcloudSetIE,
     SoundcloudRelatedIE,
     SoundcloudUserIE,
+    SoundcloudUserPermalinkIE,
     SoundcloudTrackStationIE,
     SoundcloudPlaylistIE,
     SoundcloudSearchIE,
@@ -1729,6 +1798,7 @@ from .svt import (
     SVTPlayIE,
     SVTSeriesIE,
 )
+from .swearnet import SwearnetEpisodeIE
 from .swrmediathek import SWRMediathekIE
 from .syvdk import SYVDKIE
 from .syfy import SyfyIE
@@ -1802,6 +1872,11 @@ from .theweatherchannel import TheWeatherChannelIE
 from .thisamericanlife import ThisAmericanLifeIE
 from .thisav import ThisAVIE
 from .thisoldhouse import ThisOldHouseIE
+from .thisvid import (
+    ThisVidIE,
+    ThisVidMemberIE,
+    ThisVidPlaylistIE,
+)
 from .threespeak import (
     ThreeSpeakIE,
     ThreeSpeakUserIE,
@@ -1851,6 +1926,7 @@ from .trovo import (
     TrovoChannelVodIE,
     TrovoChannelClipIE,
 )
+from .trtcocuk import TrtCocukVideoIE
 from .trueid import TrueIDIE
 from .trunews import TruNewsIE
 from .truth import TruthIE
@@ -1879,7 +1955,6 @@ from .tv2 import (
 )
 from .tv24ua import (
     TV24UAVideoIE,
-    TV24UAGenericPassthroughIE
 )
 from .tv2dk import (
     TV2DKIE,
@@ -1930,7 +2005,8 @@ from .tvp import (
     TVPEmbedIE,
     TVPIE,
     TVPStreamIE,
-    TVPWebsiteIE,
+    TVPVODSeriesIE,
+    TVPVODVideoIE,
 )
 from .tvplay import (
     TVPlayIE,
@@ -1961,6 +2037,7 @@ from .twitter import (
     TwitterIE,
     TwitterAmplifyIE,
     TwitterBroadcastIE,
+    TwitterSpacesIE,
     TwitterShortenerIE,
 )
 from .udemy import (
@@ -1984,6 +2061,7 @@ from .umg import UMGDeIE
 from .unistra import UnistraIE
 from .unity import UnityIE
 from .unscripted import UnscriptedNewsVideoIE
+from .unsupported import KnownDRMIE, KnownPiracyIE
 from .uol import UOLIE
 from .uplynk import (
     UplynkIE,
@@ -2003,7 +2081,10 @@ from .varzesh3 import Varzesh3IE
 from .vbox7 import Vbox7IE
 from .veehd import VeeHDIE
 from .veo import VeoIE
-from .veoh import VeohIE
+from .veoh import (
+    VeohIE,
+    VeohUserIE
+)
 from .vesti import VestiIE
 from .vevo import (
     VevoIE,
@@ -2029,6 +2110,13 @@ from .videocampus_sachsen import (
 )
 from .videodetective import VideoDetectiveIE
 from .videofyme import VideofyMeIE
+from .videoken import (
+    VideoKenIE,
+    VideoKenPlayerIE,
+    VideoKenPlaylistIE,
+    VideoKenCategoryIE,
+    VideoKenTopicIE,
+)
 from .videomore import (
     VideomoreIE,
     VideomoreVideoIE,
@@ -2053,6 +2141,7 @@ from .vimeo import (
     VimeoGroupsIE,
     VimeoLikesIE,
     VimeoOndemandIE,
+    VimeoProIE,
     VimeoReviewIE,
     VimeoUserIE,
     VimeoWatchLaterIE,
@@ -2140,6 +2229,7 @@ from .wdr import (
     WDRElefantIE,
     WDRMobileIE,
 )
+from .webcamerapl import WebcameraplIE
 from .webcaster import (
     WebcasterIE,
     WebcasterFeedIE,
@@ -2162,7 +2252,10 @@ from .wistia import (
     WistiaPlaylistIE,
     WistiaChannelIE,
 )
-from .wordpress import WordpressPlaylistEmbedIE
+from .wordpress import (
+    WordpressPlaylistEmbedIE,
+    WordpressMiniAudioPlayerEmbedIE,
+)
 from .worldstarhiphop import WorldStarHipHopIE
 from .wppilot import (
     WPPilotIE,
@@ -2181,12 +2274,6 @@ from .xhamster import (
     XHamsterEmbedIE,
     XHamsterUserIE,
 )
-from .xiami import (
-    XiamiSongIE,
-    XiamiAlbumIE,
-    XiamiArtistIE,
-    XiamiCollectionIE
-)
 from .ximalaya import (
     XimalayaIE,
     XimalayaAlbumIE
@@ -2223,6 +2310,7 @@ from .yandexvideo import (
 from .yapfiles import YapFilesIE
 from .yesjapan import YesJapanIE
 from .yinyuetai import YinYueTaiIE
+from .yle_areena import YleAreenaIE
 from .ynet import YnetIE
 from .youjizz import YouJizzIE
 from .youku import (
@@ -2285,6 +2373,7 @@ from .zee5 import (
     Zee5IE,
     Zee5SeriesIE,
 )
+from .zeenews import ZeeNewsIE
 from .zhihu import ZhihuIE
 from .zingmp3 import (
     ZingMp3IE,
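
A quick sanity check that a newly registered extractor is wired up (a sketch; get_info_extractor looks the class up by its ie_key, and the URL here is only illustrative):

import yt_dlp

ie = yt_dlp.extractor.get_info_extractor('TokFMPodcast')
print(ie.suitable('https://audycje.tokfm.pl/podcast/91275,foo'))  # True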


@@ -155,8 +155,6 @@ class ABCIE(InfoExtractor):
                 'format_id': format_id
             })

-        self._sort_formats(formats)
-
         return {
             'id': video_id,
             'title': self._og_search_title(webpage),
@@ -221,7 +219,6 @@ class ABCIViewIE(InfoExtractor):
                 entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)
             if formats:
                 break
-        self._sort_formats(formats)

         subtitles = {}
         src_vtt = stream.get('captions', {}).get('src-vtt')


@@ -78,7 +78,6 @@ class ABCOTVSIE(InfoExtractor):
                 'url': mp4_url,
                 'width': 640,
             })
-        self._sort_formats(formats)

         image = video.get('image') or {}
@@ -119,7 +118,6 @@ class ABCOTVSClipsIE(InfoExtractor):
         title = video_data['title']
         formats = self._extract_m3u8_formats(
             video_data['videoURL'].split('?')[0], video_id, 'mp4')
-        self._sort_formats(formats)

         return {
             'id': video_id,


@@ -27,7 +27,6 @@ class AcFunVideoBaseIE(InfoExtractor):
                 **parse_codecs(video.get('codecs', ''))
             })
-        self._sort_formats(formats)

         return {
             'id': video_id,
             'formats': formats,
@@ -161,7 +160,7 @@ class AcFunBangumiIE(AcFunVideoBaseIE):
     def _real_extract(self, url):
         video_id = self._match_id(url)
         ac_idx = parse_qs(url).get('ac', [None])[-1]
-        video_id = f'{video_id}{format_field(ac_idx, template="__%s")}'
+        video_id = f'{video_id}{format_field(ac_idx, None, "__%s")}'

         webpage = self._download_webpage(url, video_id)
         json_bangumi_data = self._search_json(r'window.bangumiData\s*=', webpage, 'bangumiData', video_id)
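
The call now passes field and template positionally. For reference, a sketch of format_field's semantics with these arguments:

from yt_dlp.utils import format_field

print(format_field('3', None, '__%s'))   # '__3'
print(format_field(None, None, '__%s'))  # ''  (falls back to the default)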


@@ -28,30 +28,34 @@ from ..utils import (


 class ADNIE(InfoExtractor):
-    IE_DESC = 'Anime Digital Network'
-    _VALID_URL = r'https?://(?:www\.)?animedigitalnetwork\.fr/video/[^/]+/(?P<id>\d+)'
-    _TEST = {
-        'url': 'http://animedigitalnetwork.fr/video/blue-exorcist-kyoto-saga/7778-episode-1-debut-des-hostilites',
-        'md5': '0319c99885ff5547565cacb4f3f9348d',
-        'info_dict': {
-            'id': '7778',
-            'ext': 'mp4',
-            'title': 'Blue Exorcist - Kyôto Saga - Episode 1',
-            'description': 'md5:2f7b5aa76edbc1a7a92cedcda8a528d5',
-            'series': 'Blue Exorcist - Kyôto Saga',
-            'duration': 1467,
-            'release_date': '20170106',
-            'comment_count': int,
-            'average_rating': float,
-            'season_number': 2,
-            'episode': 'Début des hostilités',
-            'episode_number': 1,
-        }
-    }
+    IE_DESC = 'Animation Digital Network'
+    _VALID_URL = r'https?://(?:www\.)?(?:animation|anime)digitalnetwork\.fr/video/[^/]+/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://animationdigitalnetwork.fr/video/fruits-basket/9841-episode-1-a-ce-soir',
+        'md5': '1c9ef066ceb302c86f80c2b371615261',
+        'info_dict': {
+            'id': '9841',
+            'ext': 'mp4',
+            'title': 'Fruits Basket - Episode 1',
+            'description': 'md5:14be2f72c3c96809b0ca424b0097d336',
+            'series': 'Fruits Basket',
+            'duration': 1437,
+            'release_date': '20190405',
+            'comment_count': int,
+            'average_rating': float,
+            'season_number': 1,
+            'episode': 'À ce soir !',
+            'episode_number': 1,
+        },
+        'skip': 'Only available in region (FR, ...)',
+    }, {
+        'url': 'http://animedigitalnetwork.fr/video/blue-exorcist-kyoto-saga/7778-episode-1-debut-des-hostilites',
+        'only_matching': True,
+    }]

-    _NETRC_MACHINE = 'animedigitalnetwork'
-    _BASE_URL = 'http://animedigitalnetwork.fr'
-    _API_BASE_URL = 'https://gw.api.animedigitalnetwork.fr/'
+    _NETRC_MACHINE = 'animationdigitalnetwork'
+    _BASE = 'animationdigitalnetwork.fr'
+    _API_BASE_URL = 'https://gw.api.' + _BASE + '/'
     _PLAYER_BASE_URL = _API_BASE_URL + 'player/'
     _HEADERS = {}
     _LOGIN_ERR_MESSAGE = 'Unable to log in'
@@ -75,11 +79,11 @@ class ADNIE(InfoExtractor):
         if subtitle_location:
             enc_subtitles = self._download_webpage(
                 subtitle_location, video_id, 'Downloading subtitles data',
-                fatal=False, headers={'Origin': 'https://animedigitalnetwork.fr'})
+                fatal=False, headers={'Origin': 'https://' + self._BASE})
         if not enc_subtitles:
             return None

-        # http://animedigitalnetwork.fr/components/com_vodvideo/videojs/adn-vjs.min.js
+        # http://animationdigitalnetwork.fr/components/com_vodvideo/videojs/adn-vjs.min.js
         dec_subtitles = unpad_pkcs7(aes_cbc_decrypt_bytes(
             compat_b64decode(enc_subtitles[24:]),
             binascii.unhexlify(self._K + '7fac1178830cfe0c'),
@@ -164,7 +168,7 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
         }, data=b'')['token']

         links_url = try_get(options, lambda x: x['video']['url']) or (video_base_url + 'link')
-        self._K = ''.join([random.choice('0123456789abcdef') for _ in range(16)])
+        self._K = ''.join(random.choices('0123456789abcdef', k=16))
         message = bytes_to_intlist(json.dumps({
             'k': self._K,
             't': token,
@@ -231,7 +235,6 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
             for f in m3u8_formats:
                 f['language'] = 'fr'
             formats.extend(m3u8_formats)
-        self._sort_formats(formats)

         video = (self._download_json(
             self._API_BASE_URL + 'video/%s' % video_id, video_id,
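
The widened pattern accepts both the old and the rebranded domain (illustrative check using the regex from the hunk above):

import re

_VALID_URL = r'https?://(?:www\.)?(?:animation|anime)digitalnetwork\.fr/video/[^/]+/(?P<id>\d+)'
for url in ('http://animedigitalnetwork.fr/video/show/7778-episode-1',
            'https://animationdigitalnetwork.fr/video/fruits-basket/9841-episode-1-a-ce-soir'):
    print(re.match(_VALID_URL, url).group('id'))  # 7778, then 9841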


@@ -1352,7 +1352,7 @@ MSO_INFO = {
 }


-class AdobePassIE(InfoExtractor):
+class AdobePassIE(InfoExtractor):  # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor
     _SERVICE_PROVIDER_TEMPLATE = 'https://sp.auth.adobe.com/adobe-services/%s'
     _USER_AGENT = 'Mozilla/5.0 (X11; Linux i686; rv:47.0) Gecko/20100101 Firefox/47.0'
     _MVPD_CACHE = 'ap-mvpd'


@@ -70,7 +70,6 @@ class AdobeTVBaseIE(InfoExtractor):
             })
             s3_extracted = True
             formats.append(f)
-        self._sort_formats(formats)

         return {
             'id': video_id,
@@ -269,7 +268,6 @@ class AdobeTVVideoIE(AdobeTVBaseIE):
                 'width': int_or_none(source.get('width') or None),
                 'url': source_src,
             })
-        self._sort_formats(formats)

         # For both metadata and downloaded files the duration varies among
         # formats. I just pick the max one


@@ -180,7 +180,6 @@ class AdultSwimIE(TurnerBaseIE):
                     info['subtitles'].setdefault('en', []).append({
                         'url': asset_url,
                     })
-            self._sort_formats(info['formats'])

             return info
         else:


@@ -8,7 +8,7 @@ from ..utils import (
 )


-class AENetworksBaseIE(ThePlatformIE):
+class AENetworksBaseIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
     _BASE_URL_REGEX = r'''(?x)https?://
         (?:(?:www|play|watch)\.)?
         (?P<domain>
@@ -62,7 +62,6 @@ class AENetworksBaseIE(ThePlatformIE):
             subtitles = self._merge_subtitles(subtitles, tp_subtitles)
         if last_e and not formats:
             raise last_e
-        self._sort_formats(formats)
         return {
             'id': video_id,
             'formats': formats,
@@ -304,7 +303,6 @@ class HistoryTopicIE(AENetworksBaseIE):
 class HistoryPlayerIE(AENetworksBaseIE):
     IE_NAME = 'history:player'
     _VALID_URL = r'https?://(?:www\.)?(?P<domain>(?:history|biography)\.com)/player/(?P<id>\d+)'
-    _TESTS = []

     def _real_extract(self, url):
         domain, video_id = self._match_valid_url(url).groups()


@@ -0,0 +1,40 @@
from .common import InfoExtractor
from .vimeo import VimeoIE
class AeonCoIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?aeon\.co/videos/(?P<id>[^/?]+)'
_TESTS = [{
'url': 'https://aeon.co/videos/raw-solar-storm-footage-is-the-punk-rock-antidote-to-sleek-james-webb-imagery',
'md5': 'e5884d80552c9b6ea8d268a258753362',
'info_dict': {
'id': '1284717',
'ext': 'mp4',
'title': 'Brilliant Noise',
'thumbnail': 'https://i.vimeocdn.com/video/21006315-1a1e49da8b07fd908384a982b4ba9ff0268c509a474576ebdf7b1392f4acae3b-d_960',
'uploader': 'Semiconductor',
'uploader_id': 'semiconductor',
'uploader_url': 'https://vimeo.com/semiconductor',
'duration': 348
}
}, {
'url': 'https://aeon.co/videos/dazzling-timelapse-shows-how-microbes-spoil-our-food-and-sometimes-enrich-it',
'md5': '4e5f3dad9dbda0dbfa2da41a851e631e',
'info_dict': {
'id': '728595228',
'ext': 'mp4',
'title': 'Wrought',
'thumbnail': 'https://i.vimeocdn.com/video/1484618528-c91452611f9a4e4497735a533da60d45b2fe472deb0c880f0afaab0cd2efb22a-d_1280',
'uploader': 'Biofilm Productions',
'uploader_id': 'user140352216',
'uploader_url': 'https://vimeo.com/user140352216',
'duration': 1344
}
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
vimeo_id = self._search_regex(r'hosterId":\s*"(?P<id>[0-9]+)', webpage, 'vimeo id')
vimeo_url = VimeoIE._smuggle_referrer(f'https://player.vimeo.com/video/{vimeo_id}', 'https://aeon.co')
return self.url_result(vimeo_url, VimeoIE)


@@ -338,7 +338,6 @@ class AfreecaTVIE(InfoExtractor):
             }]
             if not formats and not self.get_param('ignore_no_formats'):
                 continue
-            self._sort_formats(formats)
             file_info = common_entry.copy()
             file_info.update({
                 'id': format_id,
@@ -380,7 +379,7 @@ class AfreecaTVIE(InfoExtractor):
         return info


-class AfreecaTVLiveIE(AfreecaTVIE):
+class AfreecaTVLiveIE(AfreecaTVIE):  # XXX: Do not subclass from concrete IE
     IE_NAME = 'afreecatv:live'
     _VALID_URL = r'https?://play\.afreeca(?:tv)?\.com/(?P<id>[^/]+)(?:/(?P<bno>\d+))?'
@@ -464,8 +463,6 @@ class AfreecaTVLiveIE(AfreecaTVIE):
                 'quality': quality_key(quality_str),
             })

-        self._sort_formats(formats)
-
         station_info = self._download_json(
             'https://st.afreecatv.com/api/get_station_status.php', broadcast_no,
             query={'szBjId': broadcaster_id}, fatal=False,

yt_dlp/extractor/agora.py (new file, 251 lines)

@@ -0,0 +1,251 @@
import functools
import uuid
from .common import InfoExtractor
from ..utils import (
ExtractorError,
OnDemandPagedList,
int_or_none,
month_by_name,
parse_duration,
try_call,
)
class WyborczaVideoIE(InfoExtractor):
# this id is not an article id, it has to be extracted from the article
_VALID_URL = r'(?:wyborcza:video:|https?://wyborcza\.pl/(?:api-)?video/)(?P<id>\d+)'
IE_NAME = 'wyborcza:video'
_TESTS = [{
'url': 'wyborcza:video:26207634',
'info_dict': {
'id': '26207634',
'ext': 'mp4',
'title': '- Polska w 2020 r. jest innym państwem niż w 2015 r. Nie zmieniła się konstytucja, ale jest to już inny ustrój - mówi Adam Bodnar',
'description': ' ',
'uploader': 'Dorota Roman',
'duration': 2474,
'thumbnail': r're:https://.+\.jpg',
},
}, {
'url': 'https://wyborcza.pl/video/26207634',
'only_matching': True,
}, {
'url': 'https://wyborcza.pl/api-video/26207634',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
meta = self._download_json(f'https://wyborcza.pl/api-video/{video_id}', video_id)
formats = []
base_url = meta['redirector'].replace('http://', 'https://') + meta['basePath']
for quality in ('standard', 'high'):
if not meta['files'].get(quality):
continue
formats.append({
'url': base_url + meta['files'][quality],
'height': int_or_none(
self._search_regex(
r'p(\d+)[a-z]+\.mp4$', meta['files'][quality],
'mp4 video height', default=None)),
'format_id': quality,
})
if meta['files'].get('dash'):
formats.extend(self._extract_mpd_formats(base_url + meta['files']['dash'], video_id))
return {
'id': video_id,
'formats': formats,
'title': meta.get('title'),
'description': meta.get('lead'),
'uploader': meta.get('signature'),
'thumbnail': meta.get('imageUrl'),
'duration': meta.get('duration'),
}
class WyborczaPodcastIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://(?:www\.)?(?:
wyborcza\.pl/podcast(?:/0,172673\.html)?|
wysokieobcasy\.pl/wysokie-obcasy/0,176631\.html
)(?:\?(?:[^&#]+?&)*podcast=(?P<id>\d+))?
'''
_TESTS = [{
'url': 'https://wyborcza.pl/podcast/0,172673.html?podcast=100720#S.main_topic-K.C-B.6-L.1.podcast',
'info_dict': {
'id': '100720',
'ext': 'mp3',
'title': 'Cyfrodziewczyny. Kim były pionierki polskiej informatyki ',
'uploader': 'Michał Nogaś ',
'upload_date': '20210117',
'description': 'md5:49f0a06ffc4c1931210d3ab1416a651d',
'duration': 3684.0,
'thumbnail': r're:https://.+\.jpg',
},
}, {
'url': 'https://www.wysokieobcasy.pl/wysokie-obcasy/0,176631.html?podcast=100673',
'info_dict': {
'id': '100673',
'ext': 'mp3',
'title': 'Czym jest ubóstwo menstruacyjne i dlaczego dotyczy każdej i każdego z nas?',
'uploader': 'Agnieszka Urazińska ',
'upload_date': '20210115',
'description': 'md5:c161dc035f8dbb60077011fc41274899',
'duration': 1803.0,
'thumbnail': r're:https://.+\.jpg',
},
}, {
'url': 'https://wyborcza.pl/podcast',
'info_dict': {
'id': '334',
'title': 'Gościnnie: Wyborcza, 8:10',
'series': 'Gościnnie: Wyborcza, 8:10',
},
'playlist_mincount': 370,
}, {
'url': 'https://www.wysokieobcasy.pl/wysokie-obcasy/0,176631.html',
'info_dict': {
'id': '395',
'title': 'Gościnnie: Wysokie Obcasy',
'series': 'Gościnnie: Wysokie Obcasy',
},
'playlist_mincount': 12,
}]
def _real_extract(self, url):
podcast_id = self._match_id(url)
if not podcast_id: # playlist
podcast_id = '395' if 'wysokieobcasy.pl/' in url else '334'
return self.url_result(TokFMAuditionIE._create_url(podcast_id), TokFMAuditionIE, podcast_id)
meta = self._download_json('https://wyborcza.pl/api/podcast', podcast_id,
query={'guid': podcast_id, 'type': 'wo' if 'wysokieobcasy.pl/' in url else None})
day, month, year = self._search_regex(r'^(\d\d?) (\w+) (\d{4})$', meta.get('publishedDate'),
'upload date', group=(1, 2, 3), default=(None, None, None))
return {
'id': podcast_id,
'url': meta['url'],
'title': meta.get('title'),
'description': meta.get('description'),
'thumbnail': meta.get('imageUrl'),
'duration': parse_duration(meta.get('duration')),
'uploader': meta.get('author'),
'upload_date': try_call(lambda: f'{year}{month_by_name(month, lang="pl"):0>2}{day:0>2}'),
}
class TokFMPodcastIE(InfoExtractor):
_VALID_URL = r'(?:https?://audycje\.tokfm\.pl/podcast/|tokfm:podcast:)(?P<id>\d+),?'
IE_NAME = 'tokfm:podcast'
_TESTS = [{
'url': 'https://audycje.tokfm.pl/podcast/91275,-Systemowy-rasizm-Czy-zamieszki-w-USA-po-morderstwie-w-Minneapolis-doprowadza-do-zmian-w-sluzbach-panstwowych',
'info_dict': {
'id': '91275',
'ext': 'aac',
'title': 'md5:a9b15488009065556900169fb8061cce',
'episode': 'md5:a9b15488009065556900169fb8061cce',
'series': 'Analizy',
},
}]
def _real_extract(self, url):
media_id = self._match_id(url)
# in case it breaks see this but it returns a lot of useless data
# https://api.podcast.radioagora.pl/api4/getPodcasts?podcast_id=100091&with_guests=true&with_leaders_for_mobile=true
metadata = self._download_json(
f'https://audycje.tokfm.pl/getp/3{media_id}', media_id, 'Downloading podcast metadata')
if not metadata:
raise ExtractorError('No such podcast', expected=True)
metadata = metadata[0]
formats = []
for ext in ('aac', 'mp3'):
url_data = self._download_json(
f'https://api.podcast.radioagora.pl/api4/getSongUrl?podcast_id={media_id}&device_id={uuid.uuid4()}&ppre=false&audio={ext}',
media_id, 'Downloading podcast %s URL' % ext)
# prevents inserting the mp3 (default) multiple times
if 'link_ssl' in url_data and f'.{ext}' in url_data['link_ssl']:
formats.append({
'url': url_data['link_ssl'],
'ext': ext,
'vcodec': 'none',
'acodec': ext,
})
return {
'id': media_id,
'formats': formats,
'title': metadata.get('podcast_name'),
'series': metadata.get('series_name'),
'episode': metadata.get('podcast_name'),
}
class TokFMAuditionIE(InfoExtractor):
_VALID_URL = r'(?:https?://audycje\.tokfm\.pl/audycja/|tokfm:audition:)(?P<id>\d+),?'
IE_NAME = 'tokfm:audition'
_TESTS = [{
'url': 'https://audycje.tokfm.pl/audycja/218,Analizy',
'info_dict': {
'id': '218',
'title': 'Analizy',
'series': 'Analizy',
},
'playlist_count': 1635,
}]
_PAGE_SIZE = 30
_HEADERS = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 9; Redmi 3S Build/PQ3A.190801.002; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/87.0.4280.101 Mobile Safari/537.36',
}
@staticmethod
def _create_url(id):
return f'https://audycje.tokfm.pl/audycja/{id}'
def _real_extract(self, url):
audition_id = self._match_id(url)
data = self._download_json(
f'https://api.podcast.radioagora.pl/api4/getSeries?series_id={audition_id}',
audition_id, 'Downloading audition metadata', headers=self._HEADERS)
if not data:
raise ExtractorError('No such audition', expected=True)
data = data[0]
entries = OnDemandPagedList(functools.partial(
self._fetch_page, audition_id, data), self._PAGE_SIZE)
return {
'_type': 'playlist',
'id': audition_id,
'title': data.get('series_name'),
'series': data.get('series_name'),
'entries': entries,
}
def _fetch_page(self, audition_id, data, page):
for retry in self.RetryManager():
podcast_page = self._download_json(
f'https://api.podcast.radioagora.pl/api4/getPodcasts?series_id={audition_id}&limit=30&offset={page}&with_guests=true&with_leaders_for_mobile=true',
audition_id, f'Downloading podcast list page {page + 1}', headers=self._HEADERS)
if not podcast_page:
retry.error = ExtractorError('Agora returned empty page', expected=True)
for podcast in podcast_page:
yield {
'_type': 'url_transparent',
'url': podcast['podcast_sharing_url'],
'ie_key': TokFMPodcastIE.ie_key(),
'title': podcast.get('podcast_name'),
'episode': podcast.get('podcast_name'),
'description': podcast.get('podcast_description'),
'timestamp': int_or_none(podcast.get('podcast_timestamp')),
'series': data.get('series_name'),
}
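
# A minimal sketch of how OnDemandPagedList drives _fetch_page lazily: pages
# are numbered from 0 and fetched only when their slice is requested.
from yt_dlp.utils import OnDemandPagedList

pages = OnDemandPagedList(lambda page: iter(range(page * 30, (page + 1) * 30)), 30)
print(pages.getslice(0, 5))  # fetches only page 0 -> [0, 1, 2, 3, 4]
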

yt_dlp/extractor/airtv.py (new file)

@@ -0,0 +1,96 @@
from .common import InfoExtractor
from .youtube import YoutubeIE
from ..utils import (
determine_ext,
int_or_none,
mimetype2ext,
parse_iso8601,
traverse_obj
)
class AirTVIE(InfoExtractor):
_VALID_URL = r'https?://www\.air\.tv/watch\?v=(?P<id>\w+)'
_TESTS = [{
# without youtube_id
'url': 'https://www.air.tv/watch?v=W87jcWleSn2hXZN47zJZsQ',
'info_dict': {
'id': 'W87jcWleSn2hXZN47zJZsQ',
'ext': 'mp4',
'release_date': '20221003',
'release_timestamp': 1664792603,
'channel_id': 'vgfManQlRQKgoFQ8i8peFQ',
'title': 'md5:c12d49ed367c3dadaa67659aff43494c',
'upload_date': '20221003',
'duration': 151,
'view_count': int,
'thumbnail': 'https://cdn-sp-gcs.air.tv/videos/W/8/W87jcWleSn2hXZN47zJZsQ/b13fc56464f47d9d62a36d110b9b5a72-4096x2160_9.jpg',
'timestamp': 1664792603,
}
}, {
# with youtube_id
'url': 'https://www.air.tv/watch?v=sv57EC8tRXG6h8dNXFUU1Q',
'info_dict': {
'id': '2ZTqmpee-bQ',
'ext': 'mp4',
'comment_count': int,
'tags': 'count:11',
'channel_follower_count': int,
'like_count': int,
'uploader': 'Newsflare',
'thumbnail': 'https://i.ytimg.com/vi_webp/2ZTqmpee-bQ/maxresdefault.webp',
'availability': 'public',
'title': 'Geese Chase Alligator Across Golf Course',
'uploader_id': 'NewsflareBreaking',
'channel_url': 'https://www.youtube.com/channel/UCzSSoloGEz10HALUAbYhngQ',
'description': 'md5:99b21d9cea59330149efbd9706e208f5',
'age_limit': 0,
'channel_id': 'UCzSSoloGEz10HALUAbYhngQ',
'uploader_url': 'http://www.youtube.com/user/NewsflareBreaking',
'view_count': int,
'categories': ['News & Politics'],
'live_status': 'not_live',
'playable_in_embed': True,
'channel': 'Newsflare',
'duration': 37,
'upload_date': '20180511',
}
}]
def _get_formats_and_subtitle(self, json_data, video_id):
formats, subtitles = [], {}
for source in traverse_obj(json_data, 'sources', 'sources_desktop', ...):
ext = determine_ext(source.get('src'), mimetype2ext(source.get('type')))
if ext == 'm3u8':
fmts, subs = self._extract_m3u8_formats_and_subtitles(source.get('src'), video_id)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
else:
formats.append({'url': source.get('src'), 'ext': ext})
return formats, subtitles
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
nextjs_json = self._search_nextjs_data(webpage, display_id)['props']['pageProps']['initialState']['videos'][display_id]
if nextjs_json.get('youtube_id'):
return self.url_result(
f'https://www.youtube.com/watch?v={nextjs_json.get("youtube_id")}', YoutubeIE)
formats, subtitles = self._get_formats_and_subtitle(nextjs_json, display_id)
return {
'id': display_id,
'title': nextjs_json.get('title') or self._html_search_meta('og:title', webpage),
'formats': formats,
'subtitles': subtitles,
'description': nextjs_json.get('description') or None,
'duration': int_or_none(nextjs_json.get('duration')),
'thumbnails': [
{'url': thumbnail}
for thumbnail in traverse_obj(nextjs_json, ('default_thumbnails', ...))],
'channel_id': traverse_obj(nextjs_json, 'channel', 'channel_slug'),
'timestamp': parse_iso8601(nextjs_json.get('created')),
'release_timestamp': parse_iso8601(nextjs_json.get('published')),
'view_count': int_or_none(nextjs_json.get('views')),
}
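
# A minimal sketch of the YouTube hand-off above, with a hypothetical slice of
# the Next.js state (['props']['pageProps']['initialState']['videos'][id]):
nextjs_json = {'youtube_id': '2ZTqmpee-bQ'}  # assumed shape; only youtube_id matters here
if nextjs_json.get('youtube_id'):
    # delegated to YoutubeIE, which supplies the richer metadata seen in the second test
    print(f'https://www.youtube.com/watch?v={nextjs_json["youtube_id"]}')
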

@@ -112,8 +112,6 @@ class AllocineIE(InfoExtractor):
         })
         duration, view_count, timestamp = [None] * 3
 
-        self._sort_formats(formats)
-
         return {
             'id': video_id,
             'display_id': display_id,

@@ -22,7 +22,6 @@ class Alsace20TVBaseIE(InfoExtractor):
                 self._extract_smil_formats(fmt_url, video_id, fatal=False)
                 if '/smil:_' in fmt_url
                 else self._extract_mpd_formats(fmt_url, video_id, mpd_id=res, fatal=False))
-        self._sort_formats(formats)
 
         webpage = (url and self._download_webpage(url, video_id, fatal=False)) or ''
         thumbnail = url_or_none(dict_get(info, ('image', 'preview', )) or self._og_search_thumbnail(webpage))

@@ -63,8 +63,6 @@ class AluraIE(InfoExtractor):
                     f['height'] = int('720' if m.group('res') == 'hd' else '480')
                     formats.extend(video_format)
 
-        self._sort_formats(formats)
-
         return {
             'id': video_id,
             'title': video_title,
@@ -113,7 +111,7 @@ class AluraIE(InfoExtractor):
             raise ExtractorError('Unable to log in')
 
 
-class AluraCourseIE(AluraIE):
+class AluraCourseIE(AluraIE):  # XXX: Do not subclass from concrete IE
    _VALID_URL = r'https?://(?:cursos\.)?alura\.com\.br/course/(?P<id>[^/]+)'
    _LOGIN_URL = 'https://cursos.alura.com.br/loginForm?urlAfterLogin=/loginForm'

@@ -1,5 +1,17 @@
+import re
+
 from .common import InfoExtractor
-from ..utils import ExtractorError, int_or_none
+from ..utils import (
+    ExtractorError,
+    clean_html,
+    float_or_none,
+    get_element_by_attribute,
+    get_element_by_class,
+    int_or_none,
+    js_to_json,
+    traverse_obj,
+    url_or_none,
+)
 
 
 class AmazonStoreIE(InfoExtractor):
@@ -9,7 +21,7 @@ class AmazonStoreIE(InfoExtractor):
         'url': 'https://www.amazon.co.uk/dp/B098XNCHLD/',
         'info_dict': {
             'id': 'B098XNCHLD',
-            'title': 'md5:dae240564cbb2642170c02f7f0d7e472',
+            'title': str,
         },
         'playlist_mincount': 1,
         'playlist': [{
@@ -20,28 +32,32 @@ class AmazonStoreIE(InfoExtractor):
                 'thumbnail': r're:^https?://.*\.jpg$',
                 'duration': 34,
             },
-        }]
+        }],
+        'expected_warnings': ['Unable to extract data'],
     }, {
         'url': 'https://www.amazon.in/Sony-WH-1000XM4-Cancelling-Headphones-Bluetooth/dp/B0863TXGM3',
         'info_dict': {
             'id': 'B0863TXGM3',
-            'title': 'md5:d1d3352428f8f015706c84b31e132169',
+            'title': str,
         },
         'playlist_mincount': 4,
+        'expected_warnings': ['Unable to extract data'],
    }, {
         'url': 'https://www.amazon.com/dp/B0845NXCXF/',
         'info_dict': {
             'id': 'B0845NXCXF',
-            'title': 'md5:f3fa12779bf62ddb6a6ec86a360a858e',
+            'title': str,
         },
         'playlist-mincount': 1,
+        'expected_warnings': ['Unable to extract data'],
     }, {
         'url': 'https://www.amazon.es/Samsung-Smartphone-s-AMOLED-Quad-c%C3%A1mara-espa%C3%B1ola/dp/B08WX337PQ',
         'info_dict': {
             'id': 'B08WX337PQ',
-            'title': 'md5:f3fa12779bf62ddb6a6ec86a360a858e',
+            'title': str,
         },
         'playlist_mincount': 1,
+        'expected_warnings': ['Unable to extract data'],
     }]
 
     def _real_extract(self, url):
@@ -52,7 +68,7 @@ class AmazonStoreIE(InfoExtractor):
             try:
                 data_json = self._search_json(
                     r'var\s?obj\s?=\s?jQuery\.parseJSON\(\'', webpage, 'data', id,
-                    transform_source=lambda x: x.replace(R'\\u', R'\u'))
+                    transform_source=js_to_json)
             except ExtractorError as e:
                 retry.error = e
@@ -66,3 +82,89 @@ class AmazonStoreIE(InfoExtractor):
             'width': int_or_none(video.get('videoWidth')),
         } for video in (data_json.get('videos') or []) if video.get('isVideo') and video.get('url')]
         return self.playlist_result(entries, playlist_id=id, playlist_title=data_json.get('title'))
class AmazonReviewsIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?amazon\.(?:[a-z]{2,3})(?:\.[a-z]{2})?/gp/customer-reviews/(?P<id>[^/&#$?]+)'
_TESTS = [{
'url': 'https://www.amazon.com/gp/customer-reviews/R10VE9VUSY19L3/ref=cm_cr_arp_d_rvw_ttl',
'info_dict': {
'id': 'R10VE9VUSY19L3',
'ext': 'mp4',
'title': 'Get squad #Suspicious',
'description': 'md5:7012695052f440a1e064e402d87e0afb',
'uploader': 'Kimberly Cronkright',
'average_rating': 1.0,
'thumbnail': r're:^https?://.*\.jpg$',
},
'expected_warnings': ['Review body was not found in webpage'],
}, {
'url': 'https://www.amazon.com/gp/customer-reviews/R10VE9VUSY19L3/ref=cm_cr_arp_d_rvw_ttl?language=es_US',
'info_dict': {
'id': 'R10VE9VUSY19L3',
'ext': 'mp4',
'title': 'Get squad #Suspicious',
'description': 'md5:7012695052f440a1e064e402d87e0afb',
'uploader': 'Kimberly Cronkright',
'average_rating': 1.0,
'thumbnail': r're:^https?://.*\.jpg$',
},
'expected_warnings': ['Review body was not found in webpage'],
}, {
'url': 'https://www.amazon.in/gp/customer-reviews/RV1CO8JN5VGXV/',
'info_dict': {
'id': 'RV1CO8JN5VGXV',
'ext': 'mp4',
'title': 'Not sure about its durability',
'description': 'md5:1a252c106357f0a3109ebf37d2e87494',
'uploader': 'Shoaib Gulzar',
'average_rating': 2.0,
'thumbnail': r're:^https?://.*\.jpg$',
},
'expected_warnings': ['Review body was not found in webpage'],
}]
def _real_extract(self, url):
video_id = self._match_id(url)
for retry in self.RetryManager():
webpage = self._download_webpage(url, video_id)
review_body = get_element_by_attribute('data-hook', 'review-body', webpage)
if not review_body:
retry.error = ExtractorError('Review body was not found in webpage', expected=True)
formats, subtitles = [], {}
manifest_url = self._search_regex(
r'data-video-url="([^"]+)"', review_body, 'm3u8 url', default=None)
if url_or_none(manifest_url):
fmts, subtitles = self._extract_m3u8_formats_and_subtitles(
manifest_url, video_id, 'mp4', fatal=False)
formats.extend(fmts)
video_url = self._search_regex(
r'<input[^>]+\bvalue="([^"]+)"[^>]+\bclass="video-url"', review_body, 'mp4 url', default=None)
if url_or_none(video_url):
formats.append({
'url': video_url,
'ext': 'mp4',
'format_id': 'http-mp4',
})
if not formats:
self.raise_no_formats('No video found for this customer review', expected=True)
return {
'id': video_id,
'title': (clean_html(get_element_by_attribute('data-hook', 'review-title', webpage))
or self._html_extract_title(webpage)),
'description': clean_html(traverse_obj(re.findall(
r'<span(?:\s+class="cr-original-review-content")?>(.+?)</span>', review_body), -1)),
'uploader': clean_html(get_element_by_class('a-profile-name', webpage)),
'average_rating': float_or_none(clean_html(get_element_by_attribute(
'data-hook', 'review-star-rating', webpage) or '').partition(' ')[0]),
'thumbnail': self._search_regex(
r'data-thumbnail-url="([^"]+)"', review_body, 'thumbnail', default=None),
'formats': formats,
'subtitles': subtitles,
}
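
# How average_rating above is derived, assuming the review-star-rating element
# text reads like '1.0 out of 5 stars' (the exact wording is an assumption):
from yt_dlp.utils import float_or_none

print(float_or_none('1.0 out of 5 stars'.partition(' ')[0]))  # -> 1.0
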


@@ -0,0 +1,290 @@
import json
from .common import InfoExtractor
from ..utils import ExtractorError, int_or_none, traverse_obj, try_get
class AmazonMiniTVBaseIE(InfoExtractor):
def _real_initialize(self):
self._download_webpage(
'https://www.amazon.in/minitv', None,
note='Fetching guest session cookies')
AmazonMiniTVBaseIE.session_id = self._get_cookies('https://www.amazon.in')['session-id'].value
def _call_api(self, asin, data=None, note=None):
device = {'clientId': 'ATVIN', 'deviceLocale': 'en_GB'}
if data:
data['variables'].update({
'contentType': 'VOD',
'sessionIdToken': self.session_id,
**device,
})
resp = self._download_json(
f'https://www.amazon.in/minitv/api/web/{"graphql" if data else "prs"}',
asin, note=note, headers={'Content-Type': 'application/json'},
data=json.dumps(data).encode() if data else None,
query=None if data else {
'deviceType': 'A1WMMUXPCUJL4N',
'contentId': asin,
**device,
})
if resp.get('errors'):
raise ExtractorError(f'MiniTV said: {resp["errors"][0]["message"]}')
elif not data:
return resp
return resp['data'][data['operationName']]
class AmazonMiniTVIE(AmazonMiniTVBaseIE):
_VALID_URL = r'(?:https?://(?:www\.)?amazon\.in/minitv/tp/|amazonminitv:(?:amzn1\.dv\.gti\.)?)(?P<id>[a-f0-9-]+)'
_TESTS = [{
'url': 'https://www.amazon.in/minitv/tp/75fe3a75-b8fe-4499-8100-5c9424344840?referrer=https%3A%2F%2Fwww.amazon.in%2Fminitv',
'info_dict': {
'id': 'amzn1.dv.gti.75fe3a75-b8fe-4499-8100-5c9424344840',
'ext': 'mp4',
'title': 'May I Kiss You?',
'language': 'Hindi',
'thumbnail': r're:^https?://.*\.jpg$',
'description': 'md5:a549bfc747973e04feb707833474e59d',
'release_timestamp': 1644710400,
'release_date': '20220213',
'duration': 846,
'chapters': 'count:2',
'series': 'Couple Goals',
'series_id': 'amzn1.dv.gti.56521d46-b040-4fd5-872e-3e70476a04b0',
'season': 'Season 3',
'season_number': 3,
'season_id': 'amzn1.dv.gti.20331016-d9b9-4968-b991-c89fa4927a36',
'episode': 'May I Kiss You?',
'episode_number': 2,
'episode_id': 'amzn1.dv.gti.75fe3a75-b8fe-4499-8100-5c9424344840',
},
}, {
'url': 'https://www.amazon.in/minitv/tp/280d2564-584f-452f-9c98-7baf906e01ab?referrer=https%3A%2F%2Fwww.amazon.in%2Fminitv',
'info_dict': {
'id': 'amzn1.dv.gti.280d2564-584f-452f-9c98-7baf906e01ab',
'ext': 'mp4',
'title': 'Jahaan',
'language': 'Hindi',
'thumbnail': r're:^https?://.*\.jpg',
'description': 'md5:05eb765a77bf703f322f120ec6867339',
'release_timestamp': 1647475200,
'release_date': '20220317',
'duration': 783,
'chapters': [],
},
}, {
'url': 'https://www.amazon.in/minitv/tp/280d2564-584f-452f-9c98-7baf906e01ab',
'only_matching': True,
}, {
'url': 'amazonminitv:amzn1.dv.gti.280d2564-584f-452f-9c98-7baf906e01ab',
'only_matching': True,
}, {
'url': 'amazonminitv:280d2564-584f-452f-9c98-7baf906e01ab',
'only_matching': True,
}]
_GRAPHQL_QUERY_CONTENT = '''
query content($sessionIdToken: String!, $deviceLocale: String, $contentId: ID!, $contentType: ContentType!, $clientId: String) {
content(
applicationContextInput: {deviceLocale: $deviceLocale, sessionIdToken: $sessionIdToken, clientId: $clientId}
contentId: $contentId
contentType: $contentType
) {
contentId
name
... on Episode {
contentId
vodType
name
images
description {
synopsis
contentLengthInSeconds
}
publicReleaseDateUTC
audioTracks
seasonId
seriesId
seriesName
seasonNumber
episodeNumber
timecode {
endCreditsTime
}
}
... on MovieContent {
contentId
vodType
name
description {
synopsis
contentLengthInSeconds
}
images
publicReleaseDateUTC
audioTracks
}
}
}'''
def _real_extract(self, url):
asin = f'amzn1.dv.gti.{self._match_id(url)}'
prs = self._call_api(asin, note='Downloading playback info')
formats, subtitles = [], {}
for type_, asset in prs['playbackAssets'].items():
if not traverse_obj(asset, 'manifestUrl'):
continue
if type_ == 'hls':
m3u8_fmts, m3u8_subs = self._extract_m3u8_formats_and_subtitles(
asset['manifestUrl'], asin, ext='mp4', entry_protocol='m3u8_native',
m3u8_id=type_, fatal=False)
formats.extend(m3u8_fmts)
subtitles = self._merge_subtitles(subtitles, m3u8_subs)
elif type_ == 'dash':
mpd_fmts, mpd_subs = self._extract_mpd_formats_and_subtitles(
asset['manifestUrl'], asin, mpd_id=type_, fatal=False)
formats.extend(mpd_fmts)
subtitles = self._merge_subtitles(subtitles, mpd_subs)
else:
self.report_warning(f'Unknown asset type: {type_}')
title_info = self._call_api(
asin, note='Downloading title info', data={
'operationName': 'content',
'variables': {'contentId': asin},
'query': self._GRAPHQL_QUERY_CONTENT,
})
credits_time = try_get(title_info, lambda x: x['timecode']['endCreditsTime'] / 1000)
is_episode = title_info.get('vodType') == 'EPISODE'
return {
'id': asin,
'title': title_info.get('name'),
'formats': formats,
'subtitles': subtitles,
'language': traverse_obj(title_info, ('audioTracks', 0)),
'thumbnails': [{
'id': type_,
'url': url,
} for type_, url in (title_info.get('images') or {}).items()],
'description': traverse_obj(title_info, ('description', 'synopsis')),
'release_timestamp': int_or_none(try_get(title_info, lambda x: x['publicReleaseDateUTC'] / 1000)),
'duration': traverse_obj(title_info, ('description', 'contentLengthInSeconds')),
'chapters': [{
'start_time': credits_time,
'title': 'End Credits',
}] if credits_time else [],
'series': title_info.get('seriesName'),
'series_id': title_info.get('seriesId'),
'season_number': title_info.get('seasonNumber'),
'season_id': title_info.get('seasonId'),
'episode': title_info.get('name') if is_episode else None,
'episode_number': title_info.get('episodeNumber'),
'episode_id': asin if is_episode else None,
}
class AmazonMiniTVSeasonIE(AmazonMiniTVBaseIE):
IE_NAME = 'amazonminitv:season'
_VALID_URL = r'amazonminitv:season:(?:amzn1\.dv\.gti\.)?(?P<id>[a-f0-9-]+)'
IE_DESC = 'Amazon MiniTV Series, "minitv:season:" prefix'
_TESTS = [{
'url': 'amazonminitv:season:amzn1.dv.gti.0aa996eb-6a1b-4886-a342-387fbd2f1db0',
'playlist_mincount': 6,
'info_dict': {
'id': 'amzn1.dv.gti.0aa996eb-6a1b-4886-a342-387fbd2f1db0',
},
}, {
'url': 'amazonminitv:season:0aa996eb-6a1b-4886-a342-387fbd2f1db0',
'only_matching': True,
}]
_GRAPHQL_QUERY = '''
query getEpisodes($sessionIdToken: String!, $clientId: String, $episodeOrSeasonId: ID!, $deviceLocale: String) {
getEpisodes(
applicationContextInput: {sessionIdToken: $sessionIdToken, deviceLocale: $deviceLocale, clientId: $clientId}
episodeOrSeasonId: $episodeOrSeasonId
) {
episodes {
... on Episode {
contentId
name
images
seriesName
seasonId
seriesId
seasonNumber
episodeNumber
description {
synopsis
contentLengthInSeconds
}
publicReleaseDateUTC
}
}
}
}
'''
def _entries(self, asin):
season_info = self._call_api(
asin, note='Downloading season info', data={
'operationName': 'getEpisodes',
'variables': {'episodeOrSeasonId': asin},
'query': self._GRAPHQL_QUERY,
})
for episode in season_info['episodes']:
yield self.url_result(
f'amazonminitv:{episode["contentId"]}', AmazonMiniTVIE, episode['contentId'])
def _real_extract(self, url):
asin = f'amzn1.dv.gti.{self._match_id(url)}'
return self.playlist_result(self._entries(asin), asin)
class AmazonMiniTVSeriesIE(AmazonMiniTVBaseIE):
IE_NAME = 'amazonminitv:series'
_VALID_URL = r'amazonminitv:series:(?:amzn1\.dv\.gti\.)?(?P<id>[a-f0-9-]+)'
_TESTS = [{
'url': 'amazonminitv:series:amzn1.dv.gti.56521d46-b040-4fd5-872e-3e70476a04b0',
'playlist_mincount': 3,
'info_dict': {
'id': 'amzn1.dv.gti.56521d46-b040-4fd5-872e-3e70476a04b0',
},
}, {
'url': 'amazonminitv:series:56521d46-b040-4fd5-872e-3e70476a04b0',
'only_matching': True,
}]
_GRAPHQL_QUERY = '''
query getSeasons($sessionIdToken: String!, $deviceLocale: String, $episodeOrSeasonOrSeriesId: ID!, $clientId: String) {
getSeasons(
applicationContextInput: {deviceLocale: $deviceLocale, sessionIdToken: $sessionIdToken, clientId: $clientId}
episodeOrSeasonOrSeriesId: $episodeOrSeasonOrSeriesId
) {
seasons {
seasonId
}
}
}
'''
def _entries(self, asin):
season_info = self._call_api(
asin, note='Downloading series info', data={
'operationName': 'getSeasons',
'variables': {'episodeOrSeasonOrSeriesId': asin},
'query': self._GRAPHQL_QUERY,
})
for season in season_info['seasons']:
yield self.url_result(f'amazonminitv:season:{season["seasonId"]}', AmazonMiniTVSeasonIE, season['seasonId'])
def _real_extract(self, url):
asin = f'amzn1.dv.gti.{self._match_id(url)}'
return self.playlist_result(self._entries(asin), asin)
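
# A minimal sketch of the ID normalization shared by the extractors above: the
# 'amzn1.dv.gti.' prefix is optional in _VALID_URL and re-added in
# _real_extract, so both spellings yield the same asin:
import re

_VALID_URL = r'amazonminitv:series:(?:amzn1\.dv\.gti\.)?(?P<id>[a-f0-9-]+)'
for u in ('amazonminitv:series:amzn1.dv.gti.56521d46-b040-4fd5-872e-3e70476a04b0',
          'amazonminitv:series:56521d46-b040-4fd5-872e-3e70476a04b0'):
    print('amzn1.dv.gti.' + re.match(_VALID_URL, u)['id'])  # identical asin both times
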

@@ -9,7 +9,7 @@ from ..utils import (
 )
 
 
-class AMCNetworksIE(ThePlatformIE):
+class AMCNetworksIE(ThePlatformIE):  # XXX: Do not subclass from concrete IE
     _VALID_URL = r'https?://(?:www\.)?(?P<site>amc|bbcamerica|ifc|(?:we|sundance)tv)\.com/(?P<id>(?:movies|shows(?:/[^/]+)+)/[^/?#&]+)'
     _TESTS = [{
         'url': 'https://www.bbcamerica.com/shows/the-graham-norton-show/videos/tina-feys-adorable-airline-themed-family-dinner--51631',
@@ -106,7 +106,6 @@ class AMCNetworksIE(ThePlatformIE):
         media_url = update_url_query(media_url, query)
         formats, subtitles = self._extract_theplatform_smil(
             media_url, video_id)
-        self._sort_formats(formats)
 
         thumbnails = []
         thumbnail_urls = [properties.get('imageDesktop')]

@@ -11,7 +11,7 @@ from ..utils import (
 class AmericasTestKitchenIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?(?:americastestkitchen|cooks(?:country|illustrated))\.com/(?P<resource_type>episode|videos)/(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?americastestkitchen\.com/(?:cooks(?:country|illustrated)/)?(?P<resource_type>episode|videos)/(?P<id>\d+)'
     _TESTS = [{
         'url': 'https://www.americastestkitchen.com/episode/582-weeknight-japanese-suppers',
         'md5': 'b861c3e365ac38ad319cfd509c30577f',
@@ -19,15 +19,20 @@ class AmericasTestKitchenIE(InfoExtractor):
             'id': '5b400b9ee338f922cb06450c',
             'title': 'Japanese Suppers',
             'ext': 'mp4',
+            'display_id': 'weeknight-japanese-suppers',
             'description': 'md5:64e606bfee910627efc4b5f050de92b3',
-            'thumbnail': r're:^https?://',
-            'timestamp': 1523318400,
-            'upload_date': '20180410',
-            'release_date': '20180410',
-            'series': "America's Test Kitchen",
+            'timestamp': 1523304000,
+            'upload_date': '20180409',
+            'release_date': '20180409',
+            'series': 'America\'s Test Kitchen',
+            'season': 'Season 18',
+            'season_number': 18,
             'episode': 'Japanese Suppers',
-            'season_number': 18,
             'episode_number': 15,
+            'duration': 1376,
+            'thumbnail': r're:^https?://',
+            'average_rating': 0,
+            'view_count': int,
         },
         'params': {
             'skip_download': True,
@@ -40,15 +45,20 @@ class AmericasTestKitchenIE(InfoExtractor):
             'id': '5fbe8c61bda2010001c6763b',
             'title': 'Simple Chicken Dinner',
             'ext': 'mp4',
+            'display_id': 'atktv_2103_simple-chicken-dinner_full-episode_web-mp4',
             'description': 'md5:eb68737cc2fd4c26ca7db30139d109e7',
-            'thumbnail': r're:^https?://',
-            'timestamp': 1610755200,
-            'upload_date': '20210116',
-            'release_date': '20210116',
-            'series': "America's Test Kitchen",
+            'timestamp': 1610737200,
+            'upload_date': '20210115',
+            'release_date': '20210115',
+            'series': 'America\'s Test Kitchen',
+            'season': 'Season 21',
+            'season_number': 21,
             'episode': 'Simple Chicken Dinner',
-            'season_number': 21,
             'episode_number': 3,
+            'duration': 1397,
+            'thumbnail': r're:^https?://',
+            'view_count': int,
+            'average_rating': 0,
         },
         'params': {
             'skip_download': True,
@@ -57,10 +67,10 @@ class AmericasTestKitchenIE(InfoExtractor):
         'url': 'https://www.americastestkitchen.com/videos/3420-pan-seared-salmon',
         'only_matching': True,
     }, {
-        'url': 'https://www.cookscountry.com/episode/564-when-only-chocolate-will-do',
+        'url': 'https://www.americastestkitchen.com/cookscountry/episode/564-when-only-chocolate-will-do',
         'only_matching': True,
     }, {
-        'url': 'https://www.cooksillustrated.com/videos/4478-beef-wellington',
+        'url': 'https://www.americastestkitchen.com/cooksillustrated/videos/4478-beef-wellington',
         'only_matching': True,
     }]
@@ -90,7 +100,7 @@ class AmericasTestKitchenIE(InfoExtractor):
 class AmericasTestKitchenSeasonIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?(?P<show>americastestkitchen|cookscountry)\.com/episodes/browse/season_(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?americastestkitchen\.com(?P<show>/cookscountry)?/episodes/browse/season_(?P<id>\d+)'
     _TESTS = [{
         # ATK Season
         'url': 'https://www.americastestkitchen.com/episodes/browse/season_1',
@@ -101,7 +111,7 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
         'playlist_count': 13,
     }, {
         # Cooks Country Season
-        'url': 'https://www.cookscountry.com/episodes/browse/season_12',
+        'url': 'https://www.americastestkitchen.com/cookscountry/episodes/browse/season_12',
         'info_dict': {
             'id': 'season_12',
             'title': 'Season 12',
@@ -110,17 +120,17 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
     }]
 
     def _real_extract(self, url):
-        show_name, season_number = self._match_valid_url(url).groups()
+        show_path, season_number = self._match_valid_url(url).group('show', 'id')
         season_number = int(season_number)
 
-        slug = 'atk' if show_name == 'americastestkitchen' else 'cco'
+        slug = 'cco' if show_path == '/cookscountry' else 'atk'
 
         season = 'Season %d' % season_number
 
         season_search = self._download_json(
             'https://y1fnzxui30-dsn.algolia.net/1/indexes/everest_search_%s_season_desc_production' % slug,
             season, headers={
-                'Origin': 'https://www.%s.com' % show_name,
+                'Origin': 'https://www.americastestkitchen.com',
                 'X-Algolia-API-Key': '8d504d0099ed27c1b73708d22871d805',
                 'X-Algolia-Application-Id': 'Y1FNZXUI30',
             }, query={
@@ -136,12 +146,12 @@ class AmericasTestKitchenSeasonIE(InfoExtractor):
         def entries():
             for episode in (season_search.get('hits') or []):
-                search_url = episode.get('search_url')
+                search_url = episode.get('search_url')  # always formatted like '/episode/123-title-of-episode'
                 if not search_url:
                     continue
                 yield {
                     '_type': 'url',
-                    'url': 'https://www.%s.com%s' % (show_name, search_url),
+                    'url': f'https://www.americastestkitchen.com{show_path or ""}{search_url}',
                     'id': try_get(episode, lambda e: e['objectID'].split('_')[-1]),
                     'title': episode.get('title'),
                     'description': episode.get('description'),

@@ -10,7 +10,7 @@ from ..utils import (
 )
 
 
-class AMPIE(InfoExtractor):
+class AMPIE(InfoExtractor):  # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor
     # parse Akamai Adaptive Media Player feed
     def _extract_feed_info(self, url):
         feed = self._download_json(
@@ -84,8 +84,6 @@ class AMPIE(InfoExtractor):
                     'ext': ext,
                 })
 
-        self._sort_formats(formats)
-
         timestamp = unified_timestamp(item.get('pubDate'), ' ') or parse_iso8601(item.get('dc-date'))
 
         return {

@@ -19,7 +19,6 @@ class Ant1NewsGrBaseIE(InfoExtractor):
             raise ExtractorError('no source found for %s' % video_id)
         formats, subs = (self._extract_m3u8_formats_and_subtitles(source, video_id, 'mp4')
                         if determine_ext(source) == 'm3u8' else ([{'url': source}], {}))
-        self._sort_formats(formats)
         thumbnails = scale_thumbnails_to_max_format_width(
             formats, [{'url': info['thumb']}], r'(?<=/imgHandler/)\d+')
         return {

@@ -354,8 +354,6 @@ class AnvatoIE(InfoExtractor):
                 })
             formats.append(a_format)
 
-        self._sort_formats(formats)
-
         subtitles = {}
         for caption in video_data.get('captions', []):
             a_caption = {

@@ -9,7 +9,7 @@ from ..utils import (
 )
 
 
-class AolIE(YahooIE):
+class AolIE(YahooIE):  # XXX: Do not subclass from concrete IE
     IE_NAME = 'aol.com'
     _VALID_URL = r'(?:aol-video:|https?://(?:www\.)?aol\.(?:com|ca|co\.uk|de|jp)/video/(?:[^/]+/)*)(?P<id>\d{9}|[0-9a-f]{24}|[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12})'
@@ -119,7 +119,6 @@ class AolIE(YahooIE):
                     'height': int_or_none(qs.get('h', [None])[0]),
                 })
             formats.append(f)
-        self._sort_formats(formats)
 
         return {
             'id': video_id,

@@ -72,7 +72,6 @@ class APAIE(InfoExtractor):
                 'format_id': format_id,
                 'height': height,
             })
-        self._sort_formats(formats)
 
         return {
             'id': video_id,

@@ -73,7 +73,6 @@ class AparatIE(InfoExtractor):
                     r'(\d+)[pP]', label or '', 'height',
                     default=None)),
             })
-        self._sort_formats(formats)
 
         info = self._search_json_ld(webpage, video_id, default={})

@@ -120,7 +120,6 @@ class AppleTrailersIE(InfoExtractor):
                         'height': int_or_none(size_data.get('height')),
                         'language': version[:2],
                     })
-                self._sort_formats(formats)
 
                 entries.append({
                     'id': movie + '-' + re.sub(r'[^a-zA-Z0-9]', '', clip_title).lower(),
@@ -185,8 +184,6 @@ class AppleTrailersIE(InfoExtractor):
                     'height': int_or_none(format['height']),
                 })
 
-            self._sort_formats(formats)
-
             playlist.append({
                 '_type': 'video',
                 'id': video_id,

@@ -16,6 +16,7 @@ from ..utils import (
     get_element_by_id,
     int_or_none,
     join_nonempty,
+    js_to_json,
     merge_dicts,
     mimetype2ext,
     orderedSet,
@@ -311,7 +312,7 @@ class ArchiveOrgIE(InfoExtractor):
             })
 
         for entry in entries.values():
-            self._sort_formats(entry['formats'], ('source', ))
+            entry['_format_sort_fields'] = ('source', )
 
         if len(entries) == 1:
             # If there's only one item, use it as the main info dict
@@ -367,7 +368,9 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'channel_id': 'UCukCyHaD-bK3in_pKpfH9Eg',
             'duration': 32,
             'uploader_id': 'Zeurel',
-            'uploader_url': 'http://www.youtube.com/user/Zeurel'
+            'uploader_url': 'https://www.youtube.com/user/Zeurel',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
+            'channel_url': 'https://www.youtube.com/channel/UCukCyHaD-bK3in_pKpfH9Eg',
         }
     }, {
         # Internal link
@@ -382,7 +385,9 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'channel_id': 'UCHnyfMqiRRG1u-2MsSQLbXA',
             'duration': 771,
             'uploader_id': '1veritasium',
-            'uploader_url': 'http://www.youtube.com/user/1veritasium'
+            'uploader_url': 'https://www.youtube.com/user/1veritasium',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
+            'channel_url': 'https://www.youtube.com/channel/UCHnyfMqiRRG1u-2MsSQLbXA',
         }
     }, {
         # Video from 2012, webm format itag 45. Newest capture is deleted video, with an invalid description.
@@ -396,7 +401,9 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'duration': 398,
             'description': 'md5:ff4de6a7980cb65d951c2f6966a4f2f3',
             'uploader_id': 'machinima',
-            'uploader_url': 'http://www.youtube.com/user/machinima'
+            'uploader_url': 'https://www.youtube.com/user/machinima',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
+            'uploader': 'machinima'
         }
     }, {
         # FLV video. Video file URL does not provide itag information
@@ -410,7 +417,10 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'duration': 19,
             'description': 'md5:10436b12e07ac43ff8df65287a56efb4',
             'uploader_id': 'jawed',
-            'uploader_url': 'http://www.youtube.com/user/jawed'
+            'uploader_url': 'https://www.youtube.com/user/jawed',
+            'channel_url': 'https://www.youtube.com/channel/UC4QobU6STFB0P71PMvOGN5A',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
+            'uploader': 'jawed',
         }
     }, {
         'url': 'https://web.archive.org/web/20110712231407/http://www.youtube.com/watch?v=lTx3G6h2xyA',
@@ -424,7 +434,9 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'duration': 204,
             'description': 'md5:f7535343b6eda34a314eff8b85444680',
             'uploader_id': 'itsmadeon',
-            'uploader_url': 'http://www.youtube.com/user/itsmadeon'
+            'uploader_url': 'https://www.youtube.com/user/itsmadeon',
+            'channel_url': 'https://www.youtube.com/channel/UCqMDNf3Pn5L7pcNkuSEeO3w',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
         }
     }, {
         # First capture is of dead video, second is the oldest from CDX response.
@@ -435,10 +447,13 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'title': 'Fake Teen Doctor Strikes AGAIN! - Weekly Weird News',
             'upload_date': '20160218',
             'channel_id': 'UCdIaNUarhzLSXGoItz7BHVA',
-            'duration': 1236,
+            'duration': 1235,
             'description': 'md5:21032bae736421e89c2edf36d1936947',
             'uploader_id': 'MachinimaETC',
-            'uploader_url': 'http://www.youtube.com/user/MachinimaETC'
+            'uploader_url': 'https://www.youtube.com/user/MachinimaETC',
+            'channel_url': 'https://www.youtube.com/channel/UCdIaNUarhzLSXGoItz7BHVA',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
+            'uploader': 'ETC News',
        }
     }, {
         # First capture of dead video, capture date in link links to dead capture.
@@ -449,10 +464,13 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'title': 'WTF: Video Games Still Launch BROKEN?! - T.U.G.S.',
             'upload_date': '20160219',
             'channel_id': 'UCdIaNUarhzLSXGoItz7BHVA',
-            'duration': 798,
+            'duration': 797,
             'description': 'md5:a1dbf12d9a3bd7cb4c5e33b27d77ffe7',
             'uploader_id': 'MachinimaETC',
-            'uploader_url': 'http://www.youtube.com/user/MachinimaETC'
+            'uploader_url': 'https://www.youtube.com/user/MachinimaETC',
+            'channel_url': 'https://www.youtube.com/channel/UCdIaNUarhzLSXGoItz7BHVA',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
+            'uploader': 'ETC News',
         },
         'expected_warnings': [
             r'unable to download capture webpage \(it may not be archived\)'
@@ -472,12 +490,11 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'title': 'It\'s Bootleg AirPods Time.',
             'upload_date': '20211021',
             'channel_id': 'UC7Jwj9fkrf1adN4fMmTkpug',
-            'channel_url': 'http://www.youtube.com/channel/UC7Jwj9fkrf1adN4fMmTkpug',
+            'channel_url': 'https://www.youtube.com/channel/UC7Jwj9fkrf1adN4fMmTkpug',
             'duration': 810,
             'description': 'md5:7b567f898d8237b256f36c1a07d6d7bc',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
             'uploader': 'DankPods',
-            'uploader_id': 'UC7Jwj9fkrf1adN4fMmTkpug',
-            'uploader_url': 'http://www.youtube.com/channel/UC7Jwj9fkrf1adN4fMmTkpug'
         }
     }, {
         # player response contains '};' See: https://github.com/ytdl-org/youtube-dl/issues/27093
@@ -488,12 +505,135 @@ class YoutubeWebArchiveIE(InfoExtractor):
             'title': 'bitch lasagna',
             'upload_date': '20181005',
             'channel_id': 'UC-lHJZR3Gqxm24_Vd_AJ5Yw',
-            'channel_url': 'http://www.youtube.com/channel/UC-lHJZR3Gqxm24_Vd_AJ5Yw',
+            'channel_url': 'https://www.youtube.com/channel/UC-lHJZR3Gqxm24_Vd_AJ5Yw',
             'duration': 135,
             'description': 'md5:2dbe4051feeff2dab5f41f82bb6d11d0',
             'uploader': 'PewDiePie',
             'uploader_id': 'PewDiePie',
-            'uploader_url': 'http://www.youtube.com/user/PewDiePie'
+            'uploader_url': 'https://www.youtube.com/user/PewDiePie',
+            'thumbnail': r're:https?://.*\.(jpg|webp)',
}
}, {
# ~June 2010 Capture. swfconfig
'url': 'https://web.archive.org/web/0/https://www.youtube.com/watch?v=8XeW5ilk-9Y',
'info_dict': {
'id': '8XeW5ilk-9Y',
'ext': 'flv',
'title': 'Story of Stuff, The Critique Part 4 of 4',
'duration': 541,
'description': 'md5:28157da06f2c5e94c97f7f3072509972',
'uploader': 'HowTheWorldWorks',
'uploader_id': 'HowTheWorldWorks',
'thumbnail': r're:https?://.*\.(jpg|webp)',
'uploader_url': 'https://www.youtube.com/user/HowTheWorldWorks',
'upload_date': '20090520',
}
}, {
# Jan 2011: watch-video-date/eow-date surrounded by whitespace
'url': 'https://web.archive.org/web/20110126141719/http://www.youtube.com/watch?v=Q_yjX80U7Yc',
'info_dict': {
'id': 'Q_yjX80U7Yc',
'ext': 'flv',
'title': 'Spray Paint Art by Clay Butler: Purple Fantasy Forest',
'uploader_id': 'claybutlermusic',
'description': 'md5:4595264559e3d0a0ceb3f011f6334543',
'upload_date': '20090803',
'uploader': 'claybutlermusic',
'thumbnail': r're:https?://.*\.(jpg|webp)',
'duration': 132,
'uploader_url': 'https://www.youtube.com/user/claybutlermusic',
}
}, {
# ~May 2009 swfArgs. ytcfg is spread out over various vars
'url': 'https://web.archive.org/web/0/https://www.youtube.com/watch?v=c5uJgG05xUY',
'info_dict': {
'id': 'c5uJgG05xUY',
'ext': 'webm',
'title': 'Story of Stuff, The Critique Part 1 of 4',
'uploader_id': 'HowTheWorldWorks',
'uploader': 'HowTheWorldWorks',
'uploader_url': 'https://www.youtube.com/user/HowTheWorldWorks',
'upload_date': '20090513',
'description': 'md5:4ca77d79538064e41e4cc464e93f44f0',
'thumbnail': r're:https?://.*\.(jpg|webp)',
'duration': 754,
}
}, {
# ~June 2012. Upload date is in another language, so it cannot be extracted.
'url': 'https://web.archive.org/web/20120607174520/http://www.youtube.com/watch?v=xWTLLl-dQaA',
'info_dict': {
'id': 'xWTLLl-dQaA',
'ext': 'mp4',
'title': 'Black Nerd eHarmony Video Bio Parody (SPOOF)',
'uploader_url': 'https://www.youtube.com/user/BlackNerdComedy',
'description': 'md5:e25f0133aaf9e6793fb81c18021d193e',
'uploader_id': 'BlackNerdComedy',
'uploader': 'BlackNerdComedy',
'duration': 182,
'thumbnail': r're:https?://.*\.(jpg|webp)',
}
}, {
# ~July 2013
'url': 'https://web.archive.org/web/*/https://www.youtube.com/watch?v=9eO1aasHyTM',
'info_dict': {
'id': '9eO1aasHyTM',
'ext': 'mp4',
'title': 'Polar-oid',
'description': 'Cameras and bears are dangerous!',
'uploader_url': 'https://www.youtube.com/user/punkybird',
'uploader_id': 'punkybird',
'duration': 202,
'channel_id': 'UC62R2cBezNBOqxSerfb1nMQ',
'channel_url': 'https://www.youtube.com/channel/UC62R2cBezNBOqxSerfb1nMQ',
'upload_date': '20060428',
'uploader': 'punkybird',
}
}, {
# April 2020: Player response in player config
'url': 'https://web.archive.org/web/20200416034815/https://www.youtube.com/watch?v=Cf7vS8jc7dY&gl=US&hl=en',
'info_dict': {
'id': 'Cf7vS8jc7dY',
'ext': 'mp4',
'title': 'A Dramatic Pool Story (by Jamie Spicer-Lewis) - Game Grumps Animated',
'duration': 64,
'upload_date': '20200408',
'uploader_id': 'GameGrumps',
'uploader': 'GameGrumps',
'channel_url': 'https://www.youtube.com/channel/UC9CuvdOVfMPvKCiwdGKL3cQ',
'channel_id': 'UC9CuvdOVfMPvKCiwdGKL3cQ',
'thumbnail': r're:https?://.*\.(jpg|webp)',
'description': 'md5:c625bb3c02c4f5fb4205971e468fa341',
'uploader_url': 'https://www.youtube.com/user/GameGrumps',
}
}, {
# watch7-user-header with yt-user-info
'url': 'ytarchive:kbh4T_b4Ixw:20160307085057',
'info_dict': {
'id': 'kbh4T_b4Ixw',
'ext': 'mp4',
'title': 'Shovel Knight OST - Strike the Earth! Plains of Passage 16 bit SNES style remake / remix',
'channel_url': 'https://www.youtube.com/channel/UCnTaGvsHmMy792DWeT6HbGA',
'uploader': 'Nelward music',
'duration': 213,
'description': 'md5:804b4a9ce37b050a5fefdbb23aeba54d',
'thumbnail': r're:https?://.*\.(jpg|webp)',
'upload_date': '20150503',
'channel_id': 'UCnTaGvsHmMy792DWeT6HbGA',
}
}, {
# April 2012
'url': 'https://web.archive.org/web/0/https://www.youtube.com/watch?v=SOm7mPoPskU',
'info_dict': {
'id': 'SOm7mPoPskU',
'ext': 'mp4',
'title': 'Boyfriend - Justin Bieber Parody',
'uploader_url': 'https://www.youtube.com/user/thecomputernerd01',
'uploader': 'thecomputernerd01',
'thumbnail': r're:https?://.*\.(jpg|webp)',
'description': 'md5:dd7fa635519c2a5b4d566beaecad7491',
'duration': 200,
'upload_date': '20120407',
'uploader_id': 'thecomputernerd01',
        }
    }, {
        'url': 'https://web.archive.org/web/http://www.youtube.com/watch?v=kH-G_aIBlFw',
@@ -574,6 +714,27 @@ class YoutubeWebArchiveIE(InfoExtractor):
         initial_data = self._search_json(
             self._YT_INITIAL_DATA_RE, webpage, 'initial data', video_id, default={})
 
+        ytcfg = {}
+        for j in re.findall(r'yt\.setConfig\(\s*(?P<json>{\s*(?s:.+?)\s*})\s*\);', webpage):  # ~June 2010
+            ytcfg.update(self._parse_json(j, video_id, fatal=False, ignore_extra=True, transform_source=js_to_json, errnote='') or {})
+
+        # XXX: this also may contain a 'ptchn' key
+        player_config = (
+            self._search_json(
+                r'(?:yt\.playerConfig|ytplayer\.config|swfConfig)\s*=',
+                webpage, 'player config', video_id, default=None)
+            or ytcfg.get('PLAYER_CONFIG') or {})
+
+        # XXX: this may also contain a 'creator' key.
+        swf_args = self._search_json(r'swfArgs\s*=', webpage, 'swf config', video_id, default={})
+        if swf_args and not traverse_obj(player_config, ('args',)):
+            player_config['args'] = swf_args
+
+        if not player_response:
+            # April 2020
+            player_response = self._parse_json(
+                traverse_obj(player_config, ('args', 'player_response')) or '{}', video_id, fatal=False)
+
         initial_data_video = traverse_obj(
             initial_data, ('contents', 'twoColumnWatchNextResults', 'results', 'results', 'contents', ..., 'videoPrimaryInfoRenderer'),
             expected_type=dict, get_all=False, default={})
@@ -588,21 +749,64 @@ class YoutubeWebArchiveIE(InfoExtractor):
             video_details.get('title')
             or YoutubeBaseInfoExtractor._get_text(microformats, 'title')
             or YoutubeBaseInfoExtractor._get_text(initial_data_video, 'title')
+            or traverse_obj(player_config, ('args', 'title'))
             or self._extract_webpage_title(webpage)
             or search_meta(['og:title', 'twitter:title', 'title']))
 
+        def id_from_url(url, type_):
+            return self._search_regex(
+                rf'(?:{type_})/([^/#&?]+)', url or '', f'{type_} id', default=None)
+
+        # XXX: would the get_elements_by_... functions be better suited here?
+        _CHANNEL_URL_HREF_RE = r'href="[^"]*(?P<url>https?://www\.youtube\.com/(?:user|channel)/[^"]+)"'
+        uploader_or_channel_url = self._search_regex(
+            [fr'<(?:link\s*itemprop=\"url\"|a\s*id=\"watch-username\").*?\b{_CHANNEL_URL_HREF_RE}>',  # @fd05024
+             fr'<div\s*id=\"(?:watch-channel-stats|watch-headline-user-info)\"[^>]*>\s*<a[^>]*\b{_CHANNEL_URL_HREF_RE}'],  # ~ May 2009, ~June 2012
+            webpage, 'uploader or channel url', default=None)
+
+        owner_profile_url = url_or_none(microformats.get('ownerProfileUrl'))  # @a6211d2
+
+        # Uploader refers to the /user/ id ONLY
+        uploader_id = (
+            id_from_url(owner_profile_url, 'user')
+            or id_from_url(uploader_or_channel_url, 'user')
+            or ytcfg.get('VIDEO_USERNAME'))
+        uploader_url = f'https://www.youtube.com/user/{uploader_id}' if uploader_id else None
+
+        # XXX: do we want to differentiate uploader and channel?
+        uploader = (
+            self._search_regex(
+                [r'<a\s*id="watch-username"[^>]*>\s*<strong>([^<]+)</strong>',  # June 2010
+                 r'var\s*watchUsername\s*=\s*\'(.+?)\';',  # ~May 2009
+                 r'<div\s*\bid=\"watch-channel-stats"[^>]*>\s*<a[^>]*>\s*(.+?)\s*</a',  # ~May 2009
+                 r'<a\s*id="watch-userbanner"[^>]*title="\s*(.+?)\s*"'],  # ~June 2012
+                webpage, 'uploader', default=None)
+            or self._html_search_regex(
+                [r'(?s)<div\s*class="yt-user-info".*?<a[^>]*[^>]*>\s*(.*?)\s*</a',  # March 2016
+                 r'(?s)<a[^>]*yt-user-name[^>]*>\s*(.*?)\s*</a'],  # july 2013
+                get_element_by_id('watch7-user-header', webpage), 'uploader', default=None)
+            or self._html_search_regex(
+                r'<button\s*href="/user/[^>]*>\s*<span[^>]*>\s*(.+?)\s*<',  # April 2012
+                get_element_by_id('watch-headline-user-info', webpage), 'uploader', default=None)
+            or traverse_obj(player_config, ('args', 'creator'))
+            or video_details.get('author'))
+
         channel_id = str_or_none(
             video_details.get('channelId')
             or microformats.get('externalChannelId')
             or search_meta('channelId')
             or self._search_regex(
                 r'data-channel-external-id=(["\'])(?P<id>(?:(?!\1).)+)\1',  # @b45a9e6
-                webpage, 'channel id', default=None, group='id'))
-        channel_url = f'http://www.youtube.com/channel/{channel_id}' if channel_id else None
+                webpage, 'channel id', default=None, group='id')
+            or id_from_url(owner_profile_url, 'channel')
+            or id_from_url(uploader_or_channel_url, 'channel')
+            or traverse_obj(player_config, ('args', 'ucid')))
+
+        channel_url = f'https://www.youtube.com/channel/{channel_id}' if channel_id else None
         duration = int_or_none(
             video_details.get('lengthSeconds')
             or microformats.get('lengthSeconds')
+            or traverse_obj(player_config, ('args', ('length_seconds', 'l')), get_all=False)
             or parse_duration(search_meta('duration')))
         description = (
             video_details.get('shortDescription')
@@ -610,26 +814,13 @@ class YoutubeWebArchiveIE(InfoExtractor):
             or clean_html(get_element_by_id('eow-description', webpage))  # @9e6dd23
             or search_meta(['description', 'og:description', 'twitter:description']))
 
-        uploader = video_details.get('author')
-
-        # Uploader ID and URL
-        uploader_mobj = re.search(
-            r'<link itemprop="url" href="(?P<uploader_url>https?://www\.youtube\.com/(?:user|channel)/(?P<uploader_id>[^"]+))">',  # @fd05024
-            webpage)
-        if uploader_mobj is not None:
-            uploader_id, uploader_url = uploader_mobj.group('uploader_id'), uploader_mobj.group('uploader_url')
-        else:
-            # @a6211d2
-            uploader_url = url_or_none(microformats.get('ownerProfileUrl'))
-            uploader_id = self._search_regex(
-                r'(?:user|channel)/([^/]+)', uploader_url or '', 'uploader id', default=None)
-
         upload_date = unified_strdate(
             dict_get(microformats, ('uploadDate', 'publishDate'))
             or search_meta(['uploadDate', 'datePublished'])
             or self._search_regex(
-                [r'(?s)id="eow-date.*?>(.*?)</span>',
-                 r'(?:id="watch-uploader-info".*?>.*?|["\']simpleText["\']\s*:\s*["\'])(?:Published|Uploaded|Streamed live|Started) on (.+?)[<"\']'],  # @7998520
+                [r'(?s)id="eow-date.*?>\s*(.*?)\s*</span>',
+                 r'(?:id="watch-uploader-info".*?>.*?|["\']simpleText["\']\s*:\s*["\'])(?:Published|Uploaded|Streamed live|Started) on (.+?)[<"\']',  # @7998520
+                 r'class\s*=\s*"(?:watch-video-date|watch-video-added post-date)"[^>]*>\s*([^<]+?)\s*<'],  # ~June 2010, ~Jan 2009 (respectively)
                 webpage, 'upload date', default=None))
 
         return {
@@ -698,18 +889,22 @@ class YoutubeWebArchiveIE(InfoExtractor):
         url_date = url_date or url_date_2
 
         urlh = None
-        try:
-            urlh = self._request_webpage(
-                HEADRequest('https://web.archive.org/web/2oe_/http://wayback-fakeurl.archive.org/yt/%s' % video_id),
-                video_id, note='Fetching archived video file url', expected_status=True)
-        except ExtractorError as e:
-            # HTTP Error 404 is expected if the video is not saved.
-            if isinstance(e.cause, compat_HTTPError) and e.cause.code == 404:
-                self.raise_no_formats(
-                    'The requested video is not archived, indexed, or there is an issue with web.archive.org',
-                    expected=True)
-            else:
-                raise
+        retry_manager = self.RetryManager(fatal=False)
+        for retry in retry_manager:
+            try:
+                urlh = self._request_webpage(
+                    HEADRequest('https://web.archive.org/web/2oe_/http://wayback-fakeurl.archive.org/yt/%s' % video_id),
+                    video_id, note='Fetching archived video file url', expected_status=True)
+            except ExtractorError as e:
+                # HTTP Error 404 is expected if the video is not saved.
+                if isinstance(e.cause, compat_HTTPError) and e.cause.code == 404:
+                    self.raise_no_formats(
+                        'The requested video is not archived, indexed, or there is an issue with web.archive.org (try again later)', expected=True)
+                else:
+                    retry.error = e
+
+        if retry_manager.error:
+            self.raise_no_formats(retry_manager.error, expected=True, video_id=video_id)
 
         capture_dates = self._get_capture_dates(video_id, int_or_none(url_date))
         self.write_debug('Captures to try: ' + join_nonempty(*capture_dates, delim=', '))

@@ -144,7 +144,6 @@ class ArcPublishingIE(InfoExtractor):
                     'url': s_url,
                     'quality': -10,
                 })
-        self._sort_formats(formats)
 
         subtitles = {}
         for subtitle in (try_get(video, lambda x: x['subtitles']['urls'], list) or []):

@@ -40,14 +40,15 @@ class ARDMediathekBaseIE(InfoExtractor):
                 'This video is not available due to geoblocking',
                 countries=self._GEO_COUNTRIES, metadata_available=True)
 
-        self._sort_formats(formats)
-
         subtitles = {}
         subtitle_url = media_info.get('_subtitleUrl')
         if subtitle_url:
             subtitles['de'] = [{
                 'ext': 'ttml',
                 'url': subtitle_url,
+            }, {
+                'ext': 'vtt',
+                'url': subtitle_url.replace('/ebutt/', '/webvtt/') + '.vtt',
             }]
 
         return {
@@ -262,7 +263,6 @@ class ARDMediathekIE(ARDMediathekBaseIE):
                     'format_id': fid,
                     'url': furl,
                 })
-        self._sort_formats(formats)
         info = {
             'formats': formats,
         }
@@ -289,16 +289,16 @@ class ARDMediathekIE(ARDMediathekBaseIE):
 class ARDIE(InfoExtractor):
     _VALID_URL = r'(?P<mainurl>https?://(?:www\.)?daserste\.de/(?:[^/?#&]+/)+(?P<id>[^/?#&]+))\.html'
     _TESTS = [{
-        # available till 7.01.2022
-        'url': 'https://www.daserste.de/information/talk/maischberger/videos/maischberger-die-woche-video100.html',
-        'md5': '867d8aa39eeaf6d76407c5ad1bb0d4c1',
+        # available till 7.12.2023
+        'url': 'https://www.daserste.de/information/talk/maischberger/videos/maischberger-video-424.html',
+        'md5': 'a438f671e87a7eba04000336a119ccc4',
         'info_dict': {
-            'id': 'maischberger-die-woche-video100',
-            'display_id': 'maischberger-die-woche-video100',
+            'id': 'maischberger-video-424',
+            'display_id': 'maischberger-video-424',
             'ext': 'mp4',
-            'duration': 3687.0,
-            'title': 'maischberger. die woche vom 7. Januar 2021',
-            'upload_date': '20210107',
+            'duration': 4452.0,
+            'title': 'maischberger am 07.12.2022',
+            'upload_date': '20221207',
             'thumbnail': r're:^https?://.*\.jpg$',
         },
     }, {
@@ -371,7 +371,6 @@ class ARDIE(InfoExtractor):
                 continue
             f['url'] = format_url
             formats.append(f)
-        self._sort_formats(formats)
 
         _SUB_FORMATS = (
             ('./dataTimedText', 'ttml'),

@@ -136,7 +136,6 @@ class ArkenaIE(InfoExtractor):
                 elif mime_type == 'application/vnd.ms-sstr+xml':
                     formats.extend(self._extract_ism_formats(
                         href, video_id, ism_id='mss', fatal=False))
-        self._sort_formats(formats)
 
         return {
             'id': video_id,

@@ -73,7 +73,6 @@ class ArnesIE(InfoExtractor):
                 'width': int_or_none(media.get('width')),
                 'height': int_or_none(media.get('height')),
             })
-        self._sort_formats(formats)
 
         channel = video.get('channel') or {}
         channel_id = channel.get('url')

View File

@@ -65,6 +65,21 @@ class ArteTVIE(ArteTVBaseIE):
}, { }, {
'url': 'https://api.arte.tv/api/player/v2/config/de/LIVE', 'url': 'https://api.arte.tv/api/player/v2/config/de/LIVE',
'only_matching': True, 'only_matching': True,
}, {
'url': 'https://www.arte.tv/de/videos/110203-006-A/zaz/',
'info_dict': {
'id': '110203-006-A',
'chapters': 'count:16',
'description': 'md5:cf592f1df52fe52007e3f8eac813c084',
'alt_title': 'Zaz',
'title': 'Baloise Session 2022',
'timestamp': 1668445200,
'duration': 4054,
'thumbnail': 'https://api-cdn.arte.tv/img/v2/image/ubQjmVCGyRx3hmBuZEK9QZ/940x530',
'upload_date': '20221114',
'ext': 'mp4',
},
'expected_warnings': ['geo restricted']
}] }]
_GEO_BYPASS = True _GEO_BYPASS = True
@@ -180,13 +195,8 @@ class ArteTVIE(ArteTVBaseIE):
             else:
                 self.report_warning(f'Skipping stream with unknown protocol {stream["protocol"]}')

-        # TODO: chapters from stream['segments']?
-        # The JS also looks for chapters in config['data']['attributes']['chapters'],
-        # but I am yet to find a video having those
-
         formats.extend(secondary_formats)
         self._remove_duplicate_formats(formats)
-        self._sort_formats(formats)

         metadata = config['data']['attributes']['metadata']
@@ -206,6 +216,11 @@ class ArteTVIE(ArteTVBaseIE):
                 {'url': image['url'], 'id': image.get('caption')}
                 for image in metadata.get('images') or [] if url_or_none(image.get('url'))
             ],
+            # TODO: chapters may also be in stream['segments']?
+            'chapters': traverse_obj(config, ('data', 'attributes', 'chapters', 'elements', ..., {
+                'start_time': 'startTime',
+                'title': 'title',
+            })) or None,
         }
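The new `chapters` value leans on `traverse_obj`'s dict-template paths: `...` branches over every element of `elements`, and the trailing dict builds one chapter dict per element, dropping keys that are missing. A minimal sketch against a made-up `config` payload (assuming this release's `yt_dlp.utils.traverse_obj`):

from yt_dlp.utils import traverse_obj

# Made-up payload mimicking the shape of the arte.tv player config
config = {'data': {'attributes': {'chapters': {'elements': [
    {'startTime': 0, 'title': 'Intro'},
    {'startTime': 123, 'title': 'First song'},
]}}}}

print(traverse_obj(config, ('data', 'attributes', 'chapters', 'elements', ..., {
    'start_time': 'startTime',
    'title': 'title',
})))
# -> [{'start_time': 0, 'title': 'Intro'}, {'start_time': 123, 'title': 'First song'}]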
@@ -303,9 +318,7 @@ class ArteTVCategoryIE(ArteTVBaseIE):
             if any(ie.suitable(video) for ie in (ArteTVIE, ArteTVPlaylistIE, )):
                 items.append(video)

-        title = (self._og_search_title(webpage, default=None)
-                 or self._html_search_regex(r'<title\b[^>]*>([^<]+)</title>', default=None))
-        title = strip_or_none(title.rsplit('|', 1)[0]) or self._generic_title(url)
+        title = strip_or_none(self._generic_title('', webpage, default='').rsplit('|', 1)[0]) or None

         return self.playlist_from_matches(items, playlist_id=playlist_id, playlist_title=title,
                                           description=self._og_search_description(webpage, default=None))


@@ -84,7 +84,6 @@ class AtresPlayerIE(InfoExtractor):
             elif src_type == 'application/dash+xml':
                 formats, subtitles = self._extract_mpd_formats(
                     src, video_id, mpd_id='dash', fatal=False)
-        self._sort_formats(formats)

         heartbeat = episode.get('heartbeat') or {}
         omniture = episode.get('omniture') or {}


@@ -49,7 +49,6 @@ class ATVAtIE(InfoExtractor):
                 'url': source_url,
                 'format_id': protocol,
             })
-        self._sort_formats(formats)

         return {
             'id': clip_id,


@@ -76,7 +76,6 @@ class AudiMediaIE(InfoExtractor):
                     'format_id': 'http-%s' % bitrate,
                 })
             formats.append(f)
-        self._sort_formats(formats)

         return {
             'id': video_id,


@@ -168,7 +168,7 @@ class AudiusIE(AudiusBaseIE):
     }


-class AudiusTrackIE(AudiusIE):
+class AudiusTrackIE(AudiusIE):  # XXX: Do not subclass from concrete IE
     _VALID_URL = r'''(?x)(?:audius:)(?:https?://(?:www\.)?.+/v1/tracks/)?(?P<track_id>\w+)'''
     IE_NAME = 'audius:track'
     IE_DESC = 'Audius track ID or API link. Prepend with "audius:"'
@@ -243,7 +243,7 @@ class AudiusPlaylistIE(AudiusBaseIE):
         playlist_data.get('description'))


-class AudiusProfileIE(AudiusPlaylistIE):
+class AudiusProfileIE(AudiusPlaylistIE):  # XXX: Do not subclass from concrete IE
     IE_NAME = 'audius:artist'
     IE_DESC = 'Audius.co profile/artist pages'
     _VALID_URL = r'https?://(?:www)?audius\.co/(?P<id>[^\/]+)/?(?:[?#]|$)'
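The `XXX: Do not subclass from concrete IE` markers document a convention rather than fix a bug: a concrete extractor that inherits another concrete extractor also inherits its class attributes (such as `_TESTS`), and it complicates overriding either class now that plugins can replace extractors. A hypothetical sketch (all names invented) of the preferred layout, with shared code in an unregistered `*BaseIE`:

from yt_dlp.extractor.common import InfoExtractor


class ExampleBaseIE(InfoExtractor):
    # Never matched directly; exists only to hold helpers shared by siblings
    def _call_api(self, endpoint, video_id):
        return self._download_json(f'https://api.example.com/{endpoint}', video_id)


class ExampleIE(ExampleBaseIE):
    _VALID_URL = r'https?://example\.com/video/(?P<id>\w+)'


class ExampleTrackIE(ExampleBaseIE):  # sibling of ExampleIE, not its child
    _VALID_URL = r'https?://example\.com/track/(?P<id>\w+)'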


@@ -6,7 +6,7 @@ from .common import InfoExtractor
 from ..compat import compat_urllib_parse_urlencode


-class AWSIE(InfoExtractor):
+class AWSIE(InfoExtractor):  # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor
     _AWS_ALGORITHM = 'AWS4-HMAC-SHA256'
     _AWS_REGION = 'us-east-1'


@@ -80,8 +80,6 @@ class BanByeIE(BanByeBaseIE):
             'url': f'{self._CDN_BASE}/video/{video_id}/{quality}.mp4',
         } for quality in data['quality']]

-        self._sort_formats(formats)
-
         return {
             'id': video_id,
             'title': data.get('title'),

Some files were not shown because too many files have changed in this diff.