mirror of https://github.com/yt-dlp/yt-dlp.git synced 2026-01-17 20:31:29 +00:00

Compare commits


34 Commits

Author SHA1 Message Date
pukkandan
298f597b4f Release 2021.01.16 2021-01-17 00:24:52 +05:30
pukkandan
e2e43aea21 Portable Configuration file (closes #19)
Inspired by https://github.com/ytdl-org/youtube-dl/pull/27592
2021-01-17 00:05:46 +05:30
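The "Portable Configuration file" commit lets a config file placed next to the executable take effect. A rough sketch of such a lookup, with illustrative file and function names (the real search order is defined in yt-dlp's option parsing, not here):

```python
import os
import sys


def find_portable_config(names=('yt-dlp.conf', 'yt-dlp.txt')):
    """Return the first config file found next to the running executable.

    `names` is an assumed list of candidate file names; the actual
    candidates and precedence are whatever yt-dlp's option parser uses.
    """
    # For a frozen (PyInstaller) binary, sys.executable is the exe itself;
    # for a plain script run, fall back to the script's own directory.
    exe_dir = os.path.dirname(os.path.abspath(
        sys.executable if getattr(sys, 'frozen', False) else sys.argv[0]))
    for name in names:
        path = os.path.join(exe_dir, name)
        if os.path.isfile(path):
            return path
    return None
```

A portable config found this way would typically be read before (or instead of) the user/system configuration files.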
pukkandan
30a074c2b6 Update to ytdl-2021.01.16 2021-01-16 18:50:48 +05:30
pukkandan
7bc877a20d Add PyPI release 2021-01-16 01:05:53 +05:30
pukkandan
ff0bc1aa4c [version] update
:skip ci all
2021-01-14 22:02:53 +05:30
pukkandan
bf5a997e24 Release 2021.01.14 2021-01-14 21:54:03 +05:30
pukkandan
17fa3ee25f Documentation fixes
* Change all links to point to new fork URL
* Changed sponskrub links to my fork of the same
* Other typos
2021-01-14 21:51:14 +05:30
pukkandan
2e8d2629f3 [tiktok] Fix for when share_info is empty
(Related: https://github.com/blackjack4494/yt-dlc/pull/20)
2021-01-14 20:15:36 +05:30
Felix Stupp
b4d1044095 [roosterteeth] Changed API endpoint (Closes #16)
New endpoint allows to request metadata for bonus episodes

Authored by Zocker1999NET
2021-01-14 18:56:05 +05:30
pukkandan
fd51377c95 [issuetemplates] Change all links to point to new fork URL 2021-01-14 14:29:19 +05:30
pukkandan
44af9751a7 Print full error in verbose for sponskrub 2021-01-14 14:04:33 +05:30
pukkandan
806b05cf7a Fix write_debug in EmbedThumbnail
Closes #17
2021-01-14 14:03:05 +05:30
pukkandan
d83cb5312c Fix archive bug introduced in 8b0d7497d5 2021-01-13 21:03:50 +05:30
pukkandan
8b0d7497d5 Added option --break-on-reject
and modified `--break-on-existing`
2021-01-13 06:44:35 +05:30
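`--break-on-existing` and `--break-on-reject` abort the whole run rather than merely skipping an entry. A simplified, hypothetical sketch of that control flow (the exception and function names are illustrative, not yt-dlp's actual internals):

```python
class ExistingVideoReached(Exception):
    """Stop extraction when an already-archived video is seen (--break-on-existing)."""


class RejectedVideoReached(Exception):
    """Stop extraction when a filtered-out video is seen (--break-on-reject)."""


def process_entries(entries, in_archive, matches_filter,
                    break_on_existing=False, break_on_reject=False):
    """Yield downloadable entries, optionally aborting the whole queue.

    `in_archive` and `matches_filter` are assumed predicates standing in
    for the real archive and filter checks.
    """
    for entry in entries:
        if in_archive(entry):
            if break_on_existing:
                raise ExistingVideoReached
            continue  # without the flag, just skip this entry
        if not matches_filter(entry):
            if break_on_reject:
                raise RejectedVideoReached
            continue
        yield entry
```

The difference from plain skipping is that the raised exception propagates out of the playlist loop, ending processing of all remaining entries.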
pukkandan
90505ff153 [readme] Change all links to point to new fork URL 2021-01-13 05:17:31 +05:30
pukkandan
8c1fead3ce [version] update 2021-01-13 03:59:14 +05:30
pukkandan
9b45b9f51a Release 2021.01.12 2021-01-13 03:53:49 +05:30
pukkandan
d9d045e2ef Changed repo name to yt-dlp 2021-01-13 03:45:14 +05:30
Samik Some
dfd14aadfa [roosterteeth.com] Add subtitle support (https://github.com/ytdl-org/youtube-dl/pull/23985)
Closes #15

Authored by samiksome
2021-01-13 03:30:41 +05:30
alxnull
0c3d0f5177 Added --force-overwrites option (https://github.com/ytdl-org/youtube-dl/pull/20405)
Co-authored by alxnull
2021-01-13 03:26:23 +05:30
pukkandan
f5546c0b3c Fix typos (Closes #14)
:skip ci all

Co-authored by: FelixFrog
2021-01-12 21:42:41 +05:30
pukkandan
0ed3baddf2 [CI] Option to skip
:skip ci all
2021-01-11 23:18:56 +05:30
pukkandan
f20f5fe524 Add changelog for the unreleased changes in blackjack4494/yt-dlc
and made related changes in README
2021-01-11 23:08:11 +05:30
pukkandan
5cc6ceb73b #13 [adobepass] Added Philo MSO (https://github.com/ytdl-org/youtube-dl/pull/17821)
Authored-by: Aniruddh Joshi <aniruddh@ebincoweb.com>
2021-01-11 14:35:17 +05:30
pukkandan
6d07ec81d3 [version] update 2021-01-11 04:15:56 +05:30
pukkandan
65156eba45 Release 2021.01.10 2021-01-11 04:09:08 +05:30
pukkandan
ba3c9477ee [Animelab] Added (https://github.com/ytdl-org/youtube-dl/pull/13600)
Authored by mariuszskon
2021-01-11 03:10:53 +05:30
pukkandan
a3e26449cd [archive.org] Fix extractor and add support for audio and playlists (https://github.com/ytdl-org/youtube-dl/pull/27156)
Coauthored by wporr
2021-01-11 03:10:53 +05:30
pukkandan
7267acd1ed [youtube:search] fix view_count (https://github.com/ytdl-org/youtube-dl/pull/27588/)
Authored by ohnonot
2021-01-11 02:59:44 +05:30
pukkandan
f446cc6667 Create to_screen and similar functions in postprocessor/common
`to_screen`, `report_warning`, `report_error`, `write_debug`, `get_param`

This is a first step in standardizing these function. This has to be done eventually for extractors and downloaders too
2021-01-10 22:22:24 +05:30
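The commit above gives postprocessors their own logging helpers that delegate to the attached downloader. A minimal sketch of that delegation pattern (class and attribute names here are illustrative; the real implementation lives in postprocessor/common):

```python
class PostProcessorBase:
    """Sketch of forwarding log helpers to an attached downloader.

    Each helper forwards to the YoutubeDL-like `downloader` object when
    one is set, so postprocessors never print directly.
    """

    def __init__(self, downloader=None):
        self._downloader = downloader

    def set_downloader(self, downloader):
        self._downloader = downloader

    def to_screen(self, text):
        # Prefix messages with the postprocessor's class name
        if self._downloader:
            self._downloader.to_screen('[%s] %s' % (type(self).__name__, text))

    def report_warning(self, text):
        if self._downloader:
            self._downloader.report_warning(text)

    def get_param(self, name, default=None):
        # Read an option from the downloader's params dict, if attached
        if self._downloader:
            return self._downloader.params.get(name, default)
        return default
```

Standardizing on such wrappers means options like `--quiet` or `--verbose` are honored in one place instead of in every postprocessor.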
pukkandan
ebdd9275c3 Enable test_youtube_search_matching
I forgot to enable this when the search url extractor was reinstated
2021-01-10 22:20:32 +05:30
pukkandan
b2f70ae74e Update version badge automatically in README
Uses: https://github.com/Schneegans/dynamic-badges-action
2021-01-09 22:58:23 +05:30
pukkandan
5ac2324460 [youtube] Show if video is embeddable in info
Closes https://github.com/ytdl-org/youtube-dl/issues/27730
2021-01-09 21:29:58 +05:30
pukkandan
4084f235eb [version] update 2021-01-09 18:44:32 +05:30
60 changed files with 1781 additions and 6068 deletions


@@ -21,15 +21,15 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is 2021.01.08. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is 2021.01.14. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlc.
+- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlp.
-- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a broken site support
-- [ ] I've verified that I'm running youtube-dlc version **2021.01.08**
+- [ ] I've verified that I'm running yt-dlp version **2021.01.14**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar issues including closed ones
@@ -44,7 +44,7 @@ Add the `-v` flag to your command line you run youtube-dlc with (`youtube-dlc -v
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dlc version 2021.01.08
+[debug] yt-dlp version 2021.01.14
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}


@@ -21,15 +21,15 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is 2021.01.08. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is 2021.01.14. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that site you are requesting is not dedicated to copyright infringement, see https://github.com/pukkandan/yt-dlc. youtube-dlc does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
+- Make sure that site you are requesting is not dedicated to copyright infringement, see https://github.com/pukkandan/yt-dlp. yt-dlp does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
-- Search the bugtracker for similar site support requests: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar site support requests: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a new site support request
-- [ ] I've verified that I'm running youtube-dlc version **2021.01.08**
+- [ ] I've verified that I'm running yt-dlp version **2021.01.14**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that none of provided URLs violate any copyrights
 - [ ] I've searched the bugtracker for similar site support requests including closed ones


@@ -21,13 +21,13 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is 2021.01.08. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is 2021.01.14. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar site feature requests: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar site feature requests: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a site feature request
-- [ ] I've verified that I'm running youtube-dlc version **2021.01.08**
+- [ ] I've verified that I'm running yt-dlp version **2021.01.14**
 - [ ] I've searched the bugtracker for similar site feature requests including closed ones


@@ -21,16 +21,16 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is 2021.01.08. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is 2021.01.14. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlc.
+- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlp.
-- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
-- Read bugs section in FAQ: https://github.com/pukkandan/yt-dlc
+- Read bugs section in FAQ: https://github.com/pukkandan/yt-dlp
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a broken site support issue
-- [ ] I've verified that I'm running youtube-dlc version **2021.01.08**
+- [ ] I've verified that I'm running yt-dlp version **2021.01.14**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar bug reports including closed ones
@@ -46,7 +46,7 @@ Add the `-v` flag to your command line you run youtube-dlc with (`youtube-dlc -v
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dlc version 2021.01.08
+[debug] yt-dlp version 2021.01.14
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}


@@ -21,13 +21,13 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is 2021.01.08. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is 2021.01.14. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar feature requests: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar feature requests: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a feature request
-- [ ] I've verified that I'm running youtube-dlc version **2021.01.08**
+- [ ] I've verified that I'm running yt-dlp version **2021.01.14**
 - [ ] I've searched the bugtracker for similar feature requests including closed ones


@@ -20,8 +20,8 @@ assignees: ''
 ## Checklist
 <!--
-Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
+Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- Look through the README (https://github.com/blackjack4494/yt-dlc) and FAQ (https://github.com/blackjack4494/yt-dlc) for similar questions
+- Look through the README (https://github.com/pukkandan/yt-dlp) and FAQ (https://github.com/pukkandan/yt-dlp) for similar questions
 - Search the bugtracker for similar questions: https://github.com/blackjack4494/yt-dlc
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->


@@ -21,15 +21,15 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlc.
+- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlp.
-- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a broken site support
-- [ ] I've verified that I'm running youtube-dlc version **%(version)s**
+- [ ] I've verified that I'm running yt-dlp version **%(version)s**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar issues including closed ones
@@ -44,7 +44,7 @@ Add the `-v` flag to your command line you run youtube-dlc with (`youtube-dlc -v
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dlc version %(version)s
+[debug] yt-dlp version %(version)s
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}


@@ -21,15 +21,15 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that site you are requesting is not dedicated to copyright infringement, see https://github.com/pukkandan/yt-dlc. youtube-dlc does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
+- Make sure that site you are requesting is not dedicated to copyright infringement, see https://github.com/pukkandan/yt-dlp. yt-dlp does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
-- Search the bugtracker for similar site support requests: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar site support requests: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a new site support request
-- [ ] I've verified that I'm running youtube-dlc version **%(version)s**
+- [ ] I've verified that I'm running yt-dlp version **%(version)s**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that none of provided URLs violate any copyrights
 - [ ] I've searched the bugtracker for similar site support requests including closed ones


@@ -21,13 +21,13 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar site feature requests: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar site feature requests: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a site feature request
-- [ ] I've verified that I'm running youtube-dlc version **%(version)s**
+- [ ] I've verified that I'm running yt-dlp version **%(version)s**
 - [ ] I've searched the bugtracker for similar site feature requests including closed ones


@@ -21,16 +21,16 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
 - Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
-- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlc.
+- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/pukkandan/yt-dlp.
-- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar issues: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
-- Read bugs section in FAQ: https://github.com/pukkandan/yt-dlc
+- Read bugs section in FAQ: https://github.com/pukkandan/yt-dlp
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a broken site support issue
-- [ ] I've verified that I'm running youtube-dlc version **%(version)s**
+- [ ] I've verified that I'm running yt-dlp version **%(version)s**
 - [ ] I've checked that all provided URLs are alive and playable in a browser
 - [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
 - [ ] I've searched the bugtracker for similar bug reports including closed ones
@@ -46,7 +46,7 @@ Add the `-v` flag to your command line you run youtube-dlc with (`youtube-dlc -v
 [debug] User config: []
 [debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
 [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
-[debug] youtube-dlc version %(version)s
+[debug] yt-dlp version %(version)s
 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
 [debug] Proxy map: {}


@@ -21,13 +21,13 @@ assignees: ''
 <!--
 Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dlc:
-- First of, make sure you are using the latest version of youtube-dlc. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlc on how to update. Issues with outdated version will be REJECTED.
+- First of, make sure you are using the latest version of yt-dlp. Run `youtube-dlc --version` and ensure your version is %(version)s. If it's not, see https://github.com/pukkandan/yt-dlp on how to update. Issues with outdated version will be REJECTED.
-- Search the bugtracker for similar feature requests: https://github.com/pukkandan/yt-dlc. DO NOT post duplicates.
+- Search the bugtracker for similar feature requests: https://github.com/pukkandan/yt-dlp. DO NOT post duplicates.
 - Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
 -->
 - [ ] I'm reporting a feature request
-- [ ] I've verified that I'm running youtube-dlc version **%(version)s**
+- [ ] I've verified that I'm running yt-dlp version **%(version)s**
 - [ ] I've searched the bugtracker for similar feature requests including closed ones


@@ -8,7 +8,7 @@
 ### Before submitting a *pull request* make sure you have:
 - [ ] At least skimmed through [adding new extractor tutorial](https://github.com/ytdl-org/youtube-dl#adding-support-for-a-new-site) and [youtube-dl coding conventions](https://github.com/ytdl-org/youtube-dl#youtube-dl-coding-conventions) sections
-- [ ] [Searched](https://github.com/pukkandan/yt-dlc/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
+- [ ] [Searched](https://github.com/pukkandan/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
 - [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8)
 ### In order to be accepted and merged into youtube-dl each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:


@@ -37,7 +37,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ steps.bump_version.outputs.ytdlc_version }}
- release_name: youtube-dlc ${{ steps.bump_version.outputs.ytdlc_version }}
+ release_name: yt-dlp ${{ steps.bump_version.outputs.ytdlc_version }}
body: |
Changelog:
PLACEHOLDER
@@ -58,18 +58,18 @@ jobs:
env:
SHA2: ${{ hashFiles('youtube-dlc') }}
run: echo "::set-output name=sha2_unix::$SHA2"
- # - name: Install dependencies for pypi
- # run: |
- # python -m pip install --upgrade pip
- # pip install setuptools wheel twine
- # - name: Build and publish
- # env:
- # TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
- # TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
- # run: |
- # rm -rf dist/*
- # python setup.py sdist bdist_wheel
- # twine upload dist/*
+ - name: Install dependencies for pypi
+ run: |
+ python -m pip install --upgrade pip
+ pip install setuptools wheel twine
+ - name: Build and publish on pypi
+ env:
+ TWINE_USERNAME: __token__
+ TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
+ run: |
+ rm -rf dist/*
+ python setup.py sdist bdist_wheel
+ twine upload dist/*
build_windows:
@@ -161,3 +161,19 @@ jobs:
asset_path: ./SHA2-256SUMS
asset_name: SHA2-256SUMS
asset_content_type: text/plain
update_version_badge:
runs-on: ubuntu-latest
needs: build_unix
steps:
- name: Create Version Badge
uses: schneegans/dynamic-badges-action@v1.0.0
with:
auth: ${{ secrets.GIST_TOKEN }}
gistID: c69cb23c3c5b3316248e52022790aa57
filename: version.json
label: Version
message: ${{ needs.build_unix.outputs.ytdlc_version }}


@@ -1,16 +1,16 @@
name: Full Test
- on: [push]
+ on: [push, pull_request]
jobs:
tests:
name: Tests
+ if: "!contains(github.event.head_commit.message, 'skip ci')"
runs-on: ${{ matrix.os }}
strategy:
fail-fast: true
matrix:
- os: [ubuntu-latest]
+ os: [ubuntu-18.04]
# TODO: python 2.6
- # 3.3, 3.4 are not running
- python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9, pypy-2.7, pypy-3.6, pypy-3.7]
+ python-version: [2.7, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, pypy-2.7, pypy-3.6, pypy-3.7]
python-impl: [cpython]
ytdl-test-set: [core, download]
run-tests-ext: [sh]


@@ -1,33 +0,0 @@
# This workflows will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: Upload Python Package
on:
push:
branches:
- release
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.x'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
rm -rf dist/*
python setup.py sdist bdist_wheel
twine upload dist/*


@@ -1,8 +1,9 @@
name: Core Test
- on: [push]
+ on: [push, pull_request]
jobs:
tests:
name: Core Tests
+ if: "!contains(github.event.head_commit.message, 'skip ci all')"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
@@ -18,6 +19,7 @@ jobs:
run: ./devscripts/run_tests.sh
flake8:
name: Linter
+ if: "!contains(github.event.head_commit.message, 'skip ci all')"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2

.gitignore

@@ -46,6 +46,7 @@ updates_key.pem
*.swf
*.part
*.ytdl
+ *.conf
*.swp
*.spec
test/local_parameters.json


@@ -1,8 +1,18 @@
- pukkandan
+ pukkandan (owner)
h-h-h-h
pauldubois98
nixxo
GreyAlien502
kyuyeunk
siikamiika
jbruchon
alexmerkel
glenn-slayden
Unrud
wporr
mariuszskon
ohnonot
samiksome
alxnull
FelixFrog
Zocker1999NET

ChangeLog

File diff suppressed because it is too large

@@ -1,11 +1,71 @@
# Changelog
- ### 2020.01.08
- * **Merge youtube-dl:** Upto [2020.01.08](https://github.com/ytdl-org/youtube-dl/commit/bf6a74c620bd4d5726503c5302906bb36b009026)
+ <!--
+ # Instructions for creating release
+ * Run `make doc`
+ * Update Changelog.md and Authors-Fork
+ * Commit to master as `Release <version>`
+ * Push to origin/release - build task will now run
+ * Update version.py and run `make issuetemplates`
+ * Commit to master as `[version] update`
+ * Push to origin/master
+ * Update changelog in /releases
+ -->
### 2021.01.16
* Update to ytdl-2021.01.16
* Portable configuration file: `./yt-dlp.conf`
* Changes to configuration file paths. See [this](https://github.com/pukkandan/yt-dlp#configuration) for details
* Add PyPI release
### 2021.01.14
* Added option `--break-on-reject`
* [roosterteeth.com] Fix for bonus episodes by @Zocker1999NET
* [tiktok] Fix for when share_info is empty
* [EmbedThumbnail] Fix bug due to incorrect function name
* [documentation] Changed sponskrub links to point to [pukkandan/sponskrub](https://github.com/pukkandan/SponSkrub) since I am now providing both Linux and Windows releases
* [documentation] Change all links to correctly point to new fork URL
* [documentation] Fixed typos
### 2021.01.12
* [roosterteeth.com] Add subtitle support by @samiksome
* Added `--force-overwrites`, `--no-force-overwrites` by @alxnull
* Changed fork name to `yt-dlp`
* Fix typos by @FelixFrog
* [ci] Option to skip
* [changelog] Added unreleased changes in blackjack4494/yt-dlc
### 2021.01.10
* [archive.org] Fix extractor and add support for audio and playlists by @wporr
* [Animelab] Added by @mariuszskon
* [youtube:search] Fix view_count by @ohnonot
* [youtube] Show if video is embeddable in info
* Update version badge automatically in README
* Enable `test_youtube_search_matching`
* Create `to_screen` and similar functions in postprocessor/common
### 2021.01.09
* [youtube] Fix bug in automatic caption extraction
* Add `post_hooks` to YoutubeDL by @alexmerkel
* Batch file enumeration improvements by @glenn-slayden
* Stop immediately when reaching `--max-downloads` by @glenn-slayden
* Fix incorrect ANSI sequence for restoring console-window title by @glenn-slayden
* Kill child processes when yt-dlc is killed by @Unrud
### 2021.01.08
* **Merge youtube-dl:** Upto [2021.01.08](https://github.com/ytdl-org/youtube-dl/commit/bf6a74c620bd4d5726503c5302906bb36b009026)
* Extractor stitcher ([1](https://github.com/ytdl-org/youtube-dl/commit/bb38a1215718cdf36d73ff0a7830a64cd9fa37cc), [2](https://github.com/ytdl-org/youtube-dl/commit/a563c97c5cddf55f8989ed7ea8314ef78e30107f)) have not been merged
* Moved changelog to separate file
### 2021.01.07-1
* [Akamai] fix by @nixxo
* [Tiktok] merge youtube-dl tiktok extractor by @GreyAlien502
@@ -16,11 +76,13 @@
* Deprecated `--sponskrub-args`. The same can now be done using `--postprocessor-args "sponskrub:<args>"`
* [CI] Split tests into core-test and full-test
### 2021.01.07
* Removed priority of `av01` codec in `-S` since most devices don't support it yet
* Added `duration_string` to be used in `--output`
* Created First Release
### 2021.01.05-1
* **Changed defaults:**
* Enabled `--ignore`
@@ -31,6 +93,7 @@
* Changed default output template to `%(title)s [%(id)s].%(ext)s`
* Enabled `--list-formats-as-table`
### 2021.01.05
* **Format Sort:** Added `--format-sort` (`-S`), `--format-sort-force` (`--S-force`) - See [Sorting Formats](README.md#sorting-formats) for details
* **Format Selection:** See [Format Selection](README.md#format-selection) for details
@@ -42,12 +105,42 @@
* **Sponskrub integration:** Added `--sponskrub`, `--sponskrub-cut`, `--sponskrub-force`, `--sponskrub-location`, `--sponskrub-args` - See [SponSkrub Options](README.md#sponskrub-options-sponsorblock) for details
* Added `--force-download-archive` (`--force-write-archive`) by h-h-h-h
* Added `--list-formats-as-table`, `--list-formats-old`
- * **Negative Options:** Makes it possible to negate boolean options by adding a `no-` to the switch
+ * **Negative Options:** Makes it possible to negate most boolean options by adding a `no-` to the switch. Useful when you want to reverse an option that is defined in a config file
* Added `--no-ignore-dynamic-mpd`, `--no-allow-dynamic-mpd`, `--allow-dynamic-mpd`, `--youtube-include-hls-manifest`, `--no-youtube-include-hls-manifest`, `--no-youtube-skip-hls-manifest`, `--no-download`, `--no-download-archive`, `--resize-buffer`, `--part`, `--mtime`, `--no-keep-fragments`, `--no-cookies`, `--no-write-annotations`, `--no-write-info-json`, `--no-write-description`, `--no-write-thumbnail`, `--youtube-include-dash-manifest`, `--post-overwrites`, `--no-keep-video`, `--no-embed-subs`, `--no-embed-thumbnail`, `--no-add-metadata`, `--no-include-ads`, `--no-write-sub`, `--no-write-auto-sub`, `--no-playlist-reverse`, `--no-restrict-filenames`, `--youtube-include-dash-manifest`, `--no-format-sort-force`, `--flat-videos`, `--no-list-formats-as-table`, `--no-sponskrub`, `--no-sponskrub-cut`, `--no-sponskrub-force`
* Renamed: `--write-subs`, `--no-write-subs`, `--no-write-auto-subs`, `--write-auto-subs`. Note that these can still be used without the ending "s"
* Relaxed validation for format filters so that any arbitrary field can be used
* Fix for embedding thumbnail in mp3 by @pauldubois98
* Make Twitch Video ID output from Playlist and VOD extractor same. This is only a temporary fix
- * **Merge youtube-dl:** Upto [2020.01.03](https://github.com/ytdl-org/youtube-dl/commit/8e953dcbb10a1a42f4e12e4e132657cb0100a1f8) - See [blackjack4494/yt-dlc#280](https://github.com/blackjack4494/yt-dlc/pull/280) for details
+ * **Merge youtube-dl:** Upto [2021.01.03](https://github.com/ytdl-org/youtube-dl/commit/8e953dcbb10a1a42f4e12e4e132657cb0100a1f8) - See [blackjack4494/yt-dlc#280](https://github.com/blackjack4494/yt-dlc/pull/280) for details
* Extractors [tiktok](https://github.com/ytdl-org/youtube-dl/commit/fb626c05867deab04425bad0c0b16b55473841a2) and [hotstar](https://github.com/ytdl-org/youtube-dl/commit/bb38a1215718cdf36d73ff0a7830a64cd9fa37cc) have not been merged
* Cleaned up the fork for public use
### Unreleased changes in [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc)
* Updated to youtube-dl release 2020.11.26
* [youtube]
* Implemented all Youtube Feeds (ytfav, ytwatchlater, ytsubs, ythistory, ytrec) and SearchURL
* Fix ytsearch not returning results sometimes due to promoted content
* Temporary fix for automatic captions - disable json3
* Fix some improper Youtube URLs
* Redirect channel home to /video
* Print youtube's warning message
* Multiple pages are handled better for feeds
* Add --break-on-existing by @gergesh
* Pre-check video IDs in the archive before downloading
* [bitwave.tv] New extractor
* [Gedi] Add extractor
* [Rcs] Add new extractor
* [skyit] Add support for multiple Sky Italia website and removed old skyitalia extractor
* [france.tv] Fix thumbnail URL
* [ina] support mobile links
* [instagram] Fix extractor
* [itv] BTCC new pages' URL update (articles instead of races)
* [SouthparkDe] Support for English URLs
* [spreaker] fix SpreakerShowIE test URL
* [Vlive] Fix playlist handling when downloading a channel
* [generic] Detect embedded bitchute videos
* [generic] Extract embedded youtube and twitter videos
* [ffmpeg] Ensure all streams are copied
* Fix for os.rename error when embedding thumbnail to video in a different drive
* make_win.bat: don't use UPX to pack vcruntime140.dll


@@ -1,8 +1,10 @@
- all: youtube-dlc README.md CONTRIBUTING.md README.txt issuetemplates youtube-dlc.1 youtube-dlc.bash-completion youtube-dlc.zsh youtube-dlc.fish supportedsites
+ all: youtube-dlc doc man
doc: README.md CONTRIBUTING.md issuetemplates supportedsites
+ man: README.txt youtube-dlc.1 youtube-dlc.bash-completion youtube-dlc.zsh youtube-dlc.fish
clean:
- rm -rf youtube-dlc.1.temp.md youtube-dlc.1 youtube-dlc.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dlc.tar.gz youtube-dlc.zsh youtube-dlc.fish youtube_dlc/extractor/lazy_extractors.py *.dump *.part* *.ytdl *.info.json *.mp4 *.m4a *.flv *.mp3 *.avi *.mkv *.webm *.3gp *.wav *.ape *.swf *.jpg *.png CONTRIBUTING.md.tmp youtube-dlc youtube-dlc.exe
+ rm -rf youtube-dlc.1.temp.md youtube-dlc.1 youtube-dlc.bash-completion README.txt MANIFEST build/ dist/ .coverage cover/ youtube-dlc.tar.gz youtube-dlc.zsh youtube-dlc.fish youtube_dlc/extractor/lazy_extractors.py *.dump *.part* *.ytdl *.info.json *.mp4 *.m4a *.flv *.mp3 *.avi *.mkv *.webm *.3gp *.wav *.ape *.swf *.jpg *.png *.spec CONTRIBUTING.md.tmp youtube-dlc youtube-dlc.exe
find . -name "*.pyc" -delete
find . -name "*.class" -delete
@@ -10,7 +12,8 @@ PREFIX ?= /usr/local
BINDIR ?= $(PREFIX)/bin
MANDIR ?= $(PREFIX)/man
SHAREDIR ?= $(PREFIX)/share
- PYTHON ?= /usr/bin/env python
+ # make_supportedsites.py does not work correctly in python2
+ PYTHON ?= /usr/bin/env python3
# set SYSCONFDIR to /etc if PREFIX=/usr or PREFIX=/usr/local
SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then echo /etc; else echo $(PREFIX)/etc; fi)
@@ -46,6 +49,7 @@ offlinetest: codetest
--exclude test_age_restriction.py \
--exclude test_download.py \
--exclude test_iqiyi_sdk_interpreter.py \
+ --exclude test_overwrites.py \
--exclude test_socks.py \
--exclude test_subtitles.py \
--exclude test_write_annotations.py \

README.md

@@ -1,17 +1,19 @@
- [![Release Version](https://img.shields.io/badge/Release-2021.01.09-brightgreen)](https://github.com/pukkandan/yt-dlc/releases/latest)
- [![License: Unlicense](https://img.shields.io/badge/License-Unlicense-blue.svg)](https://github.com/pukkandan/yt-dlc/blob/master/LICENSE)
- [![Core Status](https://github.com/pukkandan/yt-dlc/workflows/Core%20Test/badge.svg?branch=master)](https://github.com/pukkandan/yt-dlc/actions?query=workflow%3ACore)
- [![CI Status](https://github.com/pukkandan/yt-dlc/workflows/Full%20Test/badge.svg?branch=master)](https://github.com/pukkandan/yt-dlc/actions?query=workflow%3AFull)
- youtube-dlc - download videos from youtube.com and many other [video platforms](docs/supportedsites.md)
+ # YT-DLP
+ <!-- See: https://github.com/marketplace/actions/dynamic-badges -->
+ [![Release Version](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/pukkandan/c69cb23c3c5b3316248e52022790aa57/raw/version.json&color=brightgreen)](https://github.com/pukkandan/yt-dlp/releases/latest)
+ [![License: Unlicense](https://img.shields.io/badge/License-Unlicense-blue.svg)](https://github.com/pukkandan/yt-dlp/blob/master/LICENSE)
+ [![Core Status](https://github.com/pukkandan/yt-dlp/workflows/Core%20Test/badge.svg?branch=master)](https://github.com/pukkandan/yt-dlp/actions?query=workflow%3ACore)
+ [![CI Status](https://github.com/pukkandan/yt-dlp/workflows/Full%20Test/badge.svg?branch=master)](https://github.com/pukkandan/yt-dlp/actions?query=workflow%3AFull)
+ A command-line program to download videos from youtube.com and many other [video platforms](docs/supportedsites.md)
This is a fork of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) which is in turn a fork of [youtube-dl](https://github.com/ytdl-org/youtube-dl)
* [NEW FEATURES](#new-features)
* [INSTALLATION](#installation)
- * [UPDATE](#update)
- * [COMPILE](#compile)
- * [YOUTUBE-DLC](#youtube-dlc)
+ * [Update](#update)
+ * [Compile](#compile)
* [DESCRIPTION](#description)
* [OPTIONS](#options)
* [Network Options](#network-options)
@@ -28,7 +30,7 @@ This is a fork of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) which i
* [Authentication Options](#authentication-options)
* [Adobe Pass Options](#adobe-pass-options)
* [Post-processing Options](#post-processing-options)
- * [SponSkrub Options (SponsorBlock)](#sponSkrub-options-sponsorblock)
+ * [SponSkrub Options (SponsorBlock)](#sponskrub-options-sponsorblock)
* [Extractor Options](#extractor-options)
* [CONFIGURATION](#configuration)
* [Authentication with .netrc file](#authentication-with-netrc-file)
@@ -39,35 +41,47 @@ This is a fork of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) which i
* [Filtering Formats](#filtering-formats)
* [Sorting Formats](#sorting-formats)
* [Format Selection examples](#format-selection-examples)
+ * [VIDEO SELECTION](#video-selection-1)
* [MORE](#more)
# NEW FEATURES
- The major new features are:
+ The major new features from the latest release of [blackjack4494/yt-dlc](https://github.com/blackjack4494/yt-dlc) are:
- * **[SponSkrub Integration](#sponskrub-options-sponsorblock)** - You can use [SponSkrub](https://github.com/faissaloo/SponSkrub) to mark/remove sponsor sections in youtube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
+ * **[SponSkrub Integration](#sponskrub-options-sponsorblock)**: You can use [SponSkrub](https://github.com/pukkandan/SponSkrub) to mark/remove sponsor sections in youtube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
- * **[Format Sorting](#sorting-formats)** - The default format sorting options have been changed so that higher resolution and better codecs will now be preferred instead of simply using larger bitrate. Furthermore, you can now specify the sort order using `-S`. This allows for much easier format selection than what is possible by simply using `--format`
+ * **[Format Sorting](#sorting-formats)**: The default format sorting options have been changed so that higher resolution and better codecs will now be preferred instead of simply using larger bitrate. Furthermore, you can now specify the sort order using `-S`. This allows for much easier format selection than what is possible by simply using `--format` ([examples](#format-selection-examples))
- * Merged with youtube-dl **v2020.01.08** - You get the new features and patches of [youtube-dl](https://github.com/ytdl-org/youtube-dl) in addition to all the features of [youtube-dlc](https://github.com/blackjack4494)
+ * **Merged with youtube-dl v2021.01.08**: You get all the latest features and patches of [youtube-dl](https://github.com/ytdl-org/youtube-dl) in addition to all the features of [youtube-dlc](https://github.com/blackjack4494/yt-dlc)
- * **New options** - `--list-formats-as-table`, `--write-link`, `--force-download-archive` etc
+ * **Youtube improvements**:
+     * All Youtube Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`) work correctly and support downloading multiple pages of content
+     * Youtube search works correctly (`ytsearch:`, `ytsearchdate:`) along with Search URLs
+     * Redirect channel's home URL automatically to `/video` to preserve the old behaviour
- and many other features and patches. See [changelog](Changelog.md) or [commits](https://github.com/pukkandan/yt-dlc/commits) for the full list of changes
+ * **New extractors**: AnimeLab, Philo MSO, Rcs, Gedi, bitwave.tv
+ * **Fixed extractors**: archive.org, roosterteeth.com, skyit, instagram, itv, SouthparkDe, spreaker, Vlive, tiktok, akamai, ina
+ * **New options**: `--list-formats-as-table`, `--write-link`, `--force-download-archive`, `--force-overwrites` etc
+ and many other features and patches. See [changelog](Changelog.md) or [commits](https://github.com/pukkandan/yt-dlp/commits) for the full list of changes
+ **PS**: Some of these changes are already in youtube-dlc, but are still unreleased. See [this](Changelog.md#unreleased-changes-in-blackjack4494yt-dlc) for details
+ If you are coming from [youtube-dl](https://github.com/ytdl-org/youtube-dl), the amount of changes is very large. Compare [options](#options) and [supported sites](docs/supportedsites.md) with youtube-dl's to get an idea of the massive number of features/patches [youtube-dlc](https://github.com/blackjack4494/yt-dlc) has accumulated.
# INSTALLATION
- To use the latest version, simply download and run the [latest release](https://github.com/pukkandan/yt-dlc/releases/latest).
- Currently, there is no support for any package managers.
- If you want to install the current master branch
- python -m pip install git+https://github.com/pukkandan/yt-dlc
+ You can install yt-dlp using one of the following methods:
+ * Use [PyPI package](https://pypi.org/project/yt-dlp/): `python -m pip install --upgrade yt-dlp`
+ * Download the binary from the [latest release](https://github.com/pukkandan/yt-dlp/releases/latest)
+ * Use pip+git: `python -m pip install --upgrade git+https://github.com/pukkandan/yt-dlp.git@release`
+ * Install master branch: `python -m pip install --upgrade git+https://github.com/pukkandan/yt-dlp`
### UPDATE
- **DO NOT UPDATE using `-U` !** instead download binaries again
+ `-U` does not work. Simply repeat the install process to update.
### COMPILE
@@ -92,9 +106,9 @@ Then simply type this
# DESCRIPTION
- **youtube-dlc** is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.
+ **youtube-dlc** is a command-line program to download videos from youtube.com and many other [video platforms](docs/supportedsites.md). It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.
- youtube-dlc [OPTIONS] URL [URL...]
+ youtube-dlc [OPTIONS] [--] URL [URL...]
# OPTIONS
@@ -127,12 +141,14 @@ Then simply type this
an error. The default value "fixup_error" repairs broken URLs, but emits an error if this is not possible instead of searching.
- --ignore-config, --no-config Do not read configuration files. When given in the global configuration file /etc/youtube-dl.conf: Do not read the user configuration in ~/.config/youtube-dl/config (%APPDATA%/youtube-dl/config.txt on Windows)
+ --ignore-config, --no-config Disable loading any configuration files except the one provided by --config-location. When given inside a configuration file, no further configuration files are loaded. Additionally, (for backward compatibility) if this option is found inside the system configuration file, the user configuration is not loaded.
--config-location PATH Location of the configuration file; either the path to the config or its containing directory.
@@ -234,8 +250,10 @@ Then simply type this
--download-archive FILE Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it.
- --break-on-existing Stop the download process after attempting to download a file that's in the archive.
+ --break-on-existing Stop the download process when encountering a file that's in the archive.
+ --break-on-reject Stop the download process when encountering a file that has been filtered out.
--no-download-archive Do not use archive file (default)
--include-ads Download advertisements as well (experimental)
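For embedded use, the archive-stopping flags above correspond to boolean `YoutubeDL` options. A minimal sketch (the option keys `break_on_existing`/`break_on_reject` are assumptions derived from the CLI flag names, not shown in this diff):

```python
# Sketch only: option keys are assumed from the CLI flag names above
ydl_opts = {
    "download_archive": "archive.txt",  # file recording IDs of downloaded videos
    "break_on_existing": True,          # stop at the first already-archived video
    "break_on_reject": True,            # stop at the first video a filter rejects
}

# Typical use would then be:
#   with youtube_dlc.YoutubeDL(ydl_opts) as ydl:
#       ydl.download([url])
```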
@@ -303,7 +321,11 @@ Then simply type this
filenames
--no-restrict-filenames Allow Unicode characters, "&" and spaces in filenames (default)
- -w, --no-overwrites Do not overwrite files
+ -w, --no-overwrites Do not overwrite any files
+ --force-overwrites Overwrite all video and metadata files. This option includes --no-continue
+ --no-force-overwrites Do not overwrite the video, but overwrite related files (default)
-c, --continue Resume partially downloaded files (default)
--no-continue Restart download of partially downloaded files from beginning
@@ -396,7 +418,8 @@ Then simply type this
files in the current directory to debug problems
--print-traffic Display sent and read HTTP traffic
- -C, --call-home Contact the youtube-dlc server for debugging
+ -C, --call-home [Broken] Contact the youtube-dlc server for debugging
--no-call-home Do not contact the youtube-dlc server for debugging (default)
@@ -535,9 +558,10 @@ Then simply type this
ExtractAudio, VideoRemuxer, VideoConvertor, EmbedSubtitle, Metadata, Merger, FixupStretched, FixupM4a, FixupM3u8,
- SubtitlesConvertor, SponSkrub and Default. You can use this option multiple times to give different arguments to different postprocessors
+ SubtitlesConvertor, EmbedThumbnail, XAttrMetadata, SponSkrub and Default. You can use this option multiple times to give different arguments to different postprocessors
-k, --keep-video Keep the intermediate video file on disk after post-processing
@@ -584,7 +608,7 @@ Then simply type this
--convert-subs FORMAT Convert the subtitles to other format (currently supported: srt|ass|vtt|lrc)
- ## [SponSkrub](https://github.com/faissaloo/SponSkrub) Options ([SponsorBlock](https://sponsor.ajay.app)):
+ ## [SponSkrub](https://github.com/pukkandan/SponSkrub) Options ([SponsorBlock](https://sponsor.ajay.app)):
--sponskrub Use sponskrub to mark sponsored sections with the data available in SponsorBlock API. This is enabled by default if the
@@ -610,9 +634,22 @@ Then simply type this
# CONFIGURATION # CONFIGURATION
You can configure youtube-dlc by placing any supported command line option to a configuration file. On Linux and macOS, the system wide configuration file is located at `/etc/youtube-dlc.conf` and the user wide configuration file at `~/.config/youtube-dlc/config`. On Windows, the user wide configuration file locations are `%APPDATA%\youtube-dlc\config.txt` or `C:\Users\<user name>\youtube-dlc.conf`. Note that by default configuration file may not exist so you may need to create it yourself. You can configure youtube-dlc by placing any supported command line option to a configuration file. The configuration is loaded from the following locations:
For example, with the following configuration file youtube-dlc will always extract the audio, not copy the mtime, use a proxy and save all videos under `Movies` directory in your home directory: 1. The file given by `--config-location`
1. **Portable Configuration**: `yt-dlp.conf` or `youtube-dlc.conf` in the same directory as the bundled binary. If you are running from source-code (`<root dir>/youtube_dlc/__main__.py`), the root directory is used instead.
1. **User Configuration**:
* `%XDG_CONFIG_HOME%/yt-dlp/config` (recommended on Linux/macOS)
* `%XDG_CONFIG_HOME%/yt-dlp.conf`
* `%APPDATA%/yt-dlp/config` (recommended on Windows)
* `%APPDATA%/yt-dlp/config.txt`
* `~/yt-dlp.conf`
* `~/yt-dlp.conf.txt`
If none of these files are found, the search is performed again by replacing `yt-dlp` with `youtube-dlc`. Note that `~` points to `C:\Users\<user name>` on Windows. Also, `%XDG_CONFIG_HOME%` defaults to `~/.config` if undefined.
1. **System Configuration**: `/etc/yt-dlp.conf` or `/etc/youtube-dlc.conf`
For example, with the following configuration file youtube-dlc will always extract the audio, not copy the mtime, use a proxy and save all videos under `YouTube` directory in your home directory:
``` ```
# Lines starting with # are comments # Lines starting with # are comments
@@ -625,15 +662,13 @@ For example, with the following configuration file youtube-dlc will always extra
# Use this proxy # Use this proxy
--proxy 127.0.0.1:3128 --proxy 127.0.0.1:3128
# Save all videos under Movies directory in your home directory # Save all videos under YouTube directory in your home directory
-o ~/Movies/%(title)s.%(ext)s -o ~/YouTube/%(title)s.%(ext)s
``` ```
Note that options in configuration file are just the same options aka switches used in regular command line calls thus there **must be no whitespace** after `-` or `--`, e.g. `-o` or `--proxy` but not `- o` or `-- proxy`. Note that options in configuration file are just the same options aka switches used in regular command line calls; thus there **must be no whitespace** after `-` or `--`, e.g. `-o` or `--proxy` but not `- o` or `-- proxy`.
You can use `--ignore-config` if you want to disable the configuration file for a particular youtube-dlc run. You can use `--ignore-config` if you want to disable all configuration files for a particular youtube-dlc run. If `--ignore-config` is found inside any configuration file, no further configuration will be loaded. For example, having the option in the portable configuration file prevents loading of user and system configurations. Additionally, (for backward compatibility) if `--ignore-config` is found inside the system configuration file, the user configuration is not loaded.
You can also use `--config-location` if you want to use custom configuration file for a particular youtube-dlc run.
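The user and system search order described above (including the `youtube-dlc` fallback) can be sketched roughly as follows. This is an illustrative helper only, not the actual implementation, and `candidate_config_paths` is a hypothetical name; the portable configuration and `--config-location` steps are omitted:

```python
import os


def candidate_config_paths(appdata=None, home=None):
    """Yield user/system config file candidates in the documented order (sketch).

    All yt-dlp locations are tried first, then the same locations with
    the youtube-dlc name, matching the fallback described above.
    """
    home = home or os.path.expanduser('~')
    xdg = os.environ.get('XDG_CONFIG_HOME') or os.path.join(home, '.config')
    appdata = appdata or os.environ.get('APPDATA', '')
    for package in ('yt-dlp', 'youtube-dlc'):
        # User configuration candidates
        yield os.path.join(xdg, package, 'config')
        yield os.path.join(xdg, '%s.conf' % package)
        if appdata:
            yield os.path.join(appdata, package, 'config')
            yield os.path.join(appdata, package, 'config.txt')
        yield os.path.join(home, '%s.conf' % package)
        yield os.path.join(home, '%s.conf.txt' % package)
    # System configuration
    yield '/etc/yt-dlp.conf'
    yield '/etc/youtube-dlc.conf'
```

The first existing file from this sequence would be loaded at each level.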
### Authentication with `.netrc` file ### Authentication with `.netrc` file

View File

@@ -1,7 +1,7 @@
#!/bin/bash #!/bin/bash
# Keep this list in sync with the `offlinetest` target in Makefile # Keep this list in sync with the `offlinetest` target in Makefile
DOWNLOAD_TESTS="age_restriction|download|iqiyi_sdk_interpreter|socks|subtitles|write_annotations|youtube_lists|youtube_signature|post_hooks" DOWNLOAD_TESTS="age_restriction|download|iqiyi_sdk_interpreter|overwrites|socks|subtitles|write_annotations|youtube_lists|youtube_signature|post_hooks"
test_set="" test_set=""
multiprocess_args="" multiprocess_args=""

View File

@@ -48,6 +48,8 @@
- **AMCNetworks** - **AMCNetworks**
- **AmericasTestKitchen** - **AmericasTestKitchen**
- **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl - **anderetijden**: npo.nl, ntr.nl, omroepwnl.nl, zapp.nl and npo3.nl
- **AnimeLab**
- **AnimeLabShows**
- **AnimeOnDemand** - **AnimeOnDemand**
- **Anvato** - **Anvato**
- **aol.com** - **aol.com**
@@ -58,7 +60,7 @@
- **ApplePodcasts** - **ApplePodcasts**
- **appletrailers** - **appletrailers**
- **appletrailers:section** - **appletrailers:section**
- **archive.org**: archive.org videos - **archive.org**: archive.org video and audio
- **ArcPublishing** - **ArcPublishing**
- **ARD** - **ARD**
- **ARD:mediathek** - **ARD:mediathek**
@@ -429,7 +431,8 @@
- **Katsomo** - **Katsomo**
- **KeezMovies** - **KeezMovies**
- **Ketnet** - **Ketnet**
- **KhanAcademy** - **khanacademy**
- **khanacademy:unit**
- **KickStarter** - **KickStarter**
- **KinjaEmbed** - **KinjaEmbed**
- **KinoPoisk** - **KinoPoisk**

View File

@@ -74,7 +74,7 @@ version_file = VSVersionInfo(
), ),
StringStruct("OriginalFilename", "youtube-dlc.exe"), StringStruct("OriginalFilename", "youtube-dlc.exe"),
StringStruct("ProductName", "Youtube-dlc"), StringStruct("ProductName", "Youtube-dlc"),
StringStruct("ProductVersion", version + " | git.io/JUGsM"), StringStruct("ProductVersion", version + " | git.io/JLh7K"),
], ],
) )
] ]

View File

@@ -11,8 +11,12 @@ from distutils.spawn import spawn
exec(compile(open('youtube_dlc/version.py').read(), exec(compile(open('youtube_dlc/version.py').read(),
'youtube_dlc/version.py', 'exec')) 'youtube_dlc/version.py', 'exec'))
DESCRIPTION = 'Media downloader supporting various sites such as youtube' DESCRIPTION = 'Command-line program to download videos from YouTube.com and many other video platforms.'
LONG_DESCRIPTION = 'Command-line program to download videos from YouTube.com and other video sites. Based on a more active community fork.'
LONG_DESCRIPTION = '\n\n'.join((
'Official repository: <https://github.com/pukkandan/yt-dlp>',
'**PS**: Many links in this document will not work since this is a copy of the README.md from Github',
open("README.md", "r", encoding="utf-8").read()))
if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe': if len(sys.argv) >= 2 and sys.argv[1] == 'py2exe':
print("inv") print("inv")
@@ -59,19 +63,21 @@ class build_lazy_extractors(Command):
) )
setup( setup(
name="youtube_dlc", name="yt-dlp",
version=__version__, version=__version__,
maintainer="Tom-Oliver Heidel", maintainer="pukkandan",
maintainer_email="theidel@uni-bremen.de", maintainer_email="pukkandan@gmail.com",
description=DESCRIPTION, description=DESCRIPTION,
long_description=LONG_DESCRIPTION, long_description=LONG_DESCRIPTION,
# long_description_content_type="text/markdown", long_description_content_type="text/markdown",
url="https://github.com/pukkandan/yt-dlc", url="https://github.com/pukkandan/yt-dlp",
packages=find_packages(exclude=("youtube_dl","test",)), packages=find_packages(exclude=("youtube_dl","test",)),
#packages=[ project_urls={
# 'youtube_dlc', 'Documentation': 'https://github.com/pukkandan/yt-dlp#yt-dlp',
# 'youtube_dlc.extractor', 'youtube_dlc.downloader', 'Source': 'https://github.com/pukkandan/yt-dlp',
# 'youtube_dlc.postprocessor'], 'Tracker': 'https://github.com/pukkandan/yt-dlp/issues',
#'Funding': 'https://donate.pypi.org',
},
classifiers=[ classifiers=[
"Topic :: Multimedia :: Video", "Topic :: Multimedia :: Video",
"Development Status :: 5 - Production/Stable", "Development Status :: 5 - Production/Stable",

View File

@@ -14,7 +14,7 @@
"logtostderr": false, "logtostderr": false,
"matchtitle": null, "matchtitle": null,
"max_downloads": null, "max_downloads": null,
"nooverwrites": false, "overwrites": null,
"nopart": false, "nopart": false,
"noprogress": false, "noprogress": false,
"outtmpl": "%(id)s.%(ext)s", "outtmpl": "%(id)s.%(ext)s",

View File

@@ -69,9 +69,9 @@ class TestAllURLsMatching(unittest.TestCase):
self.assertMatch('https://www.youtube.com/feed/watch_later', ['youtube:tab']) self.assertMatch('https://www.youtube.com/feed/watch_later', ['youtube:tab'])
self.assertMatch('https://www.youtube.com/feed/subscriptions', ['youtube:tab']) self.assertMatch('https://www.youtube.com/feed/subscriptions', ['youtube:tab'])
# def test_youtube_search_matching(self): def test_youtube_search_matching(self):
# self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url']) self.assertMatch('http://www.youtube.com/results?search_query=making+mustard', ['youtube:search_url'])
# self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url']) self.assertMatch('https://www.youtube.com/results?baz=bar&search_query=youtube-dl+test+video&filters=video&lclk=video', ['youtube:search_url'])
def test_youtube_extract(self): def test_youtube_extract(self):
assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id) assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)

test/test_overwrites.py Normal file
View File

@@ -0,0 +1,52 @@
#!/usr/bin/env python
from __future__ import unicode_literals
import os
from os.path import join
import subprocess
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import try_rm
root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
download_file = join(root_dir, 'test.webm')
class TestOverwrites(unittest.TestCase):
def setUp(self):
# create an empty file
open(download_file, 'a').close()
def test_default_overwrites(self):
outp = subprocess.Popen(
[
sys.executable, 'youtube_dlc/__main__.py',
'-o', 'test.webm',
'https://www.youtube.com/watch?v=jNQXAC9IVRw'
], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = outp.communicate()
self.assertTrue(b'has already been downloaded' in sout)
# if the file has no content, it has not been redownloaded
self.assertTrue(os.path.getsize(download_file) < 1)
def test_yes_overwrites(self):
outp = subprocess.Popen(
[
sys.executable, 'youtube_dlc/__main__.py', '--yes-overwrites',
'-o', 'test.webm',
'https://www.youtube.com/watch?v=jNQXAC9IVRw'
], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = outp.communicate()
self.assertTrue(b'has already been downloaded' not in sout)
# if the file has content, it has been redownloaded
self.assertTrue(os.path.getsize(download_file) > 1)
def tearDown(self):
try_rm(join(root_dir, 'test.webm'))
if __name__ == '__main__':
unittest.main()

View File

@@ -1 +1 @@
py "%~dp0\youtube_dl\__main__.py" py "%~dp0youtube_dlc\__main__.py" %*

View File

@@ -58,6 +58,7 @@ from .utils import (
encode_compat_str, encode_compat_str,
encodeFilename, encodeFilename,
error_to_compat_str, error_to_compat_str,
ExistingVideoReached,
expand_path, expand_path,
ExtractorError, ExtractorError,
format_bytes, format_bytes,
@@ -81,6 +82,7 @@ from .utils import (
register_socks_protocols, register_socks_protocols,
render_table, render_table,
replace_extension, replace_extension,
RejectedVideoReached,
SameFileError, SameFileError,
sanitize_filename, sanitize_filename,
sanitize_path, sanitize_path,
@@ -179,9 +181,11 @@ class YoutubeDL(object):
outtmpl: Template for output names. outtmpl: Template for output names.
restrictfilenames: Do not allow "&" and spaces in file names. restrictfilenames: Do not allow "&" and spaces in file names.
trim_file_name: Limit length of filename (extension excluded). trim_file_name: Limit length of filename (extension excluded).
ignoreerrors: Do not stop on download errors. (Default False when running youtube-dlc, but True when directly accessing YoutubeDL class) ignoreerrors: Do not stop on download errors. (Default True when running youtube-dlc, but False when directly accessing YoutubeDL class)
force_generic_extractor: Force downloader to use the generic extractor force_generic_extractor: Force downloader to use the generic extractor
nooverwrites: Prevent overwriting files. overwrites: Overwrite all video and metadata files if True,
overwrite only non-video files if None
and don't overwrite any file if False
playliststart: Playlist item to start at. playliststart: Playlist item to start at.
playlistend: Playlist item to end at. playlistend: Playlist item to end at.
playlist_items: Specific indices of playlist to download. playlist_items: Specific indices of playlist to download.
@@ -230,6 +234,7 @@ class YoutubeDL(object):
again. again.
break_on_existing: Stop the download process after attempting to download a file that's break_on_existing: Stop the download process after attempting to download a file that's
in the archive. in the archive.
break_on_reject: Stop the download process when encountering a video that has been filtered out.
cookiefile: File name where cookies should be read from and dumped to. cookiefile: File name where cookies should be read from and dumped to.
nocheckcertificate:Do not verify SSL certificates nocheckcertificate:Do not verify SSL certificates
prefer_insecure: Use HTTP instead of HTTPS to retrieve information. prefer_insecure: Use HTTP instead of HTTPS to retrieve information.
@@ -364,6 +369,8 @@ class YoutubeDL(object):
_pps = [] _pps = []
_download_retcode = None _download_retcode = None
_num_downloads = None _num_downloads = None
_playlist_level = 0
_playlist_urls = set()
_screen_file = None _screen_file = None
def __init__(self, params=None, auto_init=True): def __init__(self, params=None, auto_init=True):
@@ -686,6 +693,13 @@ class YoutubeDL(object):
except UnicodeEncodeError: except UnicodeEncodeError:
self.to_screen('[download] The file has already been downloaded') self.to_screen('[download] The file has already been downloaded')
def report_file_delete(self, file_name):
"""Report that existing file will be deleted."""
try:
self.to_screen('Deleting already existent file %s' % file_name)
except UnicodeEncodeError:
self.to_screen('Deleting already existent file')
def prepare_filename(self, info_dict): def prepare_filename(self, info_dict):
"""Generate the output filename.""" """Generate the output filename."""
try: try:
@@ -788,44 +802,53 @@ class YoutubeDL(object):
def _match_entry(self, info_dict, incomplete): def _match_entry(self, info_dict, incomplete):
""" Returns None if the file should be downloaded """ """ Returns None if the file should be downloaded """
video_title = info_dict.get('title', info_dict.get('id', 'video')) def check_filter():
if 'title' in info_dict: video_title = info_dict.get('title', info_dict.get('id', 'video'))
# This can happen when we're just evaluating the playlist if 'title' in info_dict:
title = info_dict['title'] # This can happen when we're just evaluating the playlist
matchtitle = self.params.get('matchtitle', False) title = info_dict['title']
if matchtitle: matchtitle = self.params.get('matchtitle', False)
if not re.search(matchtitle, title, re.IGNORECASE): if matchtitle:
return '"' + title + '" title did not match pattern "' + matchtitle + '"' if not re.search(matchtitle, title, re.IGNORECASE):
rejecttitle = self.params.get('rejecttitle', False) return '"' + title + '" title did not match pattern "' + matchtitle + '"'
if rejecttitle: rejecttitle = self.params.get('rejecttitle', False)
if re.search(rejecttitle, title, re.IGNORECASE): if rejecttitle:
return '"' + title + '" title matched reject pattern "' + rejecttitle + '"' if re.search(rejecttitle, title, re.IGNORECASE):
date = info_dict.get('upload_date') return '"' + title + '" title matched reject pattern "' + rejecttitle + '"'
if date is not None: date = info_dict.get('upload_date')
dateRange = self.params.get('daterange', DateRange()) if date is not None:
if date not in dateRange: dateRange = self.params.get('daterange', DateRange())
return '%s upload date is not in range %s' % (date_from_str(date).isoformat(), dateRange) if date not in dateRange:
view_count = info_dict.get('view_count') return '%s upload date is not in range %s' % (date_from_str(date).isoformat(), dateRange)
if view_count is not None: view_count = info_dict.get('view_count')
min_views = self.params.get('min_views') if view_count is not None:
if min_views is not None and view_count < min_views: min_views = self.params.get('min_views')
return 'Skipping %s, because it has not reached minimum view count (%d/%d)' % (video_title, view_count, min_views) if min_views is not None and view_count < min_views:
max_views = self.params.get('max_views') return 'Skipping %s, because it has not reached minimum view count (%d/%d)' % (video_title, view_count, min_views)
if max_views is not None and view_count > max_views: max_views = self.params.get('max_views')
return 'Skipping %s, because it has exceeded the maximum view count (%d/%d)' % (video_title, view_count, max_views) if max_views is not None and view_count > max_views:
if age_restricted(info_dict.get('age_limit'), self.params.get('age_limit')): return 'Skipping %s, because it has exceeded the maximum view count (%d/%d)' % (video_title, view_count, max_views)
return 'Skipping "%s" because it is age restricted' % video_title if age_restricted(info_dict.get('age_limit'), self.params.get('age_limit')):
if self.in_download_archive(info_dict): return 'Skipping "%s" because it is age restricted' % video_title
return '%s has already been recorded in archive' % video_title if self.in_download_archive(info_dict):
return '%s has already been recorded in archive' % video_title
if not incomplete: if not incomplete:
match_filter = self.params.get('match_filter') match_filter = self.params.get('match_filter')
if match_filter is not None: if match_filter is not None:
ret = match_filter(info_dict) ret = match_filter(info_dict)
if ret is not None: if ret is not None:
return ret return ret
return None
return None reason = check_filter()
if reason is not None:
self.to_screen('[download] ' + reason)
if reason.endswith('has already been recorded in archive') and self.params.get('break_on_existing', False):
raise ExistingVideoReached()
elif self.params.get('break_on_reject', False):
raise RejectedVideoReached()
return reason
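The `check_filter()` refactor above lets a single rejection reason drive both the log message and the new break behaviour. A standalone sketch of that pattern, with greatly simplified filters (the exception names follow the ones imported from utils earlier in this diff):

```python
class ExistingVideoReached(Exception):
    """Raised to stop the run when --break-on-existing is set."""


class RejectedVideoReached(Exception):
    """Raised to stop the run when --break-on-reject is set."""


def match_entry(info, params, in_archive):
    """Return a rejection reason (or None if the video should be
    downloaded), raising when the relevant break option is set.
    Standalone sketch only; the real method checks many more filters."""
    def check_filter():
        if in_archive:
            return '%s has already been recorded in archive' % info.get('id')
        min_views = params.get('min_views')
        if min_views is not None and info.get('view_count', 0) < min_views:
            return 'Skipping %s, has not reached minimum view count' % info.get('id')
        return None

    reason = check_filter()
    if reason is not None:
        if 'recorded in archive' in reason and params.get('break_on_existing'):
            raise ExistingVideoReached()
        if params.get('break_on_reject'):
            raise RejectedVideoReached()
    return reason
```

The caller only needs to treat a non-None return as "skip this video"; the exceptions propagate up to abort the whole run.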
@staticmethod @staticmethod
def add_extra_info(info_dict, extra_info): def add_extra_info(info_dict, extra_info):
@@ -886,7 +909,7 @@ class YoutubeDL(object):
self.report_error(msg) self.report_error(msg)
except ExtractorError as e: # An error we somewhat expected except ExtractorError as e: # An error we somewhat expected
self.report_error(compat_str(e), e.format_traceback()) self.report_error(compat_str(e), e.format_traceback())
except MaxDownloadsReached: except (MaxDownloadsReached, ExistingVideoReached, RejectedVideoReached):
raise raise
except Exception as e: except Exception as e:
if self.params.get('ignoreerrors', False): if self.params.get('ignoreerrors', False):
@@ -991,119 +1014,23 @@ class YoutubeDL(object):
return self.process_ie_result( return self.process_ie_result(
new_result, download=download, extra_info=extra_info) new_result, download=download, extra_info=extra_info)
elif result_type in ('playlist', 'multi_video'): elif result_type in ('playlist', 'multi_video'):
# We process each entry in the playlist # Protect from infinite recursion due to recursively nested playlists
playlist = ie_result.get('title') or ie_result.get('id') # (see https://github.com/ytdl-org/youtube-dl/issues/27833)
self.to_screen('[download] Downloading playlist: %s' % playlist) webpage_url = ie_result['webpage_url']
if webpage_url in self._playlist_urls:
playlist_results = []
playliststart = self.params.get('playliststart', 1) - 1
playlistend = self.params.get('playlistend')
# For backwards compatibility, interpret -1 as whole list
if playlistend == -1:
playlistend = None
playlistitems_str = self.params.get('playlist_items')
playlistitems = None
if playlistitems_str is not None:
def iter_playlistitems(format):
for string_segment in format.split(','):
if '-' in string_segment:
start, end = string_segment.split('-')
for item in range(int(start), int(end) + 1):
yield int(item)
else:
yield int(string_segment)
playlistitems = orderedSet(iter_playlistitems(playlistitems_str))
ie_entries = ie_result['entries']
def make_playlistitems_entries(list_ie_entries):
num_entries = len(list_ie_entries)
return [
list_ie_entries[i - 1] for i in playlistitems
if -num_entries <= i - 1 < num_entries]
def report_download(num_entries):
self.to_screen( self.to_screen(
'[%s] playlist %s: Downloading %d videos' % '[download] Skipping already downloaded playlist: %s'
(ie_result['extractor'], playlist, num_entries)) % (ie_result.get('title') or ie_result.get('id')))
return
if isinstance(ie_entries, list): self._playlist_level += 1
n_all_entries = len(ie_entries) self._playlist_urls.add(webpage_url)
if playlistitems: try:
entries = make_playlistitems_entries(ie_entries) return self.__process_playlist(ie_result, download)
else: finally:
entries = ie_entries[playliststart:playlistend] self._playlist_level -= 1
n_entries = len(entries) if not self._playlist_level:
self.to_screen( self._playlist_urls.clear()
'[%s] playlist %s: Collected %d video ids (downloading %d of them)' %
(ie_result['extractor'], playlist, n_all_entries, n_entries))
elif isinstance(ie_entries, PagedList):
if playlistitems:
entries = []
for item in playlistitems:
entries.extend(ie_entries.getslice(
item - 1, item
))
else:
entries = ie_entries.getslice(
playliststart, playlistend)
n_entries = len(entries)
report_download(n_entries)
else: # iterable
if playlistitems:
entries = make_playlistitems_entries(list(itertools.islice(
ie_entries, 0, max(playlistitems))))
else:
entries = list(itertools.islice(
ie_entries, playliststart, playlistend))
n_entries = len(entries)
report_download(n_entries)
if self.params.get('playlistreverse', False):
entries = entries[::-1]
if self.params.get('playlistrandom', False):
random.shuffle(entries)
x_forwarded_for = ie_result.get('__x_forwarded_for_ip')
for i, entry in enumerate(entries, 1):
self.to_screen('[download] Downloading video %s of %s' % (i, n_entries))
# This __x_forwarded_for_ip thing is a bit ugly but requires
# minimal changes
if x_forwarded_for:
entry['__x_forwarded_for_ip'] = x_forwarded_for
extra = {
'n_entries': n_entries,
'playlist': playlist,
'playlist_id': ie_result.get('id'),
'playlist_title': ie_result.get('title'),
'playlist_uploader': ie_result.get('uploader'),
'playlist_uploader_id': ie_result.get('uploader_id'),
'playlist_index': playlistitems[i - 1] if playlistitems else i + playliststart,
'extractor': ie_result['extractor'],
'webpage_url': ie_result['webpage_url'],
'webpage_url_basename': url_basename(ie_result['webpage_url']),
'extractor_key': ie_result['extractor_key'],
}
reason = self._match_entry(entry, incomplete=True)
if reason is not None:
if reason.endswith('has already been recorded in the archive') and self.params.get('break_on_existing'):
print('[download] tried downloading a file that\'s already in the archive, stopping since --break-on-existing is set.')
break
else:
self.to_screen('[download] ' + reason)
continue
entry_result = self.__process_iterable_entry(entry, download, extra)
# TODO: skip failed (empty) entries?
playlist_results.append(entry_result)
ie_result['entries'] = playlist_results
self.to_screen('[download] Finished downloading playlist: %s' % playlist)
return ie_result
elif result_type == 'compat_list': elif result_type == 'compat_list':
self.report_warning( self.report_warning(
'Extractor %s returned a compat_list result. ' 'Extractor %s returned a compat_list result. '
@@ -1128,6 +1055,115 @@ class YoutubeDL(object):
else: else:
raise Exception('Invalid result type: %s' % result_type) raise Exception('Invalid result type: %s' % result_type)
def __process_playlist(self, ie_result, download):
# We process each entry in the playlist
playlist = ie_result.get('title') or ie_result.get('id')
self.to_screen('[download] Downloading playlist: %s' % playlist)
playlist_results = []
playliststart = self.params.get('playliststart', 1) - 1
playlistend = self.params.get('playlistend')
# For backwards compatibility, interpret -1 as whole list
if playlistend == -1:
playlistend = None
playlistitems_str = self.params.get('playlist_items')
playlistitems = None
if playlistitems_str is not None:
def iter_playlistitems(format):
for string_segment in format.split(','):
if '-' in string_segment:
start, end = string_segment.split('-')
for item in range(int(start), int(end) + 1):
yield int(item)
else:
yield int(string_segment)
playlistitems = orderedSet(iter_playlistitems(playlistitems_str))
ie_entries = ie_result['entries']
def make_playlistitems_entries(list_ie_entries):
num_entries = len(list_ie_entries)
return [
list_ie_entries[i - 1] for i in playlistitems
if -num_entries <= i - 1 < num_entries]
def report_download(num_entries):
self.to_screen(
'[%s] playlist %s: Downloading %d videos' %
(ie_result['extractor'], playlist, num_entries))
if isinstance(ie_entries, list):
n_all_entries = len(ie_entries)
if playlistitems:
entries = make_playlistitems_entries(ie_entries)
else:
entries = ie_entries[playliststart:playlistend]
n_entries = len(entries)
self.to_screen(
'[%s] playlist %s: Collected %d video ids (downloading %d of them)' %
(ie_result['extractor'], playlist, n_all_entries, n_entries))
elif isinstance(ie_entries, PagedList):
if playlistitems:
entries = []
for item in playlistitems:
entries.extend(ie_entries.getslice(
item - 1, item
))
else:
entries = ie_entries.getslice(
playliststart, playlistend)
n_entries = len(entries)
report_download(n_entries)
else: # iterable
if playlistitems:
entries = make_playlistitems_entries(list(itertools.islice(
ie_entries, 0, max(playlistitems))))
else:
entries = list(itertools.islice(
ie_entries, playliststart, playlistend))
n_entries = len(entries)
report_download(n_entries)
if self.params.get('playlistreverse', False):
entries = entries[::-1]
if self.params.get('playlistrandom', False):
random.shuffle(entries)
x_forwarded_for = ie_result.get('__x_forwarded_for_ip')
for i, entry in enumerate(entries, 1):
self.to_screen('[download] Downloading video %s of %s' % (i, n_entries))
# This __x_forwarded_for_ip thing is a bit ugly but requires
# minimal changes
if x_forwarded_for:
entry['__x_forwarded_for_ip'] = x_forwarded_for
extra = {
'n_entries': n_entries,
'playlist': playlist,
'playlist_id': ie_result.get('id'),
'playlist_title': ie_result.get('title'),
'playlist_uploader': ie_result.get('uploader'),
'playlist_uploader_id': ie_result.get('uploader_id'),
'playlist_index': playlistitems[i - 1] if playlistitems else i + playliststart,
'extractor': ie_result['extractor'],
'webpage_url': ie_result['webpage_url'],
'webpage_url_basename': url_basename(ie_result['webpage_url']),
'extractor_key': ie_result['extractor_key'],
}
if self._match_entry(entry, incomplete=True) is not None:
continue
entry_result = self.__process_iterable_entry(entry, download, extra)
# TODO: skip failed (empty) entries?
playlist_results.append(entry_result)
ie_result['entries'] = playlist_results
self.to_screen('[download] Finished downloading playlist: %s' % playlist)
return ie_result
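For reference, the `--playlist-items` syntax handled by `iter_playlistitems` above expands comma-separated values and inclusive `start-end` ranges. A minimal standalone sketch (`parse_playlist_items` is a hypothetical name; unlike the real code, it does not de-duplicate via `orderedSet`):

```python
def parse_playlist_items(spec):
    """Expand a --playlist-items spec such as '1-3,7' into a list of
    1-based playlist indices (sketch mirroring iter_playlistitems)."""
    items = []
    for segment in spec.split(','):
        if '-' in segment:
            # Inclusive range, e.g. '1-3' -> 1, 2, 3
            start, end = segment.split('-')
            items.extend(range(int(start), int(end) + 1))
        else:
            items.append(int(segment))
    return items
```

For example, `parse_playlist_items('1-3,7')` yields the indices 1, 2, 3 and 7.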
@__handle_extraction_exceptions @__handle_extraction_exceptions
def __process_iterable_entry(self, entry, download, extra_info): def __process_iterable_entry(self, entry, download, extra_info):
return self.process_ie_result( return self.process_ie_result(
@@ -1861,9 +1897,7 @@ class YoutubeDL(object):
if 'format' not in info_dict: if 'format' not in info_dict:
info_dict['format'] = info_dict['ext'] info_dict['format'] = info_dict['ext']
reason = self._match_entry(info_dict, incomplete=False) if self._match_entry(info_dict, incomplete=False) is not None:
if reason is not None:
self.to_screen('[download] ' + reason)
return return
self._num_downloads += 1 self._num_downloads += 1
@@ -1898,7 +1932,7 @@ class YoutubeDL(object):
if self.params.get('writedescription', False): if self.params.get('writedescription', False):
descfn = replace_extension(filename, 'description', info_dict.get('ext')) descfn = replace_extension(filename, 'description', info_dict.get('ext'))
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(descfn)): if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(descfn)):
self.to_screen('[info] Video description is already present') self.to_screen('[info] Video description is already present')
elif info_dict.get('description') is None: elif info_dict.get('description') is None:
self.report_warning('There\'s no description to write.') self.report_warning('There\'s no description to write.')
@@ -1913,7 +1947,7 @@ class YoutubeDL(object):
if self.params.get('writeannotations', False): if self.params.get('writeannotations', False):
annofn = replace_extension(filename, 'annotations.xml', info_dict.get('ext')) annofn = replace_extension(filename, 'annotations.xml', info_dict.get('ext'))
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(annofn)): if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(annofn)):
self.to_screen('[info] Video annotations are already present') self.to_screen('[info] Video annotations are already present')
elif not info_dict.get('annotations'): elif not info_dict.get('annotations'):
self.report_warning('There are no annotations to write.') self.report_warning('There are no annotations to write.')
@@ -1947,7 +1981,7 @@ class YoutubeDL(object):
for sub_lang, sub_info in subtitles.items(): for sub_lang, sub_info in subtitles.items():
sub_format = sub_info['ext'] sub_format = sub_info['ext']
sub_filename = subtitles_filename(filename, sub_lang, sub_format, info_dict.get('ext')) sub_filename = subtitles_filename(filename, sub_lang, sub_format, info_dict.get('ext'))
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(sub_filename)): if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(sub_filename)):
self.to_screen('[info] Video subtitle %s.%s is already present' % (sub_lang, sub_format)) self.to_screen('[info] Video subtitle %s.%s is already present' % (sub_lang, sub_format))
else: else:
self.to_screen('[info] Writing video subtitles to: ' + sub_filename) self.to_screen('[info] Writing video subtitles to: ' + sub_filename)
@@ -2002,7 +2036,7 @@ class YoutubeDL(object):
if self.params.get('writeinfojson', False): if self.params.get('writeinfojson', False):
infofn = replace_extension(filename, 'info.json', info_dict.get('ext')) infofn = replace_extension(filename, 'info.json', info_dict.get('ext'))
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(infofn)): if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(infofn)):
self.to_screen('[info] Video description metadata is already present') self.to_screen('[info] Video description metadata is already present')
else: else:
self.to_screen('[info] Writing video description metadata as JSON to: ' + infofn) self.to_screen('[info] Writing video description metadata as JSON to: ' + infofn)
@@ -2110,11 +2144,15 @@ class YoutubeDL(object):
'Requested formats are incompatible for merge and will be merged into mkv.') 'Requested formats are incompatible for merge and will be merged into mkv.')
# Ensure filename always has a correct extension for successful merge # Ensure filename always has a correct extension for successful merge
filename = '%s.%s' % (filename_wo_ext, info_dict['ext']) filename = '%s.%s' % (filename_wo_ext, info_dict['ext'])
if os.path.exists(encodeFilename(filename)): file_exists = os.path.exists(encodeFilename(filename))
if not self.params.get('overwrites', False) and file_exists:
self.to_screen( self.to_screen(
'[download] %s has already been downloaded and ' '[download] %s has already been downloaded and '
'merged' % filename) 'merged' % filename)
else: else:
if file_exists:
self.report_file_delete(filename)
os.remove(encodeFilename(filename))
for f in requested_formats: for f in requested_formats:
new_info = dict(info_dict) new_info = dict(info_dict)
new_info.update(f) new_info.update(f)
@@ -2131,6 +2169,11 @@ class YoutubeDL(object):
# Even if there were no downloads, it is being merged only now # Even if there were no downloads, it is being merged only now
info_dict['__real_download'] = True info_dict['__real_download'] = True
else: else:
# Delete existing file with --yes-overwrites
if self.params.get('overwrites', False):
if os.path.exists(encodeFilename(filename)):
self.report_file_delete(filename)
os.remove(encodeFilename(filename))
# Just a single file # Just a single file
success, real_download = dl(filename, info_dict) success, real_download = dl(filename, info_dict)
info_dict['__real_download'] = real_download info_dict['__real_download'] = real_download
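The hunks above replace the boolean `nooverwrites` with a tri-state `overwrites` parameter. Per the parameter docstring earlier in this diff, the resulting policy amounts to the following sketch (`should_overwrite` is a hypothetical helper, not part of the codebase):

```python
def should_overwrite(overwrites, is_video_file):
    """Tri-state overwrite policy (sketch): True overwrites everything,
    None (the default) overwrites only non-video files such as metadata,
    and False overwrites nothing."""
    if overwrites is None:
        return not is_video_file
    return bool(overwrites)
```

This is why the metadata checks above test `not self.params.get('overwrites', True)` while the video-download path tests `self.params.get('overwrites', False)`: the two defaults split the None case.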
@@ -2242,7 +2285,13 @@ class YoutubeDL(object):
except UnavailableVideoError: except UnavailableVideoError:
self.report_error('unable to download video') self.report_error('unable to download video')
except MaxDownloadsReached: except MaxDownloadsReached:
self.to_screen('[info] Maximum number of downloaded files reached.') self.to_screen('[info] Maximum number of downloaded files reached')
raise
except ExistingVideoReached:
self.to_screen('[info] Encountered a file that is already in the archive, stopping due to --break-on-existing')
raise
except RejectedVideoReached:
self.to_screen('[info] Encountered a file that did not match filter, stopping due to --break-on-reject')
raise raise
else: else:
if self.params.get('dump_single_json', False): if self.params.get('dump_single_json', False):
@@ -2514,7 +2563,7 @@ class YoutubeDL(object):
self.get_encoding())) self.get_encoding()))
write_string(encoding_str, encoding=None) write_string(encoding_str, encoding=None)
self._write_string('[debug] youtube-dlc version ' + __version__ + '\n') self._write_string('[debug] yt-dlp version ' + __version__ + '\n')
if _LAZY_LOADER: if _LAZY_LOADER:
self._write_string('[debug] Lazy loading extractors enabled' + '\n') self._write_string('[debug] Lazy loading extractors enabled' + '\n')
try: try:
@@ -2563,6 +2612,7 @@ class YoutubeDL(object):
if self.params.get('call_home', False): if self.params.get('call_home', False):
ipaddr = self.urlopen('https://yt-dl.org/ip').read().decode('utf-8') ipaddr = self.urlopen('https://yt-dl.org/ip').read().decode('utf-8')
self._write_string('[debug] Public IP address: %s\n' % ipaddr) self._write_string('[debug] Public IP address: %s\n' % ipaddr)
return
latest_version = self.urlopen( latest_version = self.urlopen(
'https://yt-dl.org/latest/version').read().decode('utf-8') 'https://yt-dl.org/latest/version').read().decode('utf-8')
if version_tuple(latest_version) > version_tuple(__version__): if version_tuple(latest_version) > version_tuple(__version__):
@@ -2660,7 +2710,7 @@ class YoutubeDL(object):
thumb_display_id = '%s ' % t['id'] if len(thumbnails) > 1 else '' thumb_display_id = '%s ' % t['id'] if len(thumbnails) > 1 else ''
t['filename'] = thumb_filename = replace_extension(filename + suffix, thumb_ext, info_dict.get('ext')) t['filename'] = thumb_filename = replace_extension(filename + suffix, thumb_ext, info_dict.get('ext'))
if self.params.get('nooverwrites', False) and os.path.exists(encodeFilename(thumb_filename)): if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(thumb_filename)):
self.to_screen('[%s] %s: Thumbnail %sis already present' % self.to_screen('[%s] %s: Thumbnail %sis already present' %
(info_dict['extractor'], info_dict['id'], thumb_display_id)) (info_dict['extractor'], info_dict['id'], thumb_display_id))
else: else:
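The `ExistingVideoReached` / `RejectedVideoReached` handling in the hunks above is exception-driven loop control: the per-video check raises, and the caller decides whether to stop everything. A minimal standalone sketch (the driver loop and names here are illustrative, not yt-dlp's actual API):

```python
# Minimal sketch of --break-on-existing style control flow: the per-video
# check raises a dedicated exception, and the caller stops the whole loop.
class ExistingVideoReached(Exception):
    """Raised when a video already present in the archive is encountered."""

def process_videos(videos, archive, break_on_existing=True):
    downloaded = []
    try:
        for v in videos:
            if v in archive:
                if break_on_existing:
                    raise ExistingVideoReached(v)
                continue  # default behaviour: skip this one and keep going
            downloaded.append(v)
    except ExistingVideoReached:
        pass  # abort remaining downloads, keep what was already done
    return downloaded

print(process_videos(['a', 'b', 'c'], archive={'b'}))  # ['a']
```

Using an exception (rather than a return flag) lets the abort propagate out of arbitrarily deep playlist/extraction recursion, which is why the real code re-raises it up to `_real_main`.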


@@ -26,11 +26,13 @@ from .utils import (
     decodeOption,
     DEFAULT_OUTTMPL,
     DownloadError,
+    ExistingVideoReached,
     expand_path,
     match_filter_func,
     MaxDownloadsReached,
     preferredencoding,
     read_batch_urls,
+    RejectedVideoReached,
     SameFileError,
     setproctitle,
     std_headers,
@@ -176,6 +178,9 @@ def _real_main(argv=None):
     opts.max_sleep_interval = opts.sleep_interval
 if opts.ap_mso and opts.ap_mso not in MSO_INFO:
     parser.error('Unsupported TV Provider, use --ap-list-mso to get a list of supported TV Providers')
+if opts.overwrites:
+    # --yes-overwrites implies --no-continue
+    opts.continue_dl = False
 def parse_retries(retries):
     if retries in ('inf', 'infinite'):
@@ -391,7 +396,7 @@ def _real_main(argv=None):
 'ignoreerrors': opts.ignoreerrors,
 'force_generic_extractor': opts.force_generic_extractor,
 'ratelimit': opts.ratelimit,
-'nooverwrites': opts.nooverwrites,
+'overwrites': opts.overwrites,
 'retries': opts.retries,
 'fragment_retries': opts.fragment_retries,
 'skip_unavailable_fragments': opts.skip_unavailable_fragments,
@@ -446,6 +451,7 @@ def _real_main(argv=None):
 'age_limit': opts.age_limit,
 'download_archive': download_archive_fn,
 'break_on_existing': opts.break_on_existing,
+'break_on_reject': opts.break_on_reject,
 'cookiefile': opts.cookiefile,
 'nocheckcertificate': opts.no_check_certificate,
 'prefer_insecure': opts.prefer_insecure,
@@ -516,8 +522,8 @@ def _real_main(argv=None):
     retcode = ydl.download_with_info_file(expand_path(opts.load_info_filename))
 else:
     retcode = ydl.download(all_urls)
-except MaxDownloadsReached:
-    ydl.to_screen('--max-download limit reached, aborting.')
+except (MaxDownloadsReached, ExistingVideoReached, RejectedVideoReached):
+    ydl.to_screen('Aborting remaining downloads')
 retcode = 101
 sys.exit(retcode)


@@ -332,7 +332,7 @@ class FileDownloader(object):
     """
     nooverwrites_and_exists = (
-        self.params.get('nooverwrites', False)
+        not self.params.get('overwrites', True)
         and os.path.exists(encodeFilename(filename))
     )
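These hunks flip the negative `nooverwrites` flag into a positive `overwrites` one while preserving the old default. A quick sketch of the equivalent check (the dict here is a stand-in for `self.params`):

```python
# Sketch of the option flip: 'nooverwrites' (default False) becomes
# 'overwrites' (default True), so the skip condition must be negated.
def should_skip_download(params, file_exists):
    # Old form:  params.get('nooverwrites', False) and file_exists
    # New form:  not params.get('overwrites', True) and file_exists
    return not params.get('overwrites', True) and file_exists

assert should_skip_download({}, file_exists=True) is False            # default: overwrite
assert should_skip_download({'overwrites': False}, True) is True      # --no-overwrites: skip
assert should_skip_download({'overwrites': False}, False) is False    # nothing to skip
```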


@@ -10,6 +10,7 @@ import random
 from .common import InfoExtractor
 from ..aes import aes_cbc_decrypt
 from ..compat import (
+    compat_HTTPError,
     compat_b64decode,
     compat_ord,
 )
@@ -18,11 +19,13 @@ from ..utils import (
     bytes_to_long,
     ExtractorError,
     float_or_none,
+    int_or_none,
     intlist_to_bytes,
     long_to_bytes,
     pkcs1pad,
     strip_or_none,
-    urljoin,
+    try_get,
+    unified_strdate,
 )
@@ -31,16 +34,27 @@ class ADNIE(InfoExtractor):
 _VALID_URL = r'https?://(?:www\.)?animedigitalnetwork\.fr/video/[^/]+/(?P<id>\d+)'
 _TEST = {
     'url': 'http://animedigitalnetwork.fr/video/blue-exorcist-kyoto-saga/7778-episode-1-debut-des-hostilites',
-    'md5': 'e497370d847fd79d9d4c74be55575c7a',
+    'md5': '0319c99885ff5547565cacb4f3f9348d',
     'info_dict': {
         'id': '7778',
         'ext': 'mp4',
-        'title': 'Blue Exorcist - Kyôto Saga - Épisode 1',
+        'title': 'Blue Exorcist - Kyôto Saga - Episode 1',
         'description': 'md5:2f7b5aa76edbc1a7a92cedcda8a528d5',
+        'series': 'Blue Exorcist - Kyôto Saga',
+        'duration': 1467,
+        'release_date': '20170106',
+        'comment_count': int,
+        'average_rating': float,
+        'season_number': 2,
+        'episode': 'Début des hostilités',
+        'episode_number': 1,
     }
 }
 _BASE_URL = 'http://animedigitalnetwork.fr'
-_RSA_KEY = (0xc35ae1e4356b65a73b551493da94b8cb443491c0aa092a357a5aee57ffc14dda85326f42d716e539a34542a0d3f363adf16c5ec222d713d5997194030ee2e4f0d1fb328c01a81cf6868c090d50de8e169c6b13d1675b9eeed1cbc51e1fffca9b38af07f37abd790924cd3bee59d0257cfda4fe5f3f0534877e21ce5821447d1b, 65537)
+_API_BASE_URL = 'https://gw.api.animedigitalnetwork.fr/'
+_PLAYER_BASE_URL = _API_BASE_URL + 'player/'
+_RSA_KEY = (0x9B42B08905199A5CCE2026274399CA560ECB209EE9878A708B1C0812E1BB8CB5D1FB7441861147C1A1F2F3A0476DD63A9CAC20D3E983613346850AA6CB38F16DC7D720FD7D86FC6E5B3D5BBC72E14CD0BF9E869F2CEA2CCAD648F1DCE38F1FF916CEFB2D339B64AA0264372344BC775E265E8A852F88144AB0BD9AA06C1A4ABB, 65537)
 _POS_ALIGN_MAP = {
     'start': 1,
     'end': 3,
@@ -54,26 +68,24 @@ class ADNIE(InfoExtractor):
 def _ass_subtitles_timecode(seconds):
     return '%01d:%02d:%02d.%02d' % (seconds / 3600, (seconds % 3600) / 60, seconds % 60, (seconds % 1) * 100)
-def _get_subtitles(self, sub_path, video_id):
-    if not sub_path:
+def _get_subtitles(self, sub_url, video_id):
+    if not sub_url:
         return None
     enc_subtitles = self._download_webpage(
-        urljoin(self._BASE_URL, sub_path),
-        video_id, 'Downloading subtitles location', fatal=False) or '{}'
+        sub_url, video_id, 'Downloading subtitles location', fatal=False) or '{}'
     subtitle_location = (self._parse_json(enc_subtitles, video_id, fatal=False) or {}).get('location')
     if subtitle_location:
         enc_subtitles = self._download_webpage(
-            urljoin(self._BASE_URL, subtitle_location),
-            video_id, 'Downloading subtitles data', fatal=False,
-            headers={'Origin': 'https://animedigitalnetwork.fr'})
+            subtitle_location, video_id, 'Downloading subtitles data',
+            fatal=False, headers={'Origin': 'https://animedigitalnetwork.fr'})
     if not enc_subtitles:
         return None
     # http://animedigitalnetwork.fr/components/com_vodvideo/videojs/adn-vjs.min.js
     dec_subtitles = intlist_to_bytes(aes_cbc_decrypt(
         bytes_to_intlist(compat_b64decode(enc_subtitles[24:])),
-        bytes_to_intlist(binascii.unhexlify(self._K + '4b8ef13ec1872730')),
+        bytes_to_intlist(binascii.unhexlify(self._K + 'ab9f52f5baae7c72')),
         bytes_to_intlist(compat_b64decode(enc_subtitles[:24]))
     ))
     subtitles_json = self._parse_json(
@@ -119,59 +131,76 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
 def _real_extract(self, url):
     video_id = self._match_id(url)
-    webpage = self._download_webpage(url, video_id)
-    player_config = self._parse_json(self._search_regex(
-        r'playerConfig\s*=\s*({.+});', webpage,
-        'player config', default='{}'), video_id, fatal=False)
-    if not player_config:
-        config_url = urljoin(self._BASE_URL, self._search_regex(
-            r'(?:id="player"|class="[^"]*adn-player-container[^"]*")[^>]+data-url="([^"]+)"',
-            webpage, 'config url'))
-        player_config = self._download_json(
-            config_url, video_id,
-            'Downloading player config JSON metadata')['player']
-    video_info = {}
-    video_info_str = self._search_regex(
-        r'videoInfo\s*=\s*({.+});', webpage,
-        'video info', fatal=False)
-    if video_info_str:
-        video_info = self._parse_json(
-            video_info_str, video_id, fatal=False) or {}
-    options = player_config.get('options') or {}
-    metas = options.get('metas') or {}
-    links = player_config.get('links') or {}
-    sub_path = player_config.get('subtitles')
-    error = None
-    if not links:
-        links_url = player_config.get('linksurl') or options['videoUrl']
-        token = options['token']
-        self._K = ''.join([random.choice('0123456789abcdef') for _ in range(16)])
-        message = bytes_to_intlist(json.dumps({
-            'k': self._K,
-            'e': 60,
-            't': token,
-        }))
-        padded_message = intlist_to_bytes(pkcs1pad(message, 128))
-        n, e = self._RSA_KEY
-        encrypted_message = long_to_bytes(pow(bytes_to_long(padded_message), e, n))
-        authorization = base64.b64encode(encrypted_message).decode()
-        links_data = self._download_json(
-            urljoin(self._BASE_URL, links_url), video_id,
-            'Downloading links JSON metadata', headers={
-                'Authorization': 'Bearer ' + authorization,
-            })
-        links = links_data.get('links') or {}
-        metas = metas or links_data.get('meta') or {}
-        sub_path = sub_path or links_data.get('subtitles') or \
-            'index.php?option=com_vodapi&task=subtitles.getJSON&format=json&id=' + video_id
-        sub_path += '&token=' + token
-        error = links_data.get('error')
-    title = metas.get('title') or video_info['title']
+    video_base_url = self._PLAYER_BASE_URL + 'video/%s/' % video_id
+    player = self._download_json(
+        video_base_url + 'configuration', video_id,
+        'Downloading player config JSON metadata')['player']
+    options = player['options']
+    user = options['user']
+    if not user.get('hasAccess'):
+        raise ExtractorError(
+            'This video is only available for paying users', expected=True)
+        # self.raise_login_required() # FIXME: Login is not implemented
+    token = self._download_json(
+        user.get('refreshTokenUrl') or (self._PLAYER_BASE_URL + 'refresh/token'),
+        video_id, 'Downloading access token', headers={
+            'x-player-refresh-token': user['refreshToken']
+        }, data=b'')['token']
+    links_url = try_get(options, lambda x: x['video']['url']) or (video_base_url + 'link')
+    self._K = ''.join([random.choice('0123456789abcdef') for _ in range(16)])
+    message = bytes_to_intlist(json.dumps({
+        'k': self._K,
+        't': token,
+    }))
+    # Sometimes authentication fails for no good reason, retry with
+    # a different random padding
+    links_data = None
+    for _ in range(3):
         padded_message = intlist_to_bytes(pkcs1pad(message, 128))
         n, e = self._RSA_KEY
         encrypted_message = long_to_bytes(pow(bytes_to_long(padded_message), e, n))
         authorization = base64.b64encode(encrypted_message).decode()
+        try:
+            links_data = self._download_json(
+                links_url, video_id, 'Downloading links JSON metadata', headers={
+                    'X-Player-Token': authorization
+                }, query={
+                    'freeWithAds': 'true',
+                    'adaptive': 'false',
+                    'withMetadata': 'true',
+                    'source': 'Web'
+                })
+            break
+        except ExtractorError as e:
+            if not isinstance(e.cause, compat_HTTPError):
+                raise e
+            if e.cause.code == 401:
+                # This usually goes away with a different random pkcs1pad, so retry
+                continue
+            error = self._parse_json(e.cause.read(), video_id)
+            message = error.get('message')
+            if e.cause.code == 403 and error.get('code') == 'player-bad-geolocation-country':
+                self.raise_geo_restricted(msg=message)
+            else:
+                raise ExtractorError(message)
+    else:
+        raise ExtractorError('Giving up retrying')
+    links = links_data.get('links') or {}
+    metas = links_data.get('metadata') or {}
+    sub_url = (links.get('subtitles') or {}).get('all')
+    video_info = links_data.get('video') or {}
+    title = metas['title']
     formats = []
-    for format_id, qualities in links.items():
+    for format_id, qualities in (links.get('streaming') or {}).items():
         if not isinstance(qualities, dict):
             continue
         for quality, load_balancer_url in qualities.items():
@@ -189,19 +218,26 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''
     for f in m3u8_formats:
         f['language'] = 'fr'
     formats.extend(m3u8_formats)
-if not error:
-    error = options.get('error')
-if not formats and error:
-    raise ExtractorError('%s said: %s' % (self.IE_NAME, error), expected=True)
 self._sort_formats(formats)
+video = (self._download_json(
+    self._API_BASE_URL + 'video/%s' % video_id, video_id,
+    'Downloading additional video metadata', fatal=False) or {}).get('video') or {}
+show = video.get('show') or {}
 return {
     'id': video_id,
     'title': title,
-    'description': strip_or_none(metas.get('summary') or video_info.get('resume')),
-    'thumbnail': video_info.get('image'),
+    'description': strip_or_none(metas.get('summary') or video.get('summary')),
+    'thumbnail': video_info.get('image') or player.get('image'),
     'formats': formats,
-    'subtitles': self.extract_subtitles(sub_path, video_id),
-    'episode': metas.get('subtitle') or video_info.get('videoTitle'),
-    'series': video_info.get('playlistTitle'),
+    'subtitles': self.extract_subtitles(sub_url, video_id),
+    'episode': metas.get('subtitle') or video.get('name'),
+    'episode_number': int_or_none(video.get('shortNumber')),
+    'series': show.get('title'),
+    'season_number': int_or_none(video.get('season')),
+    'duration': int_or_none(video_info.get('duration') or video.get('duration')),
+    'release_date': unified_strdate(video.get('releaseDate')),
+    'average_rating': float_or_none(video.get('rating') or metas.get('rating')),
+    'comment_count': int_or_none(video.get('commentsCount')),
 }
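The retry loop in this extractor works because `pkcs1pad` injects fresh random filler bytes on every attempt, so a spurious 401 can clear on the next try. A simplified stand-in for the padding helper (yt-dlp's real one lives in `..utils`; the randint range here is idealized):

```python
import random

# Simplified PKCS#1 v1.5 padding as used before RSA-encrypting the token
# message: 0x00 0x02 <random nonzero filler> 0x00 <data>, as an int list.
def pkcs1pad(data, length):
    if len(data) > length - 11:
        raise ValueError('Input is too long for PKCS#1 padding')
    filler = [random.randint(1, 255) for _ in range(length - len(data) - 3)]
    return [0, 2] + filler + [0] + data

msg = list(b'{"k": "0123456789abcdef", "t": "tok"}')
padded = pkcs1pad(msg, 128)
assert len(padded) == 128          # fixed block size for the 1024-bit key
assert padded[:2] == [0, 2]        # PKCS#1 v1.5 block type 2 header
assert padded[-len(msg):] == msg   # message sits at the end, after the 0x00
```

Because the filler is random, the RSA ciphertext differs on every attempt even though the message is identical, which is what the "retry with a different random padding" comment in the diff relies on.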


@@ -9,6 +9,7 @@ from .common import InfoExtractor
 from ..compat import (
     compat_kwargs,
     compat_urlparse,
+    compat_getpass
 )
 from ..utils import (
     unescapeHTML,
@@ -60,6 +61,10 @@ MSO_INFO = {
     'username_field': 'IDToken1',
     'password_field': 'IDToken2',
 },
+'Philo': {
+    'name': 'Philo',
+    'username_field': 'ident'
+},
 'Verizon': {
     'name': 'Verizon FiOS',
     'username_field': 'IDToken1',
@@ -1467,6 +1472,22 @@ class AdobePassIE(InfoExtractor):
     mvpd_confirm_page, urlh = mvpd_confirm_page_res
     if '<button class="submit" value="Resume">Resume</button>' in mvpd_confirm_page:
         post_form(mvpd_confirm_page_res, 'Confirming Login')
+elif mso_id == 'Philo':
+    # Philo has very unique authentication method
+    self._download_webpage(
+        'https://idp.philo.com/auth/init/login_code', video_id, 'Requesting auth code', data=urlencode_postdata({
+            'ident': username,
+            'device': 'web',
+            'send_confirm_link': False,
+            'send_token': True
+        }))
+    philo_code = compat_getpass('Type auth code you have received [Return]: ')
+    self._download_webpage(
+        'https://idp.philo.com/auth/update/login_code', video_id, 'Submitting token', data=urlencode_postdata({
+            'token': philo_code
+        }))
+    mvpd_confirm_page_res = self._download_webpage_handle('https://idp.philo.com/idp/submit', video_id, 'Confirming Philo Login')
+    post_form(mvpd_confirm_page_res, 'Confirming Login')
 elif mso_id == 'Verizon':
     # In general, if you're connecting from a Verizon-assigned IP,
     # you will not actually pass your credentials.
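The Philo flow added above is a two-step one-time-code login rather than a password form. A rough sketch with the HTTP and prompt plumbing abstracted out (`http_post` and `prompt` are hypothetical stand-ins for `_download_webpage` and `compat_getpass`):

```python
# Sketch of Philo's code-based login: request a one-time code for the
# account identifier, prompt the user for it, then submit it back.
# URLs and field names mirror the diff; the callables are stand-ins.
def philo_login(ident, http_post, prompt):
    http_post('https://idp.philo.com/auth/init/login_code', {
        'ident': ident,          # account email or phone number
        'device': 'web',
        'send_confirm_link': False,
        'send_token': True,      # ask for a typed code, not a click-through link
    })
    code = prompt('Type auth code you have received [Return]: ')
    http_post('https://idp.philo.com/auth/update/login_code', {'token': code})

calls = []
philo_login('user@example.com',
            http_post=lambda url, data: calls.append((url, data)),
            prompt=lambda _: '123456')
assert len(calls) == 2 and calls[1][0].endswith('update/login_code')
```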


@@ -0,0 +1,285 @@
# coding: utf-8
from __future__ import unicode_literals
from .common import InfoExtractor
from ..utils import (
ExtractorError,
urlencode_postdata,
int_or_none,
str_or_none,
determine_ext,
)
from ..compat import compat_HTTPError
class AnimeLabBaseIE(InfoExtractor):
_LOGIN_REQUIRED = True
_LOGIN_URL = 'https://www.animelab.com/login'
_NETRC_MACHINE = 'animelab'
def _login(self):
def is_logged_in(login_webpage):
return 'Sign In' not in login_webpage
login_page = self._download_webpage(
self._LOGIN_URL, None, 'Downloading login page')
# Check if already logged in
if is_logged_in(login_page):
return
(username, password) = self._get_login_info()
if username is None and self._LOGIN_REQUIRED:
self.raise_login_required('Login is required to access any AnimeLab content')
login_form = {
'email': username,
'password': password,
}
try:
response = self._download_webpage(
self._LOGIN_URL, None, 'Logging in', 'Wrong login info',
data=urlencode_postdata(login_form),
headers={'Content-Type': 'application/x-www-form-urlencoded'})
except ExtractorError as e:
if isinstance(e.cause, compat_HTTPError) and e.cause.code == 400:
raise ExtractorError('Unable to log in (wrong credentials?)', expected=True)
else:
raise
# if login was successful
if is_logged_in(response):
return
raise ExtractorError('Unable to login (cannot verify if logged in)')
def _real_initialize(self):
self._login()
class AnimeLabIE(AnimeLabBaseIE):
_VALID_URL = r'https?://(?:www\.)?animelab\.com/player/(?P<id>[^/]+)'
# the following tests require authentication, but a free account will suffice
# just set 'usenetrc' to true in test/local_parameters.json if you use a .netrc file
# or you can set 'username' and 'password' there
# the tests also select a specific format so that the same video is downloaded
# regardless of whether the user is premium or not (needs testing on a premium account)
_TEST = {
'url': 'https://www.animelab.com/player/fullmetal-alchemist-brotherhood-episode-42',
'md5': '05bde4b91a5d1ff46ef5b94df05b0f7f',
'info_dict': {
'id': '383',
'ext': 'mp4',
'display_id': 'fullmetal-alchemist-brotherhood-episode-42',
'title': 'Fullmetal Alchemist: Brotherhood - Episode 42 - Signs of a Counteroffensive',
'description': 'md5:103eb61dd0a56d3dfc5dbf748e5e83f4',
'series': 'Fullmetal Alchemist: Brotherhood',
'episode': 'Signs of a Counteroffensive',
'episode_number': 42,
'duration': 1469,
'season': 'Season 1',
'season_number': 1,
'season_id': '38',
},
'params': {
'format': '[format_id=21711_yeshardsubbed_ja-JP][height=480]',
},
'skip': 'All AnimeLab content requires authentication',
}
def _real_extract(self, url):
display_id = self._match_id(url)
# unfortunately we can get different URLs for the same formats
# e.g. if we are using a "free" account so no dubs available
# (so _remove_duplicate_formats is not effective)
# so we use a dictionary as a workaround
formats = {}
for language_option_url in ('https://www.animelab.com/player/%s/subtitles',
'https://www.animelab.com/player/%s/dubbed'):
actual_url = language_option_url % display_id
webpage = self._download_webpage(actual_url, display_id, 'Downloading URL ' + actual_url)
video_collection = self._parse_json(self._search_regex(r'new\s+?AnimeLabApp\.VideoCollection\s*?\((.*?)\);', webpage, 'AnimeLab VideoCollection'), display_id)
position = int_or_none(self._search_regex(r'playlistPosition\s*?=\s*?(\d+)', webpage, 'Playlist Position'))
raw_data = video_collection[position]['videoEntry']
video_id = str_or_none(raw_data['id'])
# create a title from many sources (while grabbing other info)
# TODO use more fallback sources to get some of these
series = raw_data.get('showTitle')
video_type = raw_data.get('videoEntryType', {}).get('name')
episode_number = raw_data.get('episodeNumber')
episode_name = raw_data.get('name')
title_parts = (series, video_type, episode_number, episode_name)
if None not in title_parts:
title = '%s - %s %s - %s' % title_parts
else:
title = episode_name
description = raw_data.get('synopsis') or self._og_search_description(webpage, default=None)
duration = int_or_none(raw_data.get('duration'))
thumbnail_data = raw_data.get('images', [])
thumbnails = []
for thumbnail in thumbnail_data:
for instance in thumbnail['imageInstances']:
image_data = instance.get('imageInfo', {})
thumbnails.append({
'id': str_or_none(image_data.get('id')),
'url': image_data.get('fullPath'),
'width': image_data.get('width'),
'height': image_data.get('height'),
})
season_data = raw_data.get('season', {}) or {}
season = str_or_none(season_data.get('name'))
season_number = int_or_none(season_data.get('seasonNumber'))
season_id = str_or_none(season_data.get('id'))
for video_data in raw_data['videoList']:
current_video_list = {}
current_video_list['language'] = video_data.get('language', {}).get('languageCode')
is_hardsubbed = video_data.get('hardSubbed')
for video_instance in video_data['videoInstances']:
httpurl = video_instance.get('httpUrl')
url = httpurl if httpurl else video_instance.get('rtmpUrl')
if url is None:
# this video format is unavailable to the user (not premium etc.)
continue
current_format = current_video_list.copy()
format_id_parts = []
format_id_parts.append(str_or_none(video_instance.get('id')))
if is_hardsubbed is not None:
if is_hardsubbed:
format_id_parts.append('yeshardsubbed')
else:
format_id_parts.append('nothardsubbed')
format_id_parts.append(current_format['language'])
format_id = '_'.join([x for x in format_id_parts if x is not None])
ext = determine_ext(url)
if ext == 'm3u8':
for format_ in self._extract_m3u8_formats(
url, video_id, m3u8_id=format_id, fatal=False):
formats[format_['format_id']] = format_
continue
elif ext == 'mpd':
for format_ in self._extract_mpd_formats(
url, video_id, mpd_id=format_id, fatal=False):
formats[format_['format_id']] = format_
continue
current_format['url'] = url
quality_data = video_instance.get('videoQuality')
if quality_data:
quality = quality_data.get('name') or quality_data.get('description')
else:
quality = None
height = None
if quality:
height = int_or_none(self._search_regex(r'(\d+)p?$', quality, 'Video format height', default=None))
if height is None:
self.report_warning('Could not get height of video')
else:
current_format['height'] = height
current_format['format_id'] = format_id
formats[current_format['format_id']] = current_format
formats = list(formats.values())
self._sort_formats(formats)
return {
'id': video_id,
'display_id': display_id,
'title': title,
'description': description,
'series': series,
'episode': episode_name,
'episode_number': int_or_none(episode_number),
'thumbnails': thumbnails,
'duration': duration,
'formats': formats,
'season': season,
'season_number': season_number,
'season_id': season_id,
}
class AnimeLabShowsIE(AnimeLabBaseIE):
_VALID_URL = r'https?://(?:www\.)?animelab\.com/shows/(?P<id>[^/]+)'
_TEST = {
'url': 'https://www.animelab.com/shows/attack-on-titan',
'info_dict': {
'id': '45',
'title': 'Attack on Titan',
'description': 'md5:989d95a2677e9309368d5cf39ba91469',
},
'playlist_count': 59,
'skip': 'All AnimeLab content requires authentication',
}
def _real_extract(self, url):
_BASE_URL = 'http://www.animelab.com'
_SHOWS_API_URL = '/api/videoentries/show/videos/'
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id, 'Downloading requested URL')
show_data_str = self._search_regex(r'({"id":.*}),\svideoEntry', webpage, 'AnimeLab show data')
show_data = self._parse_json(show_data_str, display_id)
show_id = str_or_none(show_data.get('id'))
title = show_data.get('name')
description = show_data.get('shortSynopsis') or show_data.get('longSynopsis')
entries = []
for season in show_data['seasons']:
season_id = season['id']
get_data = urlencode_postdata({
'seasonId': season_id,
'limit': 1000,
})
# despite using urlencode_postdata, we are sending a GET request
target_url = _BASE_URL + _SHOWS_API_URL + show_id + "?" + get_data.decode('utf-8')
response = self._download_webpage(
target_url,
None, 'Season id %s' % season_id)
season_data = self._parse_json(response, display_id)
for video_data in season_data['list']:
entries.append(self.url_result(
_BASE_URL + '/player/' + video_data['slug'], 'AnimeLab',
str_or_none(video_data.get('id')), video_data.get('name')
))
return {
'_type': 'playlist',
'id': show_id,
'title': title,
'description': description,
'entries': entries,
}
# TODO implement myqueue
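The `formats` dict in `AnimeLabIE._real_extract` above deduplicates formats that are reached via different page URLs (`/subtitles` and `/dubbed`) and therefore defeat `_remove_duplicate_formats`. The trick in isolation (the helper and sample data here are illustrative, not AnimeLab's API):

```python
# Sketch of dict-based format dedup: keyed by format_id, so the same
# format collected from two page variants is stored only once.
def collect_formats(format_lists):
    formats = {}
    for flist in format_lists:
        for f in flist:
            formats[f['format_id']] = f  # later duplicates overwrite earlier ones
    return list(formats.values())

subs_page = [{'format_id': '1_ja-JP', 'url': 'https://example.invalid/a'}]
dubs_page = [{'format_id': '1_ja-JP', 'url': 'https://example.invalid/a'},
             {'format_id': '2_en-US', 'url': 'https://example.invalid/b'}]
merged = collect_formats([subs_page, dubs_page])
assert sorted(f['format_id'] for f in merged) == ['1_ja-JP', '2_en-US']
```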


@@ -116,8 +116,6 @@ class AnimeOnDemandIE(InfoExtractor):
     r'(?s)<div[^>]+itemprop="description"[^>]*>(.+?)</div>',
     webpage, 'anime description', default=None)
-entries = []
 def extract_info(html, video_id, num=None):
     title, description = [None] * 2
     formats = []
@@ -233,7 +231,7 @@ class AnimeOnDemandIE(InfoExtractor):
     self._sort_formats(info['formats'])
     f = common_info.copy()
     f.update(info)
-    entries.append(f)
+    yield f
 # Extract teaser/trailer only when full episode is not available
 if not info['formats']:
@@ -247,7 +245,7 @@ class AnimeOnDemandIE(InfoExtractor):
     'title': m.group('title'),
     'url': urljoin(url, m.group('href')),
 })
-entries.append(f)
+yield f
 def extract_episodes(html):
     for num, episode_html in enumerate(re.findall(
@@ -275,7 +273,8 @@ class AnimeOnDemandIE(InfoExtractor):
     'episode_number': episode_number,
 }
-extract_entries(episode_html, video_id, common_info)
+for e in extract_entries(episode_html, video_id, common_info):
+    yield e
 def extract_film(html, video_id):
     common_info = {
@@ -283,11 +282,18 @@ class AnimeOnDemandIE(InfoExtractor):
     'title': anime_title,
     'description': anime_description,
 }
-extract_entries(html, video_id, common_info)
+for e in extract_entries(html, video_id, common_info):
+    yield e
-extract_episodes(webpage)
-if not entries:
-    extract_film(webpage, anime_id)
-return self.playlist_result(entries, anime_id, anime_title, anime_description)
+def entries():
+    has_episodes = False
+    for e in extract_episodes(webpage):
+        has_episodes = True
+        yield e
+    if not has_episodes:
+        for e in extract_film(webpage, anime_id):
+            yield e
+return self.playlist_result(
+    entries(), anime_id, anime_title, anime_description)
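This hunk replaces a shared `entries` list with generators, using a flag to decide whether to fall back from episodes to a film. The fallback logic in isolation (toy `extract_*` helpers stand in for the real extraction):

```python
# Sketch of the generator refactor: each extract_* helper yields entries,
# and a flag tracks whether the primary source produced anything before
# falling back to the secondary one.
def extract_episodes():
    yield from []  # pretend the page lists no episodes

def extract_film():
    yield {'title': 'film'}

def entries():
    has_episodes = False
    for e in extract_episodes():
        has_episodes = True
        yield e
    if not has_episodes:
        for e in extract_film():
            yield e

assert list(entries()) == [{'title': 'film'}]
```

The flag is needed because a generator cannot be cheaply re-tested for emptiness the way the old `if not entries:` list check could; the old check also silently stopped working once `entries` became a function object (always truthy), which this refactor avoids.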


@@ -1,27 +1,43 @@
 from __future__ import unicode_literals
+import re
+import json
 from .common import InfoExtractor
+from ..compat import compat_urllib_parse_unquote_plus
 from ..utils import (
+    KNOWN_EXTENSIONS,
+    extract_attributes,
     unified_strdate,
+    unified_timestamp,
     clean_html,
+    dict_get,
+    parse_duration,
+    int_or_none,
+    str_or_none,
+    merge_dicts,
 )
 class ArchiveOrgIE(InfoExtractor):
     IE_NAME = 'archive.org'
-    IE_DESC = 'archive.org videos'
-    _VALID_URL = r'https?://(?:www\.)?archive\.org/(?:details|embed)/(?P<id>[^/?#]+)(?:[?].*)?$'
+    IE_DESC = 'archive.org video and audio'
+    _VALID_URL = r'https?://(?:www\.)?archive\.org/(?:details|embed)/(?P<id>[^?#]+)(?:[?].*)?$'
     _TESTS = [{
         'url': 'http://archive.org/details/XD300-23_68HighlightsAResearchCntAugHumanIntellect',
         'md5': '8af1d4cf447933ed3c7f4871162602db',
         'info_dict': {
             'id': 'XD300-23_68HighlightsAResearchCntAugHumanIntellect',
-            'ext': 'ogg',
+            'ext': 'ogv',
             'title': '1968 Demo - FJCC Conference Presentation Reel #1',
             'description': 'md5:da45c349df039f1cc8075268eb1b5c25',
-            'upload_date': '19681210',
-            'uploader': 'SRI International'
-        }
+            'release_date': '19681210',
+            'timestamp': 1268695290,
+            'upload_date': '20100315',
+            'creator': 'SRI International',
+            'uploader': 'laura@archive.org',
+        },
     }, {
         'url': 'https://archive.org/details/Cops1922',
         'md5': '0869000b4ce265e8ca62738b336b268a',
@@ -29,37 +45,199 @@ class ArchiveOrgIE(InfoExtractor):
         'id': 'Cops1922',
         'ext': 'mp4',
         'title': 'Buster Keaton\'s "Cops" (1922)',
-        'description': 'md5:89e7c77bf5d965dd5c0372cfb49470f6',
-    }
+        'description': 'md5:43a603fd6c5b4b90d12a96b921212b9c',
+        'uploader': 'yorkmba99@hotmail.com',
+        'timestamp': 1387699629,
+        'upload_date': "20131222",
+    },
 }, {
     'url': 'http://archive.org/embed/XD300-23_68HighlightsAResearchCntAugHumanIntellect',
     'only_matching': True,
}, {
'url': 'https://archive.org/details/Election_Ads',
'md5': '284180e857160cf866358700bab668a3',
'info_dict': {
'id': 'Election_Ads/Commercial-JFK1960ElectionAdCampaignJingle.mpg',
'title': 'Commercial-JFK1960ElectionAdCampaignJingle.mpg',
'ext': 'mp4',
},
}, {
'url': 'https://archive.org/details/Election_Ads/Commercial-Nixon1960ElectionAdToughonDefense.mpg',
'md5': '7915213ef02559b5501fe630e1a53f59',
'info_dict': {
'id': 'Election_Ads/Commercial-Nixon1960ElectionAdToughonDefense.mpg',
'title': 'Commercial-Nixon1960ElectionAdToughonDefense.mpg',
'ext': 'mp4',
'timestamp': 1205588045,
'uploader': 'mikedavisstripmaster@yahoo.com',
'description': '1960 Presidential Campaign Election Commercials John F Kennedy, Richard M Nixon',
'upload_date': '20080315',
},
}, {
'url': 'https://archive.org/details/gd1977-05-08.shure57.stevenson.29303.flac16',
'md5': '7d07ffb42aba6537c28e053efa4b54c9',
'info_dict': {
'id': 'gd1977-05-08.shure57.stevenson.29303.flac16/gd1977-05-08d01t01.flac',
'title': 'Turning',
'ext': 'flac',
},
}, {
'url': 'https://archive.org/details/gd1977-05-08.shure57.stevenson.29303.flac16/gd1977-05-08d01t07.flac',
'md5': 'a07cd8c6ab4ee1560f8a0021717130f3',
'info_dict': {
'id': 'gd1977-05-08.shure57.stevenson.29303.flac16/gd1977-05-08d01t07.flac',
'title': 'Deal',
'ext': 'flac',
'timestamp': 1205895624,
'uploader': 'mvernon54@yahoo.com',
'description': 'md5:6a31f1996db0aa0fc9da6d6e708a1bb0',
'upload_date': '20080319',
'location': 'Barton Hall - Cornell University',
},
}, {
'url': 'https://archive.org/details/lp_the-music-of-russia_various-artists-a-askaryan-alexander-melik',
'md5': '7cb019baa9b332e82ea7c10403acd180',
'info_dict': {
'id': 'lp_the-music-of-russia_various-artists-a-askaryan-alexander-melik/disc1/01.01. Bells Of Rostov.mp3',
'title': 'Bells Of Rostov',
'ext': 'mp3',
},
}, {
'url': 'https://archive.org/details/lp_the-music-of-russia_various-artists-a-askaryan-alexander-melik/disc1/02.02.+Song+And+Chorus+In+The+Polovetsian+Camp+From+%22Prince+Igor%22+(Act+2%2C+Scene+1).mp3',
'md5': '1d0aabe03edca83ca58d9ed3b493a3c3',
'info_dict': {
'id': 'lp_the-music-of-russia_various-artists-a-askaryan-alexander-melik/disc1/02.02. Song And Chorus In The Polovetsian Camp From "Prince Igor" (Act 2, Scene 1).mp3',
'title': 'Song And Chorus In The Polovetsian Camp From "Prince Igor" (Act 2, Scene 1)',
'ext': 'mp3',
'timestamp': 1569662587,
'uploader': 'associate-joygen-odiongan@archive.org',
'description': 'md5:012b2d668ae753be36896f343d12a236',
'upload_date': '20190928',
},
    }]
-    def _real_extract(self, url):
-        video_id = self._match_id(url)
-        webpage = self._download_webpage(
-            'http://archive.org/embed/' + video_id, video_id)
-        jwplayer_playlist = self._parse_json(self._search_regex(
-            r"(?s)Play\('[^']+'\s*,\s*(\[.+\])\s*,\s*{.*?}\)",
-            webpage, 'jwplayer playlist'), video_id)
-        info = self._parse_jwplayer_data(
-            {'playlist': jwplayer_playlist}, video_id, base_url=url)
-
-        def get_optional(metadata, field):
-            return metadata.get(field, [None])[0]
+    @staticmethod
+    def _playlist_data(webpage):
+        element = re.findall(r'''(?xs)
+            <input
+            (?:\s+[a-zA-Z0-9:._-]+(?:=[a-zA-Z0-9:._-]*|="[^"]*"|='[^']*'|))*?
+            \s+class=['"]?js-play8-playlist['"]?
+            (?:\s+[a-zA-Z0-9:._-]+(?:=[a-zA-Z0-9:._-]*|="[^"]*"|='[^']*'|))*?
+            \s*/>
+        ''', webpage)[0]
+        return json.loads(extract_attributes(element)['value'])
def _real_extract(self, url):
video_id = compat_urllib_parse_unquote_plus(self._match_id(url))
identifier, entry_id = (video_id.split('/', 1) + [None])[:2]
# Archive.org metadata API doesn't clearly demarcate playlist entries
# or subtitle tracks, so we get them from the embeddable player.
embed_page = self._download_webpage(
'https://archive.org/embed/' + identifier, identifier)
playlist = self._playlist_data(embed_page)
entries = {}
for p in playlist:
# If the user specified a playlist entry in the URL, ignore the
# rest of the playlist.
if entry_id and p['orig'] != entry_id:
continue
entries[p['orig']] = {
'formats': [],
'thumbnails': [],
'artist': p.get('artist'),
'track': p.get('title'),
'subtitles': {}}
for track in p.get('tracks', []):
if track['kind'] != 'subtitles':
continue
entries[p['orig']][track['label']] = {
'url': 'https://archive.org/' + track['file'].lstrip('/')}
-        metadata = self._download_json(
-            'http://archive.org/details/' + video_id, video_id, query={
-                'output': 'json',
-            })['metadata']
-        info.update({
-            'title': get_optional(metadata, 'title') or info.get('title'),
-            'description': clean_html(get_optional(metadata, 'description')),
-        })
-        if info.get('_type') != 'playlist':
-            info.update({
-                'uploader': get_optional(metadata, 'creator'),
-                'upload_date': unified_strdate(get_optional(metadata, 'date')),
-            })
+        metadata = self._download_json(
+            'http://archive.org/metadata/' + identifier, identifier)
+        m = metadata['metadata']
+        identifier = m['identifier']
+
+        info = {
+            'id': identifier,
+            'title': m['title'],
+            'description': clean_html(m.get('description')),
+            'uploader': dict_get(m, ['uploader', 'adder']),
+            'creator': m.get('creator'),
+            'license': m.get('licenseurl'),
+            'release_date': unified_strdate(m.get('date')),
+            'timestamp': unified_timestamp(dict_get(m, ['publicdate', 'addeddate'])),
+            'webpage_url': 'https://archive.org/details/' + identifier,
+            'location': m.get('venue'),
+            'release_year': int_or_none(m.get('year'))}
for f in metadata['files']:
if f['name'] in entries:
entries[f['name']] = merge_dicts(entries[f['name']], {
'id': identifier + '/' + f['name'],
'title': f.get('title') or f['name'],
'display_id': f['name'],
'description': clean_html(f.get('description')),
'creator': f.get('creator'),
'duration': parse_duration(f.get('length')),
'track_number': int_or_none(f.get('track')),
'album': f.get('album'),
'discnumber': int_or_none(f.get('disc')),
'release_year': int_or_none(f.get('year'))})
entry = entries[f['name']]
elif f.get('original') in entries:
entry = entries[f['original']]
else:
continue
if f.get('format') == 'Thumbnail':
entry['thumbnails'].append({
'id': f['name'],
'url': 'https://archive.org/download/' + identifier + '/' + f['name'],
'width': int_or_none(f.get('width')),
'height': int_or_none(f.get('width')),
'filesize': int_or_none(f.get('size'))})
extension = (f['name'].rsplit('.', 1) + [None])[1]
if extension in KNOWN_EXTENSIONS:
entry['formats'].append({
'url': 'https://archive.org/download/' + identifier + '/' + f['name'],
'format': f.get('format'),
'width': int_or_none(f.get('width')),
'height': int_or_none(f.get('height')),
'filesize': int_or_none(f.get('size')),
'protocol': 'https'})
# Sort available formats by filesize
for entry in entries.values():
entry['formats'] = list(sorted(entry['formats'], key=lambda x: x.get('filesize', -1)))
if len(entries) == 1:
# If there's only one item, use it as the main info dict
only_video = entries[list(entries.keys())[0]]
if entry_id:
info = merge_dicts(only_video, info)
else:
info = merge_dicts(info, only_video)
else:
# Otherwise, we have a playlist.
info['_type'] = 'playlist'
info['entries'] = list(entries.values())
if metadata.get('reviews'):
info['comments'] = []
for review in metadata['reviews']:
info['comments'].append({
'id': review.get('review_id'),
'author': review.get('reviewer'),
'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'),
'timestamp': unified_timestamp(review.get('createdate')),
'parent': 'root'})
        return info
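As an aside on the new `_real_extract` above: the identifier/entry-id split relies on a small list-padding idiom. A standalone sketch (the function name is ours, not part of the diff):

```python
def split_identifier(video_id):
    # Pad the split result with None so single-segment ids unpack cleanly:
    # 'Election_Ads/Jingle.mpg' -> ('Election_Ads', 'Jingle.mpg')
    # 'Cops1922'                -> ('Cops1922', None)
    identifier, entry_id = (video_id.split('/', 1) + [None])[:2]
    return identifier, entry_id

print(split_identifier('Election_Ads/Commercial-JFK1960ElectionAdCampaignJingle.mpg'))
print(split_identifier('Cops1922'))
```

This avoids a conditional on `'/' in video_id` while still limiting the split to the first slash, so deeper paths stay inside `entry_id`.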


@@ -8,11 +8,14 @@ from ..utils import (
    ExtractorError,
    extract_attributes,
    find_xpath_attr,
+    get_element_by_attribute,
    get_element_by_class,
    int_or_none,
    js_to_json,
    merge_dicts,
+    parse_iso8601,
    smuggle_url,
+    str_to_int,
    unescapeHTML,
)
from .senateisvp import SenateISVPIE
@@ -116,8 +119,30 @@ class CSpanIE(InfoExtractor):
            jwsetup, video_id, require_title=False, m3u8_id='hls',
            base_url=url)
        add_referer(info['formats'])
for subtitles in info['subtitles'].values():
for subtitle in subtitles:
ext = determine_ext(subtitle['url'])
if ext == 'php':
ext = 'vtt'
subtitle['ext'] = ext
        ld_info = self._search_json_ld(webpage, video_id, default={})
-        return merge_dicts(info, ld_info)
+        title = get_element_by_class('video-page-title', webpage) or \
+            self._og_search_title(webpage)
description = get_element_by_attribute('itemprop', 'description', webpage) or \
self._html_search_meta(['og:description', 'description'], webpage)
return merge_dicts(info, ld_info, {
'title': title,
'thumbnail': get_element_by_attribute('itemprop', 'thumbnailUrl', webpage),
'description': description,
'timestamp': parse_iso8601(get_element_by_attribute('itemprop', 'uploadDate', webpage)),
'location': get_element_by_attribute('itemprop', 'contentLocation', webpage),
'duration': int_or_none(self._search_regex(
r'jwsetup\.seclength\s*=\s*(\d+);',
webpage, 'duration', fatal=False)),
'view_count': str_to_int(self._search_regex(
r"<span[^>]+class='views'[^>]*>([\d,]+)\s+Views</span>",
webpage, 'views', fatal=False)),
})
        # Obsolete
        # We first look for clipid, because clipprog always appears before


@@ -46,6 +46,10 @@ from .alura import (
    AluraCourseIE
)
from .amcnetworks import AMCNetworksIE
from .animelab import (
AnimeLabIE,
AnimeLabShowsIE,
)
from .americastestkitchen import AmericasTestKitchenIE
from .animeondemand import AnimeOnDemandIE
from .anvato import AnvatoIE
@@ -547,7 +551,10 @@ from .karaoketv import KaraoketvIE
from .karrierevideos import KarriereVideosIE
from .keezmovies import KeezMoviesIE
from .ketnet import KetnetIE
-from .khanacademy import KhanAcademyIE
+from .khanacademy import (
+    KhanAcademyIE,
+    KhanAcademyUnitIE,
+)
from .kickstarter import KickStarterIE
from .kinja import KinjaEmbedIE
from .kinopoisk import KinoPoiskIE


@@ -1,82 +1,107 @@
-from __future__ import unicode_literals
-
-import re
-
-from .common import InfoExtractor
-from ..utils import (
-    unified_strdate,
-)
-
-
-class KhanAcademyIE(InfoExtractor):
-    _VALID_URL = r'^https?://(?:(?:www|api)\.)?khanacademy\.org/(?P<key>[^/]+)/(?:[^/]+/){,2}(?P<id>[^?#/]+)(?:$|[?#])'
-    IE_NAME = 'KhanAcademy'
-
-    _TESTS = [{
-        'url': 'http://www.khanacademy.org/video/one-time-pad',
-        'md5': '7b391cce85e758fb94f763ddc1bbb979',
-        'info_dict': {
-            'id': 'one-time-pad',
-            'ext': 'webm',
-            'title': 'The one-time pad',
-            'description': 'The perfect cipher',
-            'duration': 176,
-            'uploader': 'Brit Cruise',
-            'uploader_id': 'khanacademy',
-            'upload_date': '20120411',
-        },
-        'add_ie': ['Youtube'],
-    }, {
-        'url': 'https://www.khanacademy.org/math/applied-math/cryptography',
-        'info_dict': {
-            'id': 'cryptography',
-            'title': 'Journey into cryptography',
-            'description': 'How have humans protected their secret messages through history? What has changed today?',
-        },
-        'playlist_mincount': 3,
-    }]
-
-    def _real_extract(self, url):
-        m = re.match(self._VALID_URL, url)
-        video_id = m.group('id')
-
-        if m.group('key') == 'video':
-            data = self._download_json(
-                'http://api.khanacademy.org/api/v1/videos/' + video_id,
-                video_id, 'Downloading video info')
-
-            upload_date = unified_strdate(data['date_added'])
-            uploader = ', '.join(data['author_names'])
-            return {
-                '_type': 'url_transparent',
-                'url': data['url'],
-                'id': video_id,
-                'title': data['title'],
-                'thumbnail': data['image_url'],
-                'duration': data['duration'],
-                'description': data['description'],
-                'uploader': uploader,
-                'upload_date': upload_date,
-            }
-        else:
-            # topic
-            data = self._download_json(
-                'http://api.khanacademy.org/api/v1/topic/' + video_id,
-                video_id, 'Downloading topic info')
-
-            entries = [
-                {
-                    '_type': 'url',
-                    'url': c['url'],
-                    'id': c['id'],
-                    'title': c['title'],
-                }
-                for c in data['children'] if c['kind'] in ('Video', 'Topic')]
-
-            return {
-                '_type': 'playlist',
-                'id': video_id,
-                'title': data['title'],
-                'description': data['description'],
-                'entries': entries,
-            }
+from __future__ import unicode_literals
+
+import json
+
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    parse_iso8601,
+    try_get,
+)
+
+
+class KhanAcademyBaseIE(InfoExtractor):
+    _VALID_URL_TEMPL = r'https?://(?:www\.)?khanacademy\.org/(?P<id>(?:[^/]+/){%s}%s[^?#/&]+)'
+
+    def _parse_video(self, video):
+        return {
+            '_type': 'url_transparent',
+            'url': video['youtubeId'],
+            'id': video.get('slug'),
+            'title': video.get('title'),
+            'thumbnail': video.get('imageUrl') or video.get('thumbnailUrl'),
+            'duration': int_or_none(video.get('duration')),
+            'description': video.get('description'),
+            'ie_key': 'Youtube',
+        }
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        component_props = self._parse_json(self._download_json(
+            'https://www.khanacademy.org/api/internal/graphql',
+            display_id, query={
+                'hash': 1604303425,
+                'variables': json.dumps({
+                    'path': display_id,
+                    'queryParams': '',
+                }),
+            })['data']['contentJson'], display_id)['componentProps']
+        return self._parse_component_props(component_props)
+
+
+class KhanAcademyIE(KhanAcademyBaseIE):
+    IE_NAME = 'khanacademy'
+    _VALID_URL = KhanAcademyBaseIE._VALID_URL_TEMPL % ('4', 'v/')
+    _TEST = {
+        'url': 'https://www.khanacademy.org/computing/computer-science/cryptography/crypt/v/one-time-pad',
+        'md5': '9c84b7b06f9ebb80d22a5c8dedefb9a0',
+        'info_dict': {
+            'id': 'FlIG3TvQCBQ',
+            'ext': 'mp4',
+            'title': 'The one-time pad',
+            'description': 'The perfect cipher',
+            'duration': 176,
+            'uploader': 'Brit Cruise',
+            'uploader_id': 'khanacademy',
+            'upload_date': '20120411',
+            'timestamp': 1334170113,
+            'license': 'cc-by-nc-sa',
+        },
+        'add_ie': ['Youtube'],
+    }
+
+    def _parse_component_props(self, component_props):
+        video = component_props['tutorialPageData']['contentModel']
+        info = self._parse_video(video)
+        author_names = video.get('authorNames')
+        info.update({
+            'uploader': ', '.join(author_names) if author_names else None,
+            'timestamp': parse_iso8601(video.get('dateAdded')),
+            'license': video.get('kaUserLicense'),
+        })
+        return info
+
+
+class KhanAcademyUnitIE(KhanAcademyBaseIE):
+    IE_NAME = 'khanacademy:unit'
+    _VALID_URL = (KhanAcademyBaseIE._VALID_URL_TEMPL % ('2', '')) + '/?(?:[?#&]|$)'
+    _TEST = {
+        'url': 'https://www.khanacademy.org/computing/computer-science/cryptography',
+        'info_dict': {
+            'id': 'cryptography',
+            'title': 'Cryptography',
+            'description': 'How have humans protected their secret messages through history? What has changed today?',
+        },
+        'playlist_mincount': 31,
+    }
+
+    def _parse_component_props(self, component_props):
+        curation = component_props['curation']
+
+        entries = []
+        tutorials = try_get(curation, lambda x: x['tabs'][0]['modules'][0]['tutorials'], list) or []
+        for tutorial_number, tutorial in enumerate(tutorials, 1):
+            chapter_info = {
+                'chapter': tutorial.get('title'),
+                'chapter_number': tutorial_number,
+                'chapter_id': tutorial.get('id'),
+            }
+            for content_item in (tutorial.get('contentItems') or []):
+                if content_item.get('kind') == 'Video':
+                    info = self._parse_video(content_item)
+                    info.update(chapter_info)
+                    entries.append(info)
+
+        return self.playlist_result(
+            entries, curation.get('unit'), curation.get('title'),
+            curation.get('description'))
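The rewritten extractor above fetches page data through Khan Academy's internal GraphQL endpoint, passing the page path as JSON-encoded `variables`. A minimal sketch of how those request parameters are assembled (the `hash` value is the one from the diff; the helper name and the use of `urlencode` are ours, for illustration only):

```python
import json
from urllib.parse import urlencode

def build_khan_graphql_query(path):
    # Mirrors the `query` dict passed to _download_json in the diff:
    # a fixed persisted-query hash plus JSON-encoded variables.
    return urlencode({
        'hash': 1604303425,
        'variables': json.dumps({'path': path, 'queryParams': ''}),
    })

print(build_khan_graphql_query('math/applied-math/cryptography'))
```

Encoding `variables` with `json.dumps` before URL-encoding keeps nested parameters in one query field, which is how persisted GraphQL queries are commonly passed over GET.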


@@ -251,8 +251,11 @@ class MixcloudPlaylistBaseIE(MixcloudBaseIE):
            cloudcast_url = cloudcast.get('url')
            if not cloudcast_url:
                continue
slug = try_get(cloudcast, lambda x: x['slug'], compat_str)
owner_username = try_get(cloudcast, lambda x: x['owner']['username'], compat_str)
video_id = '%s_%s' % (owner_username, slug) if slug and owner_username else None
            entries.append(self.url_result(
-                cloudcast_url, MixcloudIE.ie_key(), cloudcast.get('slug')))
+                cloudcast_url, MixcloudIE.ie_key(), video_id))

        page_info = items['pageInfo']
        has_next_page = page_info['hasNextPage']
@@ -321,7 +324,8 @@ class MixcloudUserIE(MixcloudPlaylistBaseIE):
    _DESCRIPTION_KEY = 'biog'
    _ROOT_TYPE = 'user'
    _NODE_TEMPLATE = '''slug
-                       url'''
+                       url
+                       owner { username }'''

    def _get_playlist_title(self, title, slug):
        return '%s (%s)' % (title, slug)
@@ -345,6 +349,7 @@ class MixcloudPlaylistIE(MixcloudPlaylistBaseIE):
    _NODE_TEMPLATE = '''cloudcast {
                        slug
                        url
+                        owner { username }
                       }'''

    def _get_cloudcast(self, node):


@@ -450,6 +450,18 @@ class PeerTubeIE(InfoExtractor):
            'tags': ['framasoft', 'peertube'],
            'categories': ['Science & Technology'],
        }
}, {
# Issue #26002
'url': 'peertube:spacepub.space:d8943b2d-8280-497b-85ec-bc282ec2afdc',
'info_dict': {
'id': 'd8943b2d-8280-497b-85ec-bc282ec2afdc',
'ext': 'mp4',
'title': 'Dot matrix printer shell demo',
'uploader_id': '3',
'timestamp': 1587401293,
'upload_date': '20200420',
'uploader': 'Drew DeVault',
}
    }, {
        'url': 'https://peertube.tamanoir.foucry.net/videos/watch/0b04f13d-1e18-4f1d-814e-4979aa7c9c44',
        'only_matching': True,
@@ -526,7 +538,15 @@ class PeerTubeIE(InfoExtractor):
        title = video['name']

        formats = []
-        for file_ in video['files']:
+        files = video.get('files') or []
for playlist in (video.get('streamingPlaylists') or []):
if not isinstance(playlist, dict):
continue
playlist_files = playlist.get('files')
if not (playlist_files and isinstance(playlist_files, list)):
continue
files.extend(playlist_files)
for file_ in files:
            if not isinstance(file_, dict):
                continue
            file_url = url_or_none(file_.get('fileUrl'))


@@ -30,6 +30,19 @@ class RoosterTeethIE(InfoExtractor):
            'series': 'Million Dollars, But...',
            'episode': 'Million Dollars, But... The Game Announcement',
        },
}, {
'url': 'https://roosterteeth.com/watch/rwby-bonus-25',
'md5': 'fe8d9d976b272c18a24fe7f1f5830084',
'info_dict': {
'id': '31',
'display_id': 'rwby-bonus-25',
'title': 'Volume 2, World of Remnant 3',
'description': 'md5:8d58d3270292ea11da00ea712bbfb009',
'episode': 'Volume 2, World of Remnant 3',
'channel_id': 'fab60c1c-29cb-43bc-9383-5c3538d9e246',
'thumbnail': r're:^https?://.*\.(png|jpe?g)$',
'ext': 'mp4',
},
    }, {
        'url': 'http://achievementhunter.roosterteeth.com/episode/off-topic-the-achievement-hunter-podcast-2016-i-didn-t-think-it-would-pass-31',
        'only_matching': True,
@@ -50,7 +63,7 @@ class RoosterTeethIE(InfoExtractor):
        'url': 'https://roosterteeth.com/watch/million-dollars-but-season-2-million-dollars-but-the-game-announcement',
        'only_matching': True,
    }]
-    _EPISODE_BASE_URL = 'https://svod-be.roosterteeth.com/api/v1/episodes/'
+    _EPISODE_BASE_URL = 'https://svod-be.roosterteeth.com/api/v1/watch/'

    def _login(self):
        username, password = self._get_login_info()
@@ -86,9 +99,11 @@ class RoosterTeethIE(InfoExtractor):
        api_episode_url = self._EPISODE_BASE_URL + display_id
        try:
-            m3u8_url = self._download_json(
-                api_episode_url + '/videos', display_id,
-                'Downloading video JSON metadata')['data'][0]['attributes']['url']
+            video_data = self._download_json(
+                api_episode_url + '/videos', display_id,
+                'Downloading video JSON metadata')['data'][0]
m3u8_url = video_data['attributes']['url']
subtitle_m3u8_url = video_data['links']['download']
        except ExtractorError as e:
            if isinstance(e.cause, compat_HTTPError) and e.cause.code == 403:
                if self._parse_json(e.cause.read().decode(), display_id).get('access') is False:
@@ -109,7 +124,7 @@ class RoosterTeethIE(InfoExtractor):
        thumbnails = []
        for image in episode.get('included', {}).get('images', []):
-            if image.get('type') == 'episode_image':
+            if image.get('type') in ('episode_image', 'bonus_feature_image'):
                img_attributes = image.get('attributes') or {}
                for k in ('thumb', 'small', 'medium', 'large'):
                    img_url = img_attributes.get(k)
@@ -119,6 +134,33 @@ class RoosterTeethIE(InfoExtractor):
                            'url': img_url,
                        })
subtitles = {}
res = self._download_webpage_handle(
subtitle_m3u8_url, display_id,
'Downloading m3u8 information',
'Failed to download m3u8 information',
fatal=True, data=None, headers={}, query={})
if res is not False:
subtitle_m3u8_doc, _ = res
for line in subtitle_m3u8_doc.split('\n'):
if 'EXT-X-MEDIA:TYPE=SUBTITLES' in line:
parts = line.split(',')
for part in parts:
if 'LANGUAGE' in part:
lang = part[part.index('=') + 2:-1]
elif 'URI' in part:
uri = part[part.index('=') + 2:-1]
res = self._download_webpage_handle(
uri, display_id,
'Downloading m3u8 information',
'Failed to download m3u8 information',
fatal=True, data=None, headers={}, query={})
doc, _ = res
for l in doc.split('\n'):
if not l.startswith('#'):
subtitles[lang] = [{'url': uri[:-uri[::-1].index('/')] + l}]
break
        return {
            'id': video_id,
            'display_id': display_id,
@@ -134,4 +176,5 @@ class RoosterTeethIE(InfoExtractor):
            'formats': formats,
            'channel_id': attributes.get('channel_id'),
            'duration': int_or_none(attributes.get('length')),
'subtitles': subtitles
        }
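The subtitle discovery added above parses `EXT-X-MEDIA` lines with plain `split(',')` and index arithmetic. A small sketch of that naive attribute parsing (function name is ours; note that, like the diff's approach, it would misparse attribute values that themselves contain commas):

```python
def parse_ext_x_media(line):
    # Naive '#EXT-X-MEDIA:KEY=VALUE,...' parser: drop the tag prefix,
    # split on commas, then strip surrounding quotes from each value.
    attrs = {}
    for part in line.split(':', 1)[1].split(','):
        if '=' in part:
            key, value = part.split('=', 1)
            attrs[key] = value.strip('"')
    return attrs

line = '#EXT-X-MEDIA:TYPE=SUBTITLES,LANGUAGE="en",URI="subs/en.m3u8"'
print(parse_ext_x_media(line))  # {'TYPE': 'SUBTITLES', 'LANGUAGE': 'en', 'URI': 'subs/en.m3u8'}
```

A robust implementation would honor quoted commas per the HLS attribute-list grammar, but for the simple playlists in question the split-based approach is adequate.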


@@ -50,9 +50,15 @@ class ParamountNetworkIE(MTVServicesInfoExtractor):
        },
    }]

-    _FEED_URL = 'http://www.paramountnetwork.com/feeds/mrss/'
+    _FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
    _GEO_COUNTRIES = ['US']
def _get_feed_query(self, uri):
return {
'arcEp': 'paramountnetwork.com',
'mgid': uri,
}
    def _extract_mgid(self, webpage, url):
        root_data = self._parse_json(self._search_regex(
            r'window\.__DATA__\s*=\s*({.+})',


@@ -3,10 +3,13 @@ from __future__ import unicode_literals
import re

from .common import InfoExtractor
+from ..compat import compat_HTTPError
from ..utils import (
    determine_ext,
-    js_to_json,
-    mimetype2ext,
+    ExtractorError,
+    float_or_none,
+    int_or_none,
+    parse_iso8601,
)
@@ -15,29 +18,35 @@ class ThreeQSDNIE(InfoExtractor):
    IE_DESC = '3Q SDN'
    _VALID_URL = r'https?://playout\.3qsdn\.com/(?P<id>[\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'
    _TESTS = [{
-        # ondemand from http://www.philharmonie.tv/veranstaltung/26/
-        'url': 'http://playout.3qsdn.com/0280d6b9-1215-11e6-b427-0cc47a188158?protocol=http',
-        'md5': 'ab040e37bcfa2e0c079f92cb1dd7f6cd',
+        # https://player.3qsdn.com/demo.html
+        'url': 'https://playout.3qsdn.com/7201c779-6b3c-11e7-a40e-002590c750be',
+        'md5': '64a57396b16fa011b15e0ea60edce918',
        'info_dict': {
-            'id': '0280d6b9-1215-11e6-b427-0cc47a188158',
+            'id': '7201c779-6b3c-11e7-a40e-002590c750be',
            'ext': 'mp4',
-            'title': '0280d6b9-1215-11e6-b427-0cc47a188158',
+            'title': 'Video Ads',
            'is_live': False,
'description': 'Video Ads Demo',
'timestamp': 1500334803,
'upload_date': '20170717',
'duration': 888.032,
'subtitles': {
'eng': 'count:1',
},
        },
-        'expected_warnings': ['Failed to download MPD manifest', 'Failed to parse JSON'],
+        'expected_warnings': ['Unknown MIME type application/mp4 in DASH manifest'],
    }, {
        # live video stream
-        'url': 'https://playout.3qsdn.com/d755d94b-4ab9-11e3-9162-0025907ad44f?js=true',
+        'url': 'https://playout.3qsdn.com/66e68995-11ca-11e8-9273-002590c750be',
        'info_dict': {
-            'id': 'd755d94b-4ab9-11e3-9162-0025907ad44f',
+            'id': '66e68995-11ca-11e8-9273-002590c750be',
            'ext': 'mp4',
-            'title': 're:^d755d94b-4ab9-11e3-9162-0025907ad44f [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
+            'title': 're:^66e68995-11ca-11e8-9273-002590c750be [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}$',
            'is_live': True,
        },
        'params': {
            'skip_download': True,  # m3u8 downloads
        },
-        'expected_warnings': ['Failed to download MPD manifest'],
    }, {
        # live audio stream
        'url': 'http://playout.3qsdn.com/9edf36e0-6bf2-11e2-a16a-9acf09e2db48',
@@ -58,6 +67,14 @@ class ThreeQSDNIE(InfoExtractor):
        # live video with rtmp link
        'url': 'https://playout.3qsdn.com/6092bb9e-8f72-11e4-a173-002590c750be',
        'only_matching': True,
}, {
# ondemand from http://www.philharmonie.tv/veranstaltung/26/
'url': 'http://playout.3qsdn.com/0280d6b9-1215-11e6-b427-0cc47a188158?protocol=http',
'only_matching': True,
}, {
# live video stream
'url': 'https://playout.3qsdn.com/d755d94b-4ab9-11e3-9162-0025907ad44f?js=true',
'only_matching': True,
    }]
    @staticmethod
@@ -70,73 +87,78 @@ class ThreeQSDNIE(InfoExtractor):
    def _real_extract(self, url):
        video_id = self._match_id(url)

-        js = self._download_webpage(
-            'http://playout.3qsdn.com/%s' % video_id, video_id,
-            query={'js': 'true'})
-
-        if any(p in js for p in (
-                '>This content is not available in your country',
-                'playout.3qsdn.com/forbidden')):
-            self.raise_geo_restricted()
-
-        stream_content = self._search_regex(
-            r'streamContent\s*:\s*(["\'])(?P<content>.+?)\1', js,
-            'stream content', default='demand', group='content')
-        live = stream_content == 'live'
-
-        stream_type = self._search_regex(
-            r'streamType\s*:\s*(["\'])(?P<type>audio|video)\1', js,
-            'stream type', default='video', group='type')
-
-        formats = []
-        urls = set()
-
-        def extract_formats(item_url, item={}):
-            if not item_url or item_url in urls:
-                return
-            urls.add(item_url)
-            ext = mimetype2ext(item.get('type')) or determine_ext(item_url, default_ext=None)
-            if ext == 'mpd':
-                formats.extend(self._extract_mpd_formats(
-                    item_url, video_id, mpd_id='mpd', fatal=False))
-            elif ext == 'm3u8':
-                formats.extend(self._extract_m3u8_formats(
-                    item_url, video_id, 'mp4',
-                    entry_protocol='m3u8' if live else 'm3u8_native',
-                    m3u8_id='hls', fatal=False))
-            elif ext == 'f4m':
-                formats.extend(self._extract_f4m_formats(
-                    item_url, video_id, f4m_id='hds', fatal=False))
-            else:
-                if not self._is_valid_url(item_url, video_id):
-                    return
-                formats.append({
-                    'url': item_url,
-                    'format_id': item.get('quality'),
-                    'ext': 'mp4' if item_url.startswith('rtsp') else ext,
-                    'vcodec': 'none' if stream_type == 'audio' else None,
-                })
-
-        for item_js in re.findall(r'({[^{]*?\b(?:src|source)\s*:\s*["\'].+?})', js):
-            f = self._parse_json(
-                item_js, video_id, transform_source=js_to_json, fatal=False)
-            if not f:
-                continue
-            extract_formats(f.get('src'), f)
-
-        # More relaxed version to collect additional URLs and acting
-        # as a future-proof fallback
-        for _, src in re.findall(r'\b(?:src|source)\s*:\s*(["\'])((?:https?|rtsp)://.+?)\1', js):
-            extract_formats(src)
-
-        self._sort_formats(formats)
-
-        title = self._live_title(video_id) if live else video_id
-
-        return {
-            'id': video_id,
-            'title': title,
-            'is_live': live,
-            'formats': formats,
-        }
+        try:
+            config = self._download_json(
+                url.replace('://playout.3qsdn.com/', '://playout.3qsdn.com/config/'), video_id)
+        except ExtractorError as e:
+            if isinstance(e.cause, compat_HTTPError) and e.cause.code == 401:
+                self.raise_geo_restricted()
+            raise
+
+        live = config.get('streamContent') == 'live'
+        aspect = float_or_none(config.get('aspect'))
+
+        formats = []
+        for source_type, source in (config.get('sources') or {}).items():
+            if not source:
+                continue
+            if source_type == 'dash':
+                formats.extend(self._extract_mpd_formats(
+                    source, video_id, mpd_id='mpd', fatal=False))
+            elif source_type == 'hls':
+                formats.extend(self._extract_m3u8_formats(
+                    source, video_id, 'mp4', 'm3u8' if live else 'm3u8_native',
+                    m3u8_id='hls', fatal=False))
+            elif source_type == 'progressive':
+                for s in source:
+                    src = s.get('src')
+                    if not (src and self._is_valid_url(src, video_id)):
+                        continue
+                    width = None
+                    format_id = ['http']
+                    ext = determine_ext(src)
+                    if ext:
+                        format_id.append(ext)
+                    height = int_or_none(s.get('height'))
+                    if height:
+                        format_id.append('%dp' % height)
+                        if aspect:
+                            width = int(height * aspect)
+                    formats.append({
+                        'ext': ext,
+                        'format_id': '-'.join(format_id),
+                        'height': height,
+                        'source_preference': 0,
+                        'url': src,
+                        'vcodec': 'none' if height == 0 else None,
+                        'width': width,
+                    })
+        for f in formats:
+            if f.get('acodec') == 'none':
+                f['preference'] = -40
+            elif f.get('vcodec') == 'none':
+                f['preference'] = -50
+        self._sort_formats(formats, ('preference', 'width', 'height', 'source_preference', 'tbr', 'vbr', 'abr', 'ext', 'format_id'))
+
+        subtitles = {}
+        for subtitle in (config.get('subtitles') or []):
+            src = subtitle.get('src')
+            if not src:
+                continue
+            subtitles.setdefault(subtitle.get('label') or 'eng', []).append({
+                'url': src,
+            })
+
+        title = config.get('title') or video_id
+
+        return {
+            'id': video_id,
+            'title': self._live_title(title) if live else title,
+            'thumbnail': config.get('poster') or None,
+            'description': config.get('description') or None,
+            'timestamp': parse_iso8601(config.get('upload_date')),
+            'duration': float_or_none(config.get('vlength')) or None,
+            'is_live': live,
+            'formats': formats,
+            'subtitles': subtitles,
+        }


@@ -17,8 +17,8 @@ class TikTokBaseIE(InfoExtractor):
        video_info = try_get(
            video_data, lambda x: x['itemInfo']['itemStruct'], dict)
        author_info = try_get(
-            video_data, lambda x: x['itemInfo']['itemStruct']['author'], dict)
-        share_info = try_get(video_data, lambda x: x['itemInfo']['shareMeta'], dict)
+            video_data, lambda x: x['itemInfo']['itemStruct']['author'], dict) or {}
+        share_info = try_get(video_data, lambda x: x['itemInfo']['shareMeta'], dict) or {}
        unique_id = str_or_none(author_info.get('uniqueId'))
        timestamp = try_get(video_info, lambda x: int(x['createTime']), int)


@@ -17,6 +17,7 @@ from ..compat import (
)
from ..utils import (
    clean_html,
+    dict_get,
    ExtractorError,
    float_or_none,
    int_or_none,
@@ -76,14 +77,14 @@ class TwitchBaseIE(InfoExtractor):
        headers = {
            'Referer': page_url,
-            'Origin': page_url,
+            'Origin': 'https://www.twitch.tv',
            'Content-Type': 'text/plain;charset=UTF-8',
        }
        response = self._download_json(
            post_url, None, note, data=json.dumps(form).encode(),
            headers=headers, expected_status=400)
-        error = response.get('error_description') or response.get('error_code')
+        error = dict_get(response, ('error', 'error_description', 'error_code'))
        if error:
            fail(error)
@@ -137,13 +138,17 @@ class TwitchBaseIE(InfoExtractor):
self._sort_formats(formats) self._sort_formats(formats)
def _download_base_gql(self, video_id, ops, note, fatal=True): def _download_base_gql(self, video_id, ops, note, fatal=True):
headers = {
'Content-Type': 'text/plain;charset=UTF-8',
'Client-ID': self._CLIENT_ID,
}
gql_auth = self._get_cookies('https://gql.twitch.tv').get('auth-token')
if gql_auth:
headers['Authorization'] = 'OAuth ' + gql_auth.value
return self._download_json( return self._download_json(
'https://gql.twitch.tv/gql', video_id, note, 'https://gql.twitch.tv/gql', video_id, note,
data=json.dumps(ops).encode(), data=json.dumps(ops).encode(),
headers={ headers=headers, fatal=fatal)
'Content-Type': 'text/plain;charset=UTF-8',
'Client-ID': self._CLIENT_ID,
}, fatal=fatal)
def _download_gql(self, video_id, ops, note, fatal=True): def _download_gql(self, video_id, ops, note, fatal=True):
for op in ops: for op in ops:


@@ -373,6 +373,24 @@ class TwitterIE(TwitterBaseIE):
             'uploader_id': '1eVjYOLGkGrQL',
         },
         'add_ie': ['TwitterBroadcast'],
+    }, {
+        # unified card
+        'url': 'https://twitter.com/BrooklynNets/status/1349794411333394432?s=20',
+        'info_dict': {
+            'id': '1349794411333394432',
+            'ext': 'mp4',
+            'title': 'md5:d1c4941658e4caaa6cb579260d85dcba',
+            'thumbnail': r're:^https?://.*\.jpg',
+            'description': 'md5:71ead15ec44cee55071547d6447c6a3e',
+            'uploader': 'Brooklyn Nets',
+            'uploader_id': 'BrooklynNets',
+            'duration': 324.484,
+            'timestamp': 1610651040,
+            'upload_date': '20210114',
+        },
+        'params': {
+            'skip_download': True,
+        },
     }, {
         # Twitch Clip Embed
         'url': 'https://twitter.com/GunB1g/status/1163218564784017422',
@@ -389,6 +407,22 @@ class TwitterIE(TwitterBaseIE):
         # appplayer card
         'url': 'https://twitter.com/poco_dandy/status/1150646424461176832',
         'only_matching': True,
+    }, {
+        # video_direct_message card
+        'url': 'https://twitter.com/qarev001/status/1348948114569269251',
+        'only_matching': True,
+    }, {
+        # poll2choice_video card
+        'url': 'https://twitter.com/CAF_Online/status/1349365911120195585',
+        'only_matching': True,
+    }, {
+        # poll3choice_video card
+        'url': 'https://twitter.com/SamsungMobileSA/status/1348609186725289984',
+        'only_matching': True,
+    }, {
+        # poll4choice_video card
+        'url': 'https://twitter.com/SouthamptonFC/status/1347577658079641604',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
@@ -433,8 +467,7 @@ class TwitterIE(TwitterBaseIE):
             'tags': tags,
         }

-        media = try_get(status, lambda x: x['extended_entities']['media'][0])
-        if media and media.get('type') != 'photo':
+        def extract_from_video_info(media):
             video_info = media.get('video_info') or {}

             formats = []
@@ -461,6 +494,10 @@ class TwitterIE(TwitterBaseIE):
                 'thumbnails': thumbnails,
                 'duration': float_or_none(video_info.get('duration_millis'), 1000),
             })
+
+        media = try_get(status, lambda x: x['extended_entities']['media'][0])
+        if media and media.get('type') != 'photo':
+            extract_from_video_info(media)
         else:
             card = status.get('card')
             if card:
@@ -493,7 +530,12 @@ class TwitterIE(TwitterBaseIE):
                     '_type': 'url',
                     'url': get_binding_value('card_url'),
                 })
-            # amplify, promo_video_website, promo_video_convo, appplayer, ...
+            elif card_name == 'unified_card':
+                media_entities = self._parse_json(get_binding_value('unified_card'), twid)['media_entities']
+                extract_from_video_info(next(iter(media_entities.values())))
+            # amplify, promo_video_website, promo_video_convo, appplayer,
+            # video_direct_message, poll2choice_video, poll3choice_video,
+            # poll4choice_video, ...
             else:
                 is_amplify = card_name == 'amplify'
                 vmap_url = get_binding_value('amplify_url_vmap') if is_amplify else get_binding_value('player_stream_url')


@@ -60,6 +60,9 @@ class YouPornIE(InfoExtractor):
     }, {
         'url': 'http://www.youporn.com/watch/505835',
         'only_matching': True,
+    }, {
+        'url': 'https://www.youporn.com/watch/13922959/femdom-principal/',
+        'only_matching': True,
     }]

     @staticmethod
@@ -88,7 +91,7 @@ class YouPornIE(InfoExtractor):
         # Main source
         definitions = self._parse_json(
             self._search_regex(
-                r'mediaDefinition\s*=\s*(\[.+?\]);', webpage,
+                r'mediaDefinition\s*[=:]\s*(\[.+?\])\s*[;,]', webpage,
                 'media definitions', default='[]'),
             video_id, fatal=False)
         if definitions:
@@ -100,7 +103,7 @@ class YouPornIE(InfoExtractor):
                 links.append(video_url)

         # Fallback #1, this also contains extra low quality 180p format
-        for _, link in re.findall(r'<a[^>]+href=(["\'])(http.+?)\1[^>]+title=["\']Download [Vv]ideo', webpage):
+        for _, link in re.findall(r'<a[^>]+href=(["\'])(http(?:(?!\1).)+\.mp4(?:(?!\1).)*)\1[^>]+title=["\']Download [Vv]ideo', webpage):
             links.append(link)

         # Fallback #2 (unavailable as at 22.06.2017)
@@ -128,8 +131,9 @@ class YouPornIE(InfoExtractor):
             # Video URL's path looks like this:
             #  /201012/17/505835/720p_1500k_505835/YouPorn%20-%20Sex%20Ed%20Is%20It%20Safe%20To%20Masturbate%20Daily.mp4
             #  /201012/17/505835/vl_240p_240k_505835/YouPorn%20-%20Sex%20Ed%20Is%20It%20Safe%20To%20Masturbate%20Daily.mp4
+            #  /videos/201703/11/109285532/1080P_4000K_109285532.mp4
             # We will benefit from it by extracting some metadata
-            mobj = re.search(r'(?P<height>\d{3,4})[pP]_(?P<bitrate>\d+)[kK]_\d+/', video_url)
+            mobj = re.search(r'(?P<height>\d{3,4})[pP]_(?P<bitrate>\d+)[kK]_\d+', video_url)
             if mobj:
                 height = int(mobj.group('height'))
                 bitrate = int(mobj.group('bitrate'))


@@ -332,6 +332,36 @@ class YoutubeBaseInfoExtractor(InfoExtractor):
                 r'ytcfg\.set\s*\(\s*({.+?})\s*\)\s*;', webpage, 'ytcfg',
                 default='{}'), video_id, fatal=False)

+    def _extract_video(self, renderer):
+        video_id = renderer.get('videoId')
+        title = try_get(
+            renderer,
+            (lambda x: x['title']['runs'][0]['text'],
+             lambda x: x['title']['simpleText']), compat_str)
+        description = try_get(
+            renderer, lambda x: x['descriptionSnippet']['runs'][0]['text'],
+            compat_str)
+        duration = parse_duration(try_get(
+            renderer, lambda x: x['lengthText']['simpleText'], compat_str))
+        view_count_text = try_get(
+            renderer, lambda x: x['viewCountText']['simpleText'], compat_str) or ''
+        view_count = str_to_int(self._search_regex(
+            r'^([\d,]+)', re.sub(r'\s', '', view_count_text),
+            'view count', default=None))
+        uploader = try_get(
+            renderer, lambda x: x['ownerText']['runs'][0]['text'], compat_str)
+        return {
+            '_type': 'url_transparent',
+            'ie_key': YoutubeIE.ie_key(),
+            'id': video_id,
+            'url': video_id,
+            'title': title,
+            'description': description,
+            'duration': duration,
+            'view_count': view_count,
+            'uploader': uploader,
+        }
+

 class YoutubeIE(YoutubeBaseInfoExtractor):
     IE_DESC = 'YouTube.com'
@@ -1817,6 +1847,9 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         if not isinstance(video_info, dict):
             video_info = {}

+        playable_in_embed = try_get(
+            player_response, lambda x: x['playabilityStatus']['playableInEmbed'])
+
         video_details = try_get(
             player_response, lambda x: x['videoDetails'], dict) or {}
@@ -2538,6 +2571,7 @@
             'release_date': release_date,
             'release_year': release_year,
             'subscriber_count': subscriber_count,
+            'playable_in_embed': playable_in_embed,
         }
@@ -2867,36 +2901,6 @@ class YoutubeTabIE(YoutubeBaseInfoExtractor):
         if renderer:
             return renderer

-    def _extract_video(self, renderer):
-        video_id = renderer.get('videoId')
-        title = try_get(
-            renderer,
-            (lambda x: x['title']['runs'][0]['text'],
-             lambda x: x['title']['simpleText']), compat_str)
-        description = try_get(
-            renderer, lambda x: x['descriptionSnippet']['runs'][0]['text'],
-            compat_str)
-        duration = parse_duration(try_get(
-            renderer, lambda x: x['lengthText']['simpleText'], compat_str))
-        view_count_text = try_get(
-            renderer, lambda x: x['viewCountText']['simpleText'], compat_str) or ''
-        view_count = str_to_int(self._search_regex(
-            r'^([\d,]+)', re.sub(r'\s', '', view_count_text),
-            'view count', default=None))
-        uploader = try_get(
-            renderer, lambda x: x['ownerText']['runs'][0]['text'], compat_str)
-        return {
-            '_type': 'url_transparent',
-            'ie_key': YoutubeIE.ie_key(),
-            'id': video_id,
-            'url': video_id,
-            'title': title,
-            'description': description,
-            'duration': duration,
-            'view_count': view_count,
-            'uploader': uploader,
-        }
-
     def _grid_entries(self, grid_renderer):
         for item in grid_renderer['items']:
             if not isinstance(item, dict):
@@ -3579,65 +3583,38 @@ class YoutubeSearchIE(SearchInfoExtractor, YoutubeBaseInfoExtractor):
             if not slr_contents:
                 break

-            isr_contents = []
-            continuation_token = None
             # Youtube sometimes adds promoted content to searches,
             # changing the index location of videos and token.
             # So we search through all entries till we find them.
-            for index, isr in enumerate(slr_contents):
+            continuation_token = None
+            for slr_content in slr_contents:
+                isr_contents = try_get(
+                    slr_content,
+                    lambda x: x['itemSectionRenderer']['contents'],
+                    list)
                 if not isr_contents:
-                    isr_contents = try_get(
-                        slr_contents,
-                        (lambda x: x[index]['itemSectionRenderer']['contents']),
-                        list)
-                    for content in isr_contents:
-                        if content.get('videoRenderer') is not None:
-                            break
-                    else:
-                        isr_contents = []
+                    continue
+                for content in isr_contents:
+                    if not isinstance(content, dict):
+                        continue
+                    video = content.get('videoRenderer')
+                    if not isinstance(video, dict):
+                        continue
+                    video_id = video.get('videoId')
+                    if not video_id:
+                        continue
+                    yield self._extract_video(video)
+                    total += 1
+                    if total == n:
+                        return
                 if continuation_token is None:
                     continuation_token = try_get(
-                        slr_contents,
-                        lambda x: x[index]['continuationItemRenderer']['continuationEndpoint']['continuationCommand'][
-                            'token'],
+                        slr_content,
+                        lambda x: x['continuationItemRenderer']['continuationEndpoint']['continuationCommand']['token'],
                         compat_str)
-                if continuation_token is not None and isr_contents:
-                    break
-
-            if not isr_contents:
-                break
-            for content in isr_contents:
-                if not isinstance(content, dict):
-                    continue
-                video = content.get('videoRenderer')
-                if not isinstance(video, dict):
-                    continue
-                video_id = video.get('videoId')
-                if not video_id:
-                    continue
-                title = try_get(video, lambda x: x['title']['runs'][0]['text'], compat_str)
-                description = try_get(video, lambda x: x['descriptionSnippet']['runs'][0]['text'], compat_str)
-                duration = parse_duration(try_get(video, lambda x: x['lengthText']['simpleText'], compat_str))
-                view_count_text = try_get(video, lambda x: x['viewCountText']['simpleText'], compat_str) or ''
-                view_count = int_or_none(self._search_regex(
-                    r'^(\d+)', re.sub(r'\s', '', view_count_text),
-                    'view count', default=None))
-                uploader = try_get(video, lambda x: x['ownerText']['runs'][0]['text'], compat_str)
-                total += 1
-                yield {
-                    '_type': 'url_transparent',
-                    'ie_key': YoutubeIE.ie_key(),
-                    'id': video_id,
-                    'url': video_id,
-                    'title': title,
-                    'description': description,
-                    'duration': duration,
-                    'view_count': view_count,
-                    'uploader': uploader,
-                }
-                if total == n:
-                    return
             if not continuation_token:
                 break
             data['continuation'] = continuation_token


@@ -54,42 +54,35 @@ def parseOpts(overrideArguments=None):
             optionf.close()
         return res

-    def _readUserConf():
-        xdg_config_home = compat_getenv('XDG_CONFIG_HOME')
-        if xdg_config_home:
-            userConfFile = os.path.join(xdg_config_home, 'youtube-dlc', 'config')
-            if not os.path.isfile(userConfFile):
-                userConfFile = os.path.join(xdg_config_home, 'youtube-dlc.conf')
-        else:
-            userConfFile = os.path.join(compat_expanduser('~'), '.config', 'youtube-dlc', 'config')
-            if not os.path.isfile(userConfFile):
-                userConfFile = os.path.join(compat_expanduser('~'), '.config', 'youtube-dlc.conf')
-        userConf = _readOptions(userConfFile, None)
-
-        if userConf is None:
-            appdata_dir = compat_getenv('appdata')
-            if appdata_dir:
-                userConf = _readOptions(
-                    os.path.join(appdata_dir, 'youtube-dlc', 'config'),
-                    default=None)
-                if userConf is None:
-                    userConf = _readOptions(
-                        os.path.join(appdata_dir, 'youtube-dlc', 'config.txt'),
-                        default=None)
-
-        if userConf is None:
-            userConf = _readOptions(
-                os.path.join(compat_expanduser('~'), 'youtube-dlc.conf'),
-                default=None)
-        if userConf is None:
-            userConf = _readOptions(
-                os.path.join(compat_expanduser('~'), 'youtube-dlc.conf.txt'),
-                default=None)
-
-        if userConf is None:
-            userConf = []
-
-        return userConf
+    def _readUserConf(package_name, default=[]):
+        # .config
+        xdg_config_home = compat_getenv('XDG_CONFIG_HOME') or compat_expanduser('~/.config')
+        userConfFile = os.path.join(xdg_config_home, package_name, 'config')
+        if not os.path.isfile(userConfFile):
+            userConfFile = os.path.join(xdg_config_home, '%s.conf' % package_name)
+        userConf = _readOptions(userConfFile, default=None)
+        if userConf is not None:
+            return userConf
+
+        # appdata
+        appdata_dir = compat_getenv('appdata')
+        if appdata_dir:
+            userConfFile = os.path.join(appdata_dir, package_name, 'config')
+            userConf = _readOptions(userConfFile, default=None)
+            if userConf is None:
+                userConf = _readOptions('%s.txt' % userConfFile, default=None)
+            if userConf is not None:
+                return userConf
+
+        # home
+        userConfFile = os.path.join(compat_expanduser('~'), '%s.conf' % package_name)
+        userConf = _readOptions(userConfFile, default=None)
+        if userConf is None:
+            userConf = _readOptions('%s.txt' % userConfFile, default=None)
+        if userConf is not None:
+            return userConf
+
+        return default

     def _format_option_string(option):
         ''' ('-o', '--option') -> -o, --format METAVAR'''
@@ -173,10 +166,10 @@ def parseOpts(overrideArguments=None):
         '--ignore-config', '--no-config',
         action='store_true',
         help=(
-            'Do not read configuration files. '
-            'When given in the global configuration file /etc/youtube-dl.conf: '
-            'Do not read the user configuration in ~/.config/youtube-dl/config '
-            '(%APPDATA%/youtube-dl/config.txt on Windows)'))
+            'Disable loading any configuration files except the one provided by --config-location. '
+            'When given inside a configuration file, no further configuration files are loaded. '
+            'Additionally, (for backward compatibility) if this option is found inside the '
+            'system configuration file, the user configuration is not loaded.'))
     general.add_option(
         '--config-location',
        dest='config_location', metavar='PATH',
@@ -367,7 +360,11 @@ def parseOpts(overrideArguments=None):
     selection.add_option(
         '--break-on-existing',
         action='store_true', dest='break_on_existing', default=False,
-        help="Stop the download process after attempting to download a file that's in the archive.")
+        help="Stop the download process when encountering a file that's in the archive.")
+    selection.add_option(
+        '--break-on-reject',
+        action='store_true', dest='break_on_reject', default=False,
+        help="Stop the download process when encountering a file that has been filtered out.")
     selection.add_option(
         '--no-download-archive',
         dest='download_archive', action="store_const", const=None,
@@ -785,7 +782,7 @@ def parseOpts(overrideArguments=None):
     verbosity.add_option(
         '-C', '--call-home',
         dest='call_home', action='store_true', default=False,
-        help='Contact the youtube-dlc server for debugging')
+        help='[Broken] Contact the youtube-dlc server for debugging')
     verbosity.add_option(
         '--no-call-home',
         dest='call_home', action='store_false',
@@ -834,8 +831,16 @@ def parseOpts(overrideArguments=None):
         help=optparse.SUPPRESS_HELP)
     filesystem.add_option(
         '-w', '--no-overwrites',
-        action='store_true', dest='nooverwrites', default=False,
-        help='Do not overwrite files')
+        action='store_false', dest='overwrites', default=None,
+        help='Do not overwrite any files')
+    filesystem.add_option(
+        '--force-overwrites', '--yes-overwrites',
+        action='store_true', dest='overwrites',
+        help='Overwrite all video and metadata files. This option includes --no-continue')
+    filesystem.add_option(
+        '--no-force-overwrites',
+        action='store_const', dest='overwrites', const=None,
+        help='Do not overwrite the video, but overwrite related files (default)')
     filesystem.add_option(
         '-c', '--continue',
         action='store_true', dest='continue_dl', default=True,
@@ -976,8 +981,9 @@ def parseOpts(overrideArguments=None):
             'Give these arguments to the postprocessors. '
             "Specify the postprocessor name and the arguments separated by a colon ':' "
             'to give the argument to only the specified postprocessor. Supported names are '
-            'ExtractAudio, VideoRemuxer, VideoConvertor, EmbedSubtitle, Metadata, Merger, FixupStretched, FixupM4a, FixupM3u8, SubtitlesConvertor, SponSkrub and Default'
-            '. You can use this option multiple times to give different arguments to different postprocessors'))
+            'ExtractAudio, VideoRemuxer, VideoConvertor, EmbedSubtitle, Metadata, Merger, FixupStretched, '
+            'FixupM4a, FixupM3u8, SubtitlesConvertor, EmbedThumbnail, XAttrMetadata, SponSkrub and Default. '
+            'You can use this option multiple times to give different arguments to different postprocessors'))
     postproc.add_option(
         '-k', '--keep-video',
         action='store_true', dest='keepvideo', default=False,
@@ -1134,33 +1140,60 @@ def parseOpts(overrideArguments=None):
            return [a.decode(preferredencoding(), 'replace') for a in conf]
        return conf

-    command_line_conf = compat_conf(sys.argv[1:])
-    opts, args = parser.parse_args(command_line_conf)
-
-    system_conf = user_conf = custom_conf = []
-
-    if '--config-location' in command_line_conf:
-        location = compat_expanduser(opts.config_location)
-        if os.path.isdir(location):
-            location = os.path.join(location, 'youtube-dlc.conf')
-        if not os.path.exists(location):
-            parser.error('config-location %s does not exist.' % location)
-        custom_conf = _readOptions(location)
-    elif '--ignore-config' in command_line_conf:
-        pass
-    else:
-        system_conf = _readOptions('/etc/youtube-dlc.conf')
-        if '--ignore-config' not in system_conf:
-            user_conf = _readUserConf()
-
-    argv = system_conf + user_conf + custom_conf + command_line_conf
+    configs = {
+        'command_line': compat_conf(sys.argv[1:]),
+        'custom': [], 'portable': [], 'user': [], 'system': []}
+    opts, args = parser.parse_args(configs['command_line'])

+    def get_configs():
+        if '--config-location' in configs['command_line']:
+            location = compat_expanduser(opts.config_location)
+            if os.path.isdir(location):
+                location = os.path.join(location, 'youtube-dlc.conf')
+            if not os.path.exists(location):
+                parser.error('config-location %s does not exist.' % location)
+            configs['custom'] = _readOptions(location)
+
+        if '--ignore-config' in configs['command_line']:
+            return
+        if '--ignore-config' in configs['custom']:
+            return
+
+        def get_portable_path():
+            path = os.path.dirname(sys.argv[0])
+            if os.path.abspath(sys.argv[0]) != os.path.abspath(sys.executable):  # Not packaged
+                path = os.path.join(path, '..')
+            return os.path.abspath(path)
+
+        run_path = get_portable_path()
+        configs['portable'] = _readOptions(os.path.join(run_path, 'yt-dlp.conf'), default=None)
+        if configs['portable'] is None:
+            configs['portable'] = _readOptions(os.path.join(run_path, 'youtube-dlc.conf'))
+        if '--ignore-config' in configs['portable']:
+            return
+
+        configs['system'] = _readOptions('/etc/yt-dlp.conf', default=None)
+        if configs['system'] is None:
+            configs['system'] = _readOptions('/etc/youtube-dlc.conf')
+        if '--ignore-config' in configs['system']:
+            return
+
+        configs['user'] = _readUserConf('yt-dlp', default=None)
+        if configs['user'] is None:
+            configs['user'] = _readUserConf('youtube-dlc')
+        if '--ignore-config' in configs['user']:
+            configs['system'] = []
+
+    get_configs()
+    argv = configs['system'] + configs['user'] + configs['portable'] + configs['custom'] + configs['command_line']
     opts, args = parser.parse_args(argv)

     if opts.verbose:
         for conf_label, conf in (
-                ('System config', system_conf),
-                ('User config', user_conf),
-                ('Custom config', custom_conf),
-                ('Command-line args', command_line_conf)):
+                ('System config', configs['system']),
+                ('User config', configs['user']),
+                ('Portable config', configs['portable']),
+                ('Custom config', configs['custom']),
+                ('Command-line args', configs['command_line'])):
             write_string('[debug] %s: %s\n' % (conf_label, repr(_hide_login_info(conf))))

     return parser, opts, args


@@ -37,7 +37,25 @@ class PostProcessor(object):
         self.PP_NAME = self.__class__.__name__[:-2]

     def to_screen(self, text, *args, **kwargs):
-        return self._downloader.to_screen('[%s] %s' % (self.PP_NAME, text), *args, **kwargs)
+        if self._downloader:
+            return self._downloader.to_screen('[%s] %s' % (self.PP_NAME, text), *args, **kwargs)
+
+    def report_warning(self, text, *args, **kwargs):
+        if self._downloader:
+            return self._downloader.report_warning(text, *args, **kwargs)
+
+    def report_error(self, text, *args, **kwargs):
+        if self._downloader:
+            return self._downloader.report_error(text, *args, **kwargs)
+
+    def write_debug(self, text, *args, **kwargs):
+        if self.get_param('verbose', False):
+            return self._downloader.to_screen('[debug] %s' % text, *args, **kwargs)
+
+    def get_param(self, name, default=None, *args, **kwargs):
+        if self._downloader:
+            return self._downloader.params.get(name, default, *args, **kwargs)
+        return default

     def set_downloader(self, downloader):
         """Sets the downloader for this PP."""
@@ -64,10 +82,10 @@ class PostProcessor(object):
         try:
             os.utime(encodeFilename(path), (atime, mtime))
         except Exception:
-            self._downloader.report_warning(errnote)
+            self.report_warning(errnote)

     def _configuration_args(self, default=[]):
-        args = self._downloader.params.get('postprocessor_args', {})
+        args = self.get_param('postprocessor_args', {})
         if isinstance(args, list):  # for backward compatibility
             args = {'default': args, 'sponskrub': []}
         return cli_configuration_args(args, self.PP_NAME.lower(), args.get('default', []))


@@ -41,8 +41,7 @@ class EmbedThumbnailPP(FFmpegPostProcessor):
         thumbnail_filename = info['thumbnails'][-1]['filename']

         if not os.path.exists(encodeFilename(thumbnail_filename)):
-            self._downloader.report_warning(
-                'Skipping embedding the thumbnail because the file is missing.')
+            self.report_warning('Skipping embedding the thumbnail because the file is missing.')
             return [], info

         def is_webp(path):
@@ -125,8 +124,7 @@ class EmbedThumbnailPP(FFmpegPostProcessor):
             self.to_screen('Adding thumbnail to "%s"' % filename)

-            if self._downloader.params.get('verbose', False):
-                self._downloader.to_screen('[debug] AtomicParsley command line: %s' % shell_quote(cmd))
+            self.write_debug('AtomicParsley command line: %s' % shell_quote(cmd))

             p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
             stdout, stderr = process_communicate_or_kill(p)
@@ -140,7 +138,7 @@ class EmbedThumbnailPP(FFmpegPostProcessor):
             # for formats that don't support thumbnails (like 3gp) AtomicParsley
             # won't create to the temporary file
             if b'No changes' in stdout:
-                self._downloader.report_warning('The file format doesn\'t support embedding a thumbnail')
+                self.report_warning('The file format doesn\'t support embedding a thumbnail')
             else:
                 os.remove(encodeFilename(filename))
                 os.rename(encodeFilename(temp_filename), encodeFilename(filename))


@@ -68,8 +68,7 @@ class FFmpegPostProcessor(PostProcessor):
                 self._versions[self.basename], required_version):
             warning = 'Your copy of %s is outdated, update %s to version %s or newer if you encounter any errors.' % (
                 self.basename, self.basename, required_version)
-            if self._downloader:
-                self._downloader.report_warning(warning)
+            self.report_warning(warning)

     @staticmethod
     def get_versions(downloader=None):
@@ -99,11 +98,11 @@ class FFmpegPostProcessor(PostProcessor):
         self._paths = None
         self._versions = None
         if self._downloader:
-            prefer_ffmpeg = self._downloader.params.get('prefer_ffmpeg', True)
-            location = self._downloader.params.get('ffmpeg_location')
+            prefer_ffmpeg = self.get_param('prefer_ffmpeg', True)
+            location = self.get_param('ffmpeg_location')
             if location is not None:
                 if not os.path.exists(location):
-                    self._downloader.report_warning(
+                    self.report_warning(
                         'ffmpeg-location %s does not exist! '
                         'Continuing without avconv/ffmpeg.' % (location))
                     self._versions = {}
@@ -111,7 +110,7 @@ class FFmpegPostProcessor(PostProcessor):
                 elif not os.path.isdir(location):
                     basename = os.path.splitext(os.path.basename(location))[0]
                     if basename not in programs:
-                        self._downloader.report_warning(
+                        self.report_warning(
                             'Cannot identify executable %s, its basename should be one of %s. '
                             'Continuing without avconv/ffmpeg.' %
                             (location, ', '.join(programs)))
@@ -177,9 +176,7 @@ class FFmpegPostProcessor(PostProcessor):
             encodeFilename(self.executable, True),
             encodeArgument('-i')]
         cmd.append(encodeFilename(self._ffmpeg_filename_argument(path), True))
-        if self._downloader.params.get('verbose', False):
-            self._downloader.to_screen(
-                '[debug] %s command line: %s' % (self.basename, shell_quote(cmd)))
+        self.write_debug('%s command line: %s' % (self.basename, shell_quote(cmd)))
         handle = subprocess.Popen(
             cmd, stderr=subprocess.PIPE,
             stdout=subprocess.PIPE, stdin=subprocess.PIPE)
@@ -228,8 +225,7 @@ class FFmpegPostProcessor(PostProcessor):
             + [encodeArgument(o) for o in opts]
             + [encodeFilename(self._ffmpeg_filename_argument(out_path), True)])

-        if self._downloader.params.get('verbose', False):
-            self._downloader.to_screen('[debug] ffmpeg command line: %s' % shell_quote(cmd))
+        self.write_debug('ffmpeg command line: %s' % shell_quote(cmd))
         p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
         stdout, stderr = process_communicate_or_kill(p)
         if p.returncode != 0:
@@ -566,8 +562,7 @@ class FFmpegMergerPP(FFmpegPostProcessor):
                 'youtube-dlc will download single file media. '
                 'Update %s to version %s or newer to fix this.') % (
                     self.basename, self.basename, required_version)
-            if self._downloader:
-                self._downloader.report_warning(warning)
+            self.report_warning(warning)
             return False
         return True
@@ -656,7 +651,7 @@ class FFmpegSubtitlesConvertorPP(FFmpegPostProcessor):
             new_file = subtitles_filename(filename, lang, new_ext, info.get('ext'))

             if ext in ('dfxp', 'ttml', 'tt'):
-                self._downloader.report_warning(
+                self.report_warning(
                     'You have requested to convert dfxp (TTML) subtitles into another format, '
                     'which results in style information loss')


@@ -46,16 +46,16 @@ class SponSkrubPP(PostProcessor):
             self.to_screen('Skipping sponskrub since it is not a YouTube video')
             return [], information
         if self.cutout and not self.force and not information.get('__real_download', False):
-            self._downloader.to_screen(
-                '[sponskrub] Skipping sponskrub since the video was already downloaded. '
+            self.report_warning(
+                'Skipping sponskrub since the video was already downloaded. '
                 'Use --sponskrub-force to run sponskrub anyway')
             return [], information
         self.to_screen('Trying to %s sponsor sections' % ('remove' if self.cutout else 'mark'))
         if self.cutout:
-            self._downloader.to_screen('WARNING: Cutting out sponsor segments will cause the subtitles to go out of sync.')
+            self.report_warning('Cutting out sponsor segments will cause the subtitles to go out of sync.')
             if not information.get('__real_download', False):
-                self._downloader.to_screen('WARNING: If sponskrub is run multiple times, unintended parts of the video could be cut out.')
+                self.report_warning('If sponskrub is run multiple times, unintended parts of the video could be cut out.')
         filename = information['filepath']
         temp_filename = filename + '.' + self._temp_ext + os.path.splitext(filename)[1]
@@ -68,8 +68,7 @@ class SponSkrubPP(PostProcessor):
         cmd += ['--', information['id'], filename, temp_filename]
         cmd = [encodeArgument(i) for i in cmd]
-        if self._downloader.params.get('verbose', False):
-            self._downloader.to_screen('[debug] sponskrub command line: %s' % shell_quote(cmd))
+        self.write_debug('sponskrub command line: %s' % shell_quote(cmd))
         p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
         stdout, stderr = p.communicate()
@@ -81,6 +80,8 @@ class SponSkrubPP(PostProcessor):
             self.to_screen('No segments in the SponsorBlock database')
         else:
             stderr = stderr.decode('utf-8', 'replace')
-            msg = stderr.strip().split('\n')[-1]
+            msg = stderr.strip()
+            if not self.get_param('verbose', False):
+                msg = msg.split('\n')[-1]
             raise PostProcessingError(msg if msg else 'sponskrub failed with error code %s!' % p.returncode)
         return [], information
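The last sponskrub hunk changes the error path: in verbose mode the full stderr from sponskrub is now surfaced, while non-verbose runs still show only its final line (this is the "Print full error in verbose for sponskrub" change from the commit log). Pulled out as a standalone sketch, with a hypothetical helper name:

```python
def summarize_stderr(stderr: str, verbose: bool) -> str:
    """Mirror of the logic above: keep the whole (stripped) stderr when
    verbose, otherwise only its last line, which usually carries the
    actual error message."""
    msg = stderr.strip()
    if not verbose:
        msg = msg.split('\n')[-1]
    return msg
```

Note that `strip()` runs before the split, so a trailing newline in stderr does not produce an empty "last line".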

View File

@@ -57,16 +57,16 @@ class XAttrMetadataPP(PostProcessor):
             return [], info
         except XAttrUnavailableError as e:
-            self._downloader.report_error(str(e))
+            self.report_error(str(e))
             return [], info
         except XAttrMetadataError as e:
             if e.reason == 'NO_SPACE':
-                self._downloader.report_warning(
+                self.report_warning(
                     'There\'s no disk space left, disk quota exceeded or filesystem xattr limit exceeded. '
                     + (('Some ' if num_written else '') + 'extended attributes are not written.').capitalize())
             elif e.reason == 'VALUE_TOO_LONG':
-                self._downloader.report_warning(
+                self.report_warning(
                     'Unable to write extended attributes due to too long values.')
             else:
                 msg = 'This filesystem doesn\'t support extended attributes. '
@@ -74,5 +74,5 @@ class XAttrMetadataPP(PostProcessor):
                 msg += 'You need to use NTFS.'
             else:
                 msg += '(You may have to enable them in your /etc/fstab)'
-            self._downloader.report_error(msg)
+            self.report_error(msg)
         return [], info

View File

@@ -32,7 +32,7 @@ def rsa_verify(message, signature, key):
 def update_self(to_screen, verbose, opener):
     """Update the program file with the latest version from the repository"""
-    return to_screen('Update is currently broken.\nVisit https://github.com/pukkandan/yt-dlc/releases/latest to get the latest version')
+    return to_screen('Update is currently broken.\nVisit https://github.com/pukkandan/yt-dlp/releases/latest to get the latest version')
     UPDATE_URL = 'https://blackjack4494.github.io//update/'
     VERSION_URL = UPDATE_URL + 'LATEST_VERSION'

View File

@@ -2332,8 +2332,8 @@ def bug_reports_message():
     if ytdl_is_updateable():
         update_cmd = 'type youtube-dlc -U to update'
     else:
-        update_cmd = 'see https://github.com/pukkandan/yt-dlc on how to update'
-    msg = '; please report this issue on https://github.com/pukkandan/yt-dlc .'
+        update_cmd = 'see https://github.com/pukkandan/yt-dlp on how to update'
+    msg = '; please report this issue on https://github.com/pukkandan/yt-dlp .'
     msg += ' Make sure you are using the latest version; %s.' % update_cmd
     msg += ' Be sure to call youtube-dlc with the --verbose flag and include its complete output.'
     return msg
@@ -2433,6 +2433,16 @@ class PostProcessingError(YoutubeDLError):
         self.msg = msg
+
+
+class ExistingVideoReached(YoutubeDLError):
+    """ --break-on-existing triggered """
+    pass
+
+
+class RejectedVideoReached(YoutubeDLError):
+    """ --break-on-reject triggered """
+    pass
 class MaxDownloadsReached(YoutubeDLError):
     """ --max-downloads limit has been reached. """
     pass
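The two new exception classes back the `--break-on-existing` / `--break-on-reject` options from the commit log: following the existing `MaxDownloadsReached` pattern, raising an exception lets a match-filter deep inside extraction abort the whole playlist loop, not just skip one entry. A hedged sketch of that control flow — `process_entries` and its signature are hypothetical, for illustration only:

```python
class YoutubeDLError(Exception):
    """Base exception, mirroring the hierarchy in utils.py."""


class ExistingVideoReached(YoutubeDLError):
    """ --break-on-existing triggered """


class RejectedVideoReached(YoutubeDLError):
    """ --break-on-reject triggered """


def process_entries(entries, archive, break_on_existing=True):
    # Hypothetical driver loop: with break_on_existing, the first
    # already-archived entry stops the whole run instead of being skipped.
    done = []
    try:
        for entry in entries:
            if entry in archive:
                if break_on_existing:
                    raise ExistingVideoReached()
                continue  # default behaviour: skip and keep going
            done.append(entry)
    except ExistingVideoReached:
        pass  # the caller reports the break and exits cleanly
    return done
```

Using distinct exception types (rather than one generic "stop" signal) lets the caller report *why* the run ended: archive hit versus filter rejection.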

View File

@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
-__version__ = '2021.01.08'
+__version__ = '2021.01.14'