Mirror of https://github.com/yt-dlp/yt-dlp.git (synced 2026-01-13 02:11:18 +00:00)

Compare commits: 2024.11.04 ... 2025.02.19 (176 commits)
Commit SHA1s:

9c3e8b1696, 4985a40417, 01a63629a2, be69468752, 5271ef48c6, d48e612609, 5c4c2ddfaa, ec17fb16e8,
e7882b682b, 6ca23ffaa4, f53553087d, 4ecb833472, 2081634474, c987be0acb, 14cd7f3443, 4ca8c44a07,
241ace4f10, 1295bbedd4, 19edaa44fc, 10b7ff68e9, 0d9f061d38, 517ddf3c3f, 03c3d70577, f8d0161455,
d59f14a0a7, 817483ccc6, 861aeec449, 57c717fee4, 9fb8ab2ff6, 18a28514e3, 5ff7a43623, 3b45319344,
421bc72103, d4f5be1735, 797d2472a2, 3b99a0f0e0, c709cc41cb, 4850ce91d1, e2e73b5c65, 13825ab778,
bc88b904cd, 76ac023ff0, b3007c44cd, 78912ed9c8, bb69f5dab7, 6d304133ab, 9ff330948c, fc12e724a3,
61ae5dc34a, 4651679104, ff44ed5306, cdcf1e8672, f7d071e8aa, 45732e2590, 7bfb4f72e4, 5d904b077d,
e7cc02b14d, f0d4b8a5d6, 6b91d232e3, de82acf876, 326fb1ffaf, ccda63934d, 9676b05715, f9f24ae376,
af2c821d74, 1ef3ee7500, 20c765d023, 3fc4608656, 68221ecc87, de30f652ff, 89198bb23b, a567f97b62,
1643686104, bbc7591d3b, c8541f8b13, a3c0321825, dade5e35c8, e2ef4fece6, 1f489f4a45, 75079f4e3f,
712d2abb32, 8346b54915, 1f4e1e85a2, 763ed06ee6, 3c14e9191f, 0b6b7742c2, 3905f64920, 65cf46cddd,
9f42e68a74, 6fc85f617a, d298693b1b, 09a6c68712, 1a8851b689, b91c3925c2, 3d3ee458c1, 2037a6414f,
5421669626, dc3c4fddcc, 5460cd9189, f6c73aad5f, d5e2a379f2, bc262bcad4, f4d3e9e6dc, 6fef824025,
4bd2655398, a95ee6d880, 4c85ccd136, 2feb28028e, fca3eb5f8b, 2e49c789d3, 354cb4026c, cfa76f35d2,
2b67ac300a, c038a7b187, a13a336aa6, dc16876480, f05a1cd149, d8fb349086, 2bea793632, 62cba8a1be,
239f5f36fe, 0d146c1e36, cd0f934604, 360aed810a, 00dcde7286, 910ecc4229, 0a0d80800b, e0500cbf79,
4b5eec0aaa, fe70f20aed, c7316373c0, e0f1ae813b, 7d6c259a03, 16336c51d0, ccf0a6b86b, f919729538,
7ea2787920, f7257588bd, da252d9d32, e079ffbda6, 2009cb27e1, f351440f1d, f9d98509a8, 37cd7660ea,
d867f99622, 10fc719bc7, eb15fd5a32, 7cecd299e4, 52c0ffe40a, 637d62a3a9, f95a92b3d0, 1d253b0a27,
720b3dc453, d215fba7ed, 8388ec256f, 6365e92589, 70c55cb08f, c699bafc50, eb64ae7d5d, c014fbcddc,
39d79c9b9c, f2a4983df7, bacc31b05a, a9f85670d0, 6b43a8d84b, 2db8c2e7d5, f9c8deb4e5, 0ec9bfed4d,
c673731061, e398217aae, c39016f66d, b83ca24eb7, 240a7d43c8, f13df591d4, be3579aaf0, 85fdc66b6e
.github/ISSUE_TEMPLATE/1_broken_site.yml (vendored): 24 lines changed

@@ -2,13 +2,11 @@ name: Broken site support
 description: Report issue with yt-dlp on a supported site
 labels: [triage, site-bug]
 body:
-  - type: checkboxes
+  - type: markdown
     attributes:
-      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
-      description: Fill all fields even if you think it is irrelevant for the issue
-      options:
-        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
-          required: true
+      value: |
+        > [!IMPORTANT]
+        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
   - type: checkboxes
     id: checklist
     attributes:
@@ -24,9 +22,7 @@ body:
           required: true
         - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
   - type: input
@@ -47,6 +43,8 @@ body:
     id: verbose
     attributes:
       label: Provide verbose output that clearly demonstrates the problem
+      description: |
+        This is mandatory unless absolutely impossible to provide. If you are unable to provide the output, please explain why.
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
@@ -78,11 +76,3 @@ body:
       render: shell
     validations:
       required: true
-  - type: markdown
-    attributes:
-      value: |
-        > [!CAUTION]
-        > ### GitHub is experiencing a high volume of malicious spam comments.
-        > ### If you receive any replies asking you download a file, do NOT follow the download links!
-        >
-        > Note that this issue may be temporarily locked as an anti-spam measure after it is opened.
.github/ISSUE_TEMPLATE/2_site_support_request.yml (vendored)

@@ -2,13 +2,11 @@ name: Site support request
 description: Request support for a new site
 labels: [triage, site-request]
 body:
-  - type: checkboxes
+  - type: markdown
     attributes:
-      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
-      description: Fill all fields even if you think it is irrelevant for the issue
-      options:
-        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
-          required: true
+      value: |
+        > [!IMPORTANT]
+        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
   - type: checkboxes
     id: checklist
     attributes:
@@ -24,9 +22,7 @@ body:
           required: true
         - label: I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar requests **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
   - type: input
@@ -59,6 +55,8 @@ body:
     id: verbose
     attributes:
       label: Provide verbose output that clearly demonstrates the problem
+      description: |
+        This is mandatory unless absolutely impossible to provide. If you are unable to provide the output, please explain why.
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
@@ -90,11 +88,3 @@ body:
       render: shell
     validations:
       required: true
-  - type: markdown
-    attributes:
-      value: |
-        > [!CAUTION]
-        > ### GitHub is experiencing a high volume of malicious spam comments.
-        > ### If you receive any replies asking you download a file, do NOT follow the download links!
-        >
-        > Note that this issue may be temporarily locked as an anti-spam measure after it is opened.
.github/ISSUE_TEMPLATE/3_site_feature_request.yml (vendored)

@@ -1,14 +1,12 @@
 name: Site feature request
-description: Request a new functionality for a supported site
+description: Request new functionality for a site supported by yt-dlp
 labels: [triage, site-enhancement]
 body:
-  - type: checkboxes
+  - type: markdown
     attributes:
-      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
-      description: Fill all fields even if you think it is irrelevant for the issue
-      options:
-        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
-          required: true
+      value: |
+        > [!IMPORTANT]
+        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
   - type: checkboxes
     id: checklist
     attributes:
@@ -22,9 +20,7 @@ body:
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar requests **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
   - type: input
@@ -55,6 +51,8 @@ body:
     id: verbose
     attributes:
       label: Provide verbose output that clearly demonstrates the problem
+      description: |
+        This is mandatory unless absolutely impossible to provide. If you are unable to provide the output, please explain why.
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
@@ -86,11 +84,3 @@ body:
       render: shell
     validations:
       required: true
-  - type: markdown
-    attributes:
-      value: |
-        > [!CAUTION]
-        > ### GitHub is experiencing a high volume of malicious spam comments.
-        > ### If you receive any replies asking you download a file, do NOT follow the download links!
-        >
-        > Note that this issue may be temporarily locked as an anti-spam measure after it is opened.
.github/ISSUE_TEMPLATE/4_bug_report.yml (vendored): 28 lines changed

@@ -2,13 +2,11 @@ name: Core bug report
 description: Report a bug unrelated to any particular site or extractor
 labels: [triage, bug]
 body:
-  - type: checkboxes
+  - type: markdown
     attributes:
-      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
-      description: Fill all fields even if you think it is irrelevant for the issue
-      options:
-        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
-          required: true
+      value: |
+        > [!IMPORTANT]
+        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
   - type: checkboxes
     id: checklist
     attributes:
@@ -20,13 +18,7 @@ body:
           required: true
         - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
          required: true
-        - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
-          required: true
-        - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
-          required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
   - type: textarea
     id: description
@@ -40,6 +32,8 @@ body:
     id: verbose
     attributes:
       label: Provide verbose output that clearly demonstrates the problem
+      description: |
+        This is mandatory unless absolutely impossible to provide. If you are unable to provide the output, please explain why.
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
           required: true
@@ -71,11 +65,3 @@ body:
       render: shell
     validations:
       required: true
-  - type: markdown
-    attributes:
-      value: |
-        > [!CAUTION]
-        > ### GitHub is experiencing a high volume of malicious spam comments.
-        > ### If you receive any replies asking you download a file, do NOT follow the download links!
-        >
-        > Note that this issue may be temporarily locked as an anti-spam measure after it is opened.
.github/ISSUE_TEMPLATE/5_feature_request.yml (vendored): 26 lines changed

@@ -1,14 +1,12 @@
 name: Feature request
-description: Request a new functionality unrelated to any particular site or extractor
+description: Request a new feature unrelated to any particular site or extractor
 labels: [triage, enhancement]
 body:
-  - type: checkboxes
+  - type: markdown
     attributes:
-      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
-      description: Fill all fields even if you think it is irrelevant for the issue
-      options:
-        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
-          required: true
+      value: |
+        > [!IMPORTANT]
+        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
   - type: checkboxes
     id: checklist
     attributes:
@@ -22,9 +20,7 @@ body:
           required: true
         - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar requests **including closed ones**. DO NOT post duplicates
           required: true
   - type: textarea
     id: description
@@ -38,6 +34,8 @@ body:
     id: verbose
     attributes:
       label: Provide verbose output that clearly demonstrates the problem
+      description: |
+        This is mandatory unless absolutely impossible to provide. If you are unable to provide the output, please explain why.
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
         - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
@@ -65,11 +63,3 @@ body:
       [youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
       <more lines>
     render: shell
-  - type: markdown
-    attributes:
-      value: |
-        > [!CAUTION]
-        > ### GitHub is experiencing a high volume of malicious spam comments.
-        > ### If you receive any replies asking you download a file, do NOT follow the download links!
-        >
-        > Note that this issue may be temporarily locked as an anti-spam measure after it is opened.
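The verbose checklist above names two routes to the same debug output: the `-vU` flag on the command line, or passing `'verbose': True` in the `YoutubeDL` params when embedding yt-dlp as a library. A minimal sketch of the API route follows, using the same test URL the templates show; note that `-U` is the self-update switch and has no `YoutubeDL` params counterpart, so when embedding you update the installed package instead (e.g. `pip install -U yt-dlp`):

```python
# Sketch: the library equivalent of `yt-dlp -v <url>`.
from yt_dlp import YoutubeDL

params = {
    'verbose': True,  # prints the same debug header the issue templates ask for
}
with YoutubeDL(params) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```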
.github/ISSUE_TEMPLATE/6_question.yml (vendored): 26 lines changed

@@ -1,14 +1,12 @@
 name: Ask question
-description: Ask yt-dlp related question
+description: Ask a question about using yt-dlp
 labels: [question]
 body:
-  - type: checkboxes
+  - type: markdown
     attributes:
-      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
-      description: Fill all fields even if you think it is irrelevant for the issue
-      options:
-        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
-          required: true
+      value: |
+        > [!IMPORTANT]
+        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
   - type: markdown
     attributes:
       value: |
@@ -28,9 +26,7 @@ body:
           required: true
         - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
           required: true
   - type: textarea
     id: question
@@ -44,6 +40,8 @@ body:
     id: verbose
     attributes:
       label: Provide verbose output that clearly demonstrates the problem
+      description: |
+        This is mandatory unless absolutely impossible to provide. If you are unable to provide the output, please explain why.
       options:
         - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
         - label: "If using API, add `'verbose': True` to `YoutubeDL` params instead"
@@ -71,11 +69,3 @@ body:
      [youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
      <more lines>
     render: shell
-  - type: markdown
-    attributes:
-      value: |
-        > [!CAUTION]
-        > ### GitHub is experiencing a high volume of malicious spam comments.
-        > ### If you receive any replies asking you download a file, do NOT follow the download links!
-        >
-        > Note that this issue may be temporarily locked as an anti-spam measure after it is opened.
.github/ISSUE_TEMPLATE/config.yml (vendored): 7 lines changed

@@ -1,8 +1,5 @@
 blank_issues_enabled: false
 contact_links:
-  - name: Get help from the community on Discord
+  - name: Get help on Discord
     url: https://discord.gg/H5MNcFW63r
-    about: Join the yt-dlp Discord for community-powered support!
-  - name: Matrix Bridge to the Discord server
-    url: https://matrix.to/#/#yt-dlp:matrix.org
-    about: For those who do not want to use Discord
+    about: Join the yt-dlp Discord server for support and discussion
.github/ISSUE_TEMPLATE_tmpl/1_broken_site.yml (vendored)

@@ -18,9 +18,7 @@ body:
           required: true
         - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%%3Aissue%%20-label%%3Aspam%%20%%20) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
   - type: input
.github/ISSUE_TEMPLATE_tmpl/2_site_support_request.yml (vendored)

@@ -18,9 +18,7 @@ body:
           required: true
         - label: I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%%3Aissue%%20-label%%3Aspam%%20%%20) for similar requests **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
   - type: input
.github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml (vendored)

@@ -1,5 +1,5 @@
 name: Site feature request
-description: Request a new functionality for a supported site
+description: Request new functionality for a site supported by yt-dlp
 labels: [triage, site-enhancement]
 body:
 %(no_skip)s
@@ -16,9 +16,7 @@ body:
           required: true
         - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%%3Aissue%%20-label%%3Aspam%%20%%20) for similar requests **including closed ones**. DO NOT post duplicates
           required: true
         - label: I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
   - type: input
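The `ISSUE_TEMPLATE_tmpl` files above are Python percent-format templates: `%(no_skip)s` is replaced with the shared "do not skip" block when the concrete issue templates are generated, and every literal `%` in a URL is doubled to `%%` so the formatter leaves it alone (hence `%%3Aissue` where the rendered file reads `%3Aissue`). A minimal sketch of that rendering step, with the substitution text taken from the diffs above; the generator script yt-dlp actually ships in `devscripts/` may differ in detail:

```python
# Sketch: rendering an ISSUE_TEMPLATE_tmpl file into a concrete template.
# Indentation handling is simplified; NO_SKIP mirrors the markdown block
# the diffs above introduce.
NO_SKIP = '''\
  - type: markdown
    attributes:
      value: |
        > [!IMPORTANT]
        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
'''

with open('.github/ISSUE_TEMPLATE_tmpl/3_site_feature_request.yml') as f:
    template = f.read()

# %-formatting fills in %(no_skip)s and collapses each %% to a literal %
rendered = template % {'no_skip': NO_SKIP}

with open('.github/ISSUE_TEMPLATE/3_site_feature_request.yml', 'w') as f:
    f.write(rendered)
```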
.github/ISSUE_TEMPLATE_tmpl/4_bug_report.yml (vendored): 8 lines changed

@@ -14,13 +14,7 @@ body:
           required: true
         - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
           required: true
-        - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
-          required: true
-        - label: I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
-          required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%%3Aissue%%20-label%%3Aspam%%20%%20) for similar issues **including closed ones**. DO NOT post duplicates
           required: true
   - type: textarea
     id: description
.github/ISSUE_TEMPLATE_tmpl/5_feature_request.yml (vendored)

@@ -1,5 +1,5 @@
 name: Feature request
-description: Request a new functionality unrelated to any particular site or extractor
+description: Request a new feature unrelated to any particular site or extractor
 labels: [triage, enhancement]
 body:
 %(no_skip)s
@@ -16,9 +16,7 @@ body:
           required: true
         - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%%3Aissue%%20-label%%3Aspam%%20%%20) for similar requests **including closed ones**. DO NOT post duplicates
           required: true
   - type: textarea
     id: description
.github/ISSUE_TEMPLATE_tmpl/6_question.yml (vendored): 6 lines changed

@@ -1,5 +1,5 @@
 name: Ask question
-description: Ask yt-dlp related question
+description: Ask a question about using yt-dlp
 labels: [question]
 body:
 %(no_skip)s
@@ -22,9 +22,7 @@ body:
           required: true
         - label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
           required: true
-        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
-          required: true
-        - label: I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
+        - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%%3Aissue%%20-label%%3Aspam%%20%%20) for similar questions **including closed ones**. DO NOT post duplicates
           required: true
   - type: textarea
     id: question
.github/PULL_REQUEST_TEMPLATE.md (vendored): 37 lines changed

@@ -1,14 +1,17 @@
-**IMPORTANT**: PRs without the template will be CLOSED
+<!--
+**IMPORTANT**: PRs without the template will be CLOSED
+
+Due to the high volume of pull requests, it may be a while before your PR is reviewed.
+Please try to keep your pull request focused on a single bugfix or new feature.
+Pull requests with a vast scope and/or very large diff will take much longer to review.
+It is recommended for new contributors to stick to smaller pull requests, so you can receive much more immediate feedback as you familiarize yourself with the codebase.
+
+PLEASE AVOID FORCE-PUSHING after opening a PR, as it makes reviewing more difficult.
+-->

 ### Description of your *pull request* and other information

 <!--

 Explanation of your *pull request* in arbitrary form goes here. Please **make sure the description explains the purpose and effect** of your *pull request* and is worded well enough to be understood. Provide as much **context and examples** as possible

 -->

-ADD DESCRIPTION HERE
+ADD DETAILED DESCRIPTION HERE

 Fixes #

@@ -16,24 +19,22 @@ Fixes #
 <details open><summary>Template</summary> <!-- OPEN is intentional -->

 <!--
-# PLEASE FOLLOW THE GUIDE BELOW
-
-- You will be asked some questions, please read them **carefully** and answer honestly
-- Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x])
-- Use *Preview* tab to see how your *pull request* will actually look like
+# PLEASE FOLLOW THE GUIDE BELOW
+
+- You will be asked some questions, please read them **carefully** and answer honestly
+- Put an `x` into all the boxes `[ ]` relevant to your *pull request* (like [x])
+- Use *Preview* tab to see what your *pull request* will actually look like
 -->

 ### Before submitting a *pull request* make sure you have:
 - [ ] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
 - [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests

-### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
-- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
-- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
+### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check those that apply and remove the others:
+- [ ] I am the original author of the code in this PR, and I am willing to release it under [Unlicense](http://unlicense.org/)
+- [ ] I am not the original author of the code in this PR, but it is in the public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)

-### What is the purpose of your *pull request*?
+### What is the purpose of your *pull request*? Check those that apply and remove the others:
 - [ ] Fix or improvement to an extractor (Make sure to add/update tests)
 - [ ] New extractor ([Piracy websites will not be accepted](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy))
 - [ ] Core bug fix/improvement
.github/workflows/build.yml (vendored): 7 lines changed

@@ -411,7 +411,7 @@ jobs:
         run: | # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
           python devscripts/install_deps.py -o --include build
           python devscripts/install_deps.py --include curl-cffi
-          python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.10.0-py3-none-any.whl"
+          python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.11.1-py3-none-any.whl"

       - name: Prepare
         run: |
@@ -460,7 +460,7 @@ jobs:
         run: |
           python devscripts/install_deps.py -o --include build
           python devscripts/install_deps.py
-          python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.10.0-py3-none-any.whl"
+          python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.11.1-py3-none-any.whl"

       - name: Prepare
         run: |
@@ -504,7 +504,8 @@ jobs:
       - windows32
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/download-artifact@v4
+      - name: Download artifacts
+        uses: actions/download-artifact@v4
         with:
           path: artifact
           pattern: build-bin-*
.github/workflows/codeql.yml (vendored): 6 lines changed

@@ -33,7 +33,7 @@ jobs:

     # Initializes the CodeQL tools for scanning.
     - name: Initialize CodeQL
-      uses: github/codeql-action/init@v2
+      uses: github/codeql-action/init@v3
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
@@ -47,7 +47,7 @@ jobs:
     # Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
     # If this step fails, then you should remove it and run the build manually (see below)
     - name: Autobuild
-      uses: github/codeql-action/autobuild@v2
+      uses: github/codeql-action/autobuild@v3

     # ℹ️ Command-line programs to run using the OS shell.
     # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
@@ -60,6 +60,6 @@ jobs:
     #   ./location_of_script_within_repo/buildscript.sh

     - name: Perform CodeQL Analysis
-      uses: github/codeql-action/analyze@v2
+      uses: github/codeql-action/analyze@v3
       with:
         category: "/language:${{matrix.language}}"
.github/workflows/release-master.yml (vendored): 17 lines changed

@@ -28,3 +28,20 @@ jobs:
       actions: write # For cleaning up cache
       id-token: write # mandatory for trusted publishing
     secrets: inherit
+
+  publish_pypi:
+    needs: [release]
+    if: vars.MASTER_PYPI_PROJECT != ''
+    runs-on: ubuntu-latest
+    permissions:
+      id-token: write # mandatory for trusted publishing
+    steps:
+      - name: Download artifacts
+        uses: actions/download-artifact@v4
+        with:
+          path: dist
+          name: build-pypi
+      - name: Publish to PyPI
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          verbose: true
.github/workflows/release-nightly.yml (vendored): 17 lines changed

@@ -41,3 +41,20 @@ jobs:
       actions: write # For cleaning up cache
       id-token: write # mandatory for trusted publishing
     secrets: inherit
+
+  publish_pypi:
+    needs: [release]
+    if: vars.NIGHTLY_PYPI_PROJECT != ''
+    runs-on: ubuntu-latest
+    permissions:
+      id-token: write # mandatory for trusted publishing
+    steps:
+      - name: Download artifacts
+        uses: actions/download-artifact@v4
+        with:
+          path: dist
+          name: build-pypi
+      - name: Publish to PyPI
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          verbose: true
.github/workflows/release.yml (vendored): 19 lines changed

@@ -2,10 +2,6 @@ name: Release
 on:
   workflow_call:
     inputs:
-      prerelease:
-        required: false
-        default: true
-        type: boolean
       source:
         required: false
         default: ''
@@ -18,6 +14,10 @@ on:
         required: false
         default: ''
         type: string
+      prerelease:
+        required: false
+        default: true
+        type: boolean
   workflow_dispatch:
     inputs:
       source:
@@ -278,11 +278,20 @@ jobs:
           make clean-cache
           python -m build --no-isolation .

       - name: Upload artifacts
+        if: github.event_name != 'workflow_dispatch'
         uses: actions/upload-artifact@v4
         with:
           name: build-pypi
           path: |
             dist/*
           compression-level: 0

+      - name: Publish to PyPI
+        if: github.event_name == 'workflow_dispatch'
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          verbose: true
+          attestations: false # Currently doesn't work w/ reusable workflows (breaks nightly)
+
   publish:
     needs: [prepare, build]
.gitignore (vendored): 1 line changed

@@ -92,6 +92,7 @@ updates_key.pem
 *.class
 *.isorted
 *.stackdump
+uv.lock

 # Generated
 AUTHORS
CONTRIBUTORS: 47 lines changed

@@ -695,3 +695,50 @@ KBelmin
 kesor
 MellowKyler
 Wesley107772
+a13ssandr0
+ChocoLZS
+doe1080
+hugovdev
+jshumphrey
+julionc
+manavchaudhary1
+powergold1
+Sakura286
+SamDecrock
+stratus-ss
+subrat-lima
+gitninja1234
+jkruse
+xiaomac
+wesson09
+Crypto90
+MutantPiggieGolem1
+Sanceilaks
+Strkmn
+0x9fff00
+4ft35t
+7x11x13
+b5i
+cotko
+d3d9
+Dioarya
+finch71
+hexahigh
+InvalidUsernameException
+jixunmoe
+knackku
+krandor
+kvk-2015
+lonble
+msm595
+n10dollar
+NecroRomnt
+pjrobertson
+subsense
+test20140
+arantius
+entourage8
+lfavole
+mp3butcher
+slipinthedove
+YoshiTabletopGamer
258
Changelog.md
258
Changelog.md
@@ -4,6 +4,264 @@
|
||||
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
|
||||
-->
|
||||
|
||||
### 2025.02.19
|
||||
|
||||
#### Core changes
|
||||
- **jsinterp**
|
||||
- [Add `js_number_to_string`](https://github.com/yt-dlp/yt-dlp/commit/0d9f061d38c3a4da61972e2adad317079f2f1c84) ([#12110](https://github.com/yt-dlp/yt-dlp/issues/12110)) by [Grub4K](https://github.com/Grub4K)
|
||||
- [Improve zeroise](https://github.com/yt-dlp/yt-dlp/commit/4ca8c44a073d5aa3a3e3112c35b2b23d6ce25ac6) ([#12313](https://github.com/yt-dlp/yt-dlp/issues/12313)) by [seproDev](https://github.com/seproDev)
|
||||
|
||||
#### Extractor changes
|
||||
- **acast**: [Support shows.acast.com URLs](https://github.com/yt-dlp/yt-dlp/commit/57c717fee4bfbc9309845bbb48901b72e4b69304) ([#12223](https://github.com/yt-dlp/yt-dlp/issues/12223)) by [barsnick](https://github.com/barsnick)
|
||||
- **cwtv**
|
||||
- [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/18a28514e306e822eab4f3a79c76d515bf076406) ([#12207](https://github.com/yt-dlp/yt-dlp/issues/12207)) by [arantius](https://github.com/arantius)
|
||||
- movie: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/03c3d705778c07739e0034b51490877cffdc0983) ([#12227](https://github.com/yt-dlp/yt-dlp/issues/12227)) by [bashonly](https://github.com/bashonly)
|
||||
- **digiview**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/f53553087d3fde9dcd61d6e9f98caf09db1d8ef2) ([#9902](https://github.com/yt-dlp/yt-dlp/issues/9902)) by [lfavole](https://github.com/lfavole)
|
||||
- **dropbox**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/861aeec449c8f3c062d962945b234ff0341f61f3) ([#12228](https://github.com/yt-dlp/yt-dlp/issues/12228)) by [bashonly](https://github.com/bashonly)
|
||||
- **francetv**
|
||||
- site
|
||||
- [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/817483ccc68aed6049ed9c4a2ffae44ca82d2b1c) ([#12236](https://github.com/yt-dlp/yt-dlp/issues/12236)) by [bashonly](https://github.com/bashonly)
|
||||
- [Fix livestream extraction](https://github.com/yt-dlp/yt-dlp/commit/1295bbedd45fa8d9bc3f7a194864ae280297848e) ([#12316](https://github.com/yt-dlp/yt-dlp/issues/12316)) by [bashonly](https://github.com/bashonly)
|
||||
- **francetvinfo.fr**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/5c4c2ddfaa47988b4d50c1ad4988badc0b4f30c2) ([#12402](https://github.com/yt-dlp/yt-dlp/issues/12402)) by [bashonly](https://github.com/bashonly)
|
||||
- **gem.cbc.ca**: [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/5271ef48c6f61c145e03e18e960995d2e651d205) ([#12404](https://github.com/yt-dlp/yt-dlp/issues/12404)) by [bashonly](https://github.com/bashonly), [dirkf](https://github.com/dirkf)
|
||||
- **generic**: [Extract `live_status` for DASH manifest URLs](https://github.com/yt-dlp/yt-dlp/commit/19edaa44fcd375f54e63d6227b092f5252d3e889) ([#12256](https://github.com/yt-dlp/yt-dlp/issues/12256)) by [mp3butcher](https://github.com/mp3butcher)
|
||||
- **globo**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f8d0161455f00add65585ca1a476a7b5d56f5f96) ([#11795](https://github.com/yt-dlp/yt-dlp/issues/11795)) by [slipinthedove](https://github.com/slipinthedove), [YoshiTabletopGamer](https://github.com/YoshiTabletopGamer)
|
||||
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/d59f14a0a7a8b55e6bf468237def62b73ab4a517) ([#12237](https://github.com/yt-dlp/yt-dlp/issues/12237)) by [alard](https://github.com/alard)
|
||||
- **pbs**: [Support www.thirteen.org URLs](https://github.com/yt-dlp/yt-dlp/commit/9fb8ab2ff67fb699f60cce09163a580976e90c0e) ([#11191](https://github.com/yt-dlp/yt-dlp/issues/11191)) by [rohieb](https://github.com/rohieb)
|
||||
- **reddit**: [Bypass gated subreddit warning](https://github.com/yt-dlp/yt-dlp/commit/6ca23ffaa4663cb552f937f0b1e9769b66db11bd) ([#12335](https://github.com/yt-dlp/yt-dlp/issues/12335)) by [bashonly](https://github.com/bashonly)
|
||||
- **twitter**: [Fix syndication token generation](https://github.com/yt-dlp/yt-dlp/commit/14cd7f3443c6da4d49edaefcc12da9dee86e243e) ([#12107](https://github.com/yt-dlp/yt-dlp/issues/12107)) by [Grub4K](https://github.com/Grub4K), [pjrobertson](https://github.com/pjrobertson)
|
||||
- **youtube**
|
||||
- [Retry on more critical requests](https://github.com/yt-dlp/yt-dlp/commit/d48e612609d012abbea3785be4d26d78a014abb2) ([#12339](https://github.com/yt-dlp/yt-dlp/issues/12339)) by [coletdjnz](https://github.com/coletdjnz)
|
||||
- [nsig workaround for `tce` player JS](https://github.com/yt-dlp/yt-dlp/commit/ec17fb16e8d69d4e3e10fb73bf3221be8570dfee) ([#12401](https://github.com/yt-dlp/yt-dlp/issues/12401)) by [bashonly](https://github.com/bashonly)
|
||||
- **zdf**: [Extract more metadata](https://github.com/yt-dlp/yt-dlp/commit/241ace4f104d50fdf7638f9203927aefcf57a1f7) ([#9565](https://github.com/yt-dlp/yt-dlp/issues/9565)) by [StefanLobbenmeier](https://github.com/StefanLobbenmeier) (With fixes in [e7882b6](https://github.com/yt-dlp/yt-dlp/commit/e7882b682b959e476d8454911655b3e9b14c79b2) by [bashonly](https://github.com/bashonly))
|
||||
|
||||
#### Downloader changes
|
||||
- **hls**
|
||||
- [Fix `BYTERANGE` logic](https://github.com/yt-dlp/yt-dlp/commit/10b7ff68e98f17655e31952f6e17120b2d7dda96) ([#11972](https://github.com/yt-dlp/yt-dlp/issues/11972)) by [entourage8](https://github.com/entourage8)
|
||||
- [Support `--write-pages` for m3u8 media playlists](https://github.com/yt-dlp/yt-dlp/commit/be69468752ff598cacee57bb80533deab2367a5d) ([#12333](https://github.com/yt-dlp/yt-dlp/issues/12333)) by [bashonly](https://github.com/bashonly)
|
||||
- [Support `hls_media_playlist_data` format field](https://github.com/yt-dlp/yt-dlp/commit/c987be0acb6872c6561f28aa28171e803393d851) ([#12322](https://github.com/yt-dlp/yt-dlp/issues/12322)) by [bashonly](https://github.com/bashonly)
|
||||
|
||||
#### Misc. changes
|
||||
- [Improve Issue/PR templates](https://github.com/yt-dlp/yt-dlp/commit/517ddf3c3f12560ab93e3d36244dc82db9f97818) ([#11499](https://github.com/yt-dlp/yt-dlp/issues/11499)) by [seproDev](https://github.com/seproDev) (With fixes in [4ecb833](https://github.com/yt-dlp/yt-dlp/commit/4ecb833472c90e078567b561fb7c089f1aa9587b) by [bashonly](https://github.com/bashonly))
|
||||
- **cleanup**: Miscellaneous: [4985a40](https://github.com/yt-dlp/yt-dlp/commit/4985a4041770eaa0016271809a1fd950dc809a55) by [dirkf](https://github.com/dirkf), [Grub4K](https://github.com/Grub4K), [StefanLobbenmeier](https://github.com/StefanLobbenmeier)
|
||||
- **docs**: [Add note to `supportedsites.md`](https://github.com/yt-dlp/yt-dlp/commit/01a63629a21781458dcbd38779898e117678f5ff) ([#12382](https://github.com/yt-dlp/yt-dlp/issues/12382)) by [seproDev](https://github.com/seproDev)
|
||||
- **test**: download: [Validate and sort info dict fields](https://github.com/yt-dlp/yt-dlp/commit/208163447408c78673b08c172beafe5c310fb167) ([#12299](https://github.com/yt-dlp/yt-dlp/issues/12299)) by [bashonly](https://github.com/bashonly), [pzhlkj6612](https://github.com/pzhlkj6612)
|
||||
|
||||
### 2025.01.26
|
||||
|
||||
#### Core changes
|
||||
- [Fix float comparison values in format filters](https://github.com/yt-dlp/yt-dlp/commit/f7d071e8aa3bf67ed7e0f881e749ca9ab50b3f8f) ([#11880](https://github.com/yt-dlp/yt-dlp/issues/11880)) by [bashonly](https://github.com/bashonly), [Dioarya](https://github.com/Dioarya)
|
||||
- **utils**: `sanitize_path`: [Fix some incorrect behavior](https://github.com/yt-dlp/yt-dlp/commit/fc12e724a3b4988cfc467d2981887dde48c26b69) ([#11923](https://github.com/yt-dlp/yt-dlp/issues/11923)) by [Grub4K](https://github.com/Grub4K)
|
||||
|
||||
#### Extractor changes
|
||||
- **1tv**: [Support sport1tv.ru domain](https://github.com/yt-dlp/yt-dlp/commit/61ae5dc34ac775d6c122575e21ef2153b1273a2b) ([#11889](https://github.com/yt-dlp/yt-dlp/issues/11889)) by [kvk-2015](https://github.com/kvk-2015)
|
||||
- **abematv**: [Support season extraction](https://github.com/yt-dlp/yt-dlp/commit/c709cc41cbc16edc846e0a431cfa8508396d4cb6) ([#11771](https://github.com/yt-dlp/yt-dlp/issues/11771)) by [middlingphys](https://github.com/middlingphys)
|
||||
- **bilibili**
|
||||
- [Support space `/lists/` URLs](https://github.com/yt-dlp/yt-dlp/commit/465167910407449354eb48e9861efd0819f53eb5) ([#11964](https://github.com/yt-dlp/yt-dlp/issues/11964)) by [c-basalt](https://github.com/c-basalt)
|
||||
- [Support space video list extraction without login](https://github.com/yt-dlp/yt-dlp/commit/78912ed9c81f109169b828c397294a6cf8eacf41) ([#12089](https://github.com/yt-dlp/yt-dlp/issues/12089)) by [grqz](https://github.com/grqz)
|
||||
- **bilibilidynamic**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/9676b05715b61c8c5dd5598871e60d8807fb1a86) ([#11838](https://github.com/yt-dlp/yt-dlp/issues/11838)) by [finch71](https://github.com/finch71), [grqz](https://github.com/grqz)
|
||||
- **bluesky**: [Prefer source format](https://github.com/yt-dlp/yt-dlp/commit/ccda63934df7de2823f0834218c4254c7c4d2e4c) ([#12154](https://github.com/yt-dlp/yt-dlp/issues/12154)) by [0x9fff00](https://github.com/0x9fff00)
|
||||
- **crunchyroll**: [Remove extractors](https://github.com/yt-dlp/yt-dlp/commit/ff44ed53061e065804da6275d182d7928cc03a5e) ([#12195](https://github.com/yt-dlp/yt-dlp/issues/12195)) by [seproDev](https://github.com/seproDev)
|
||||
- **dropout**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/164368610456e2d96b279f8b120dea08f7b1d74f) ([#12102](https://github.com/yt-dlp/yt-dlp/issues/12102)) by [bashonly](https://github.com/bashonly)
|
||||
- **eggs**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/20c765d02385a105c8ef13b6f7a737491d29c19a) ([#11904](https://github.com/yt-dlp/yt-dlp/issues/11904)) by [seproDev](https://github.com/seproDev), [subsense](https://github.com/subsense)
|
||||
- **funimation**: [Remove extractors](https://github.com/yt-dlp/yt-dlp/commit/cdcf1e86726b8fa44f7e7126bbf1c18e1798d25c) ([#12167](https://github.com/yt-dlp/yt-dlp/issues/12167)) by [doe1080](https://github.com/doe1080)
|
||||
- **goodgame**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e7cc02b14d8d323f805d14325a9c95593a170d28) ([#12173](https://github.com/yt-dlp/yt-dlp/issues/12173)) by [NecroRomnt](https://github.com/NecroRomnt)
|
||||
- **lbry**: [Support signed URLs](https://github.com/yt-dlp/yt-dlp/commit/de30f652ffb7623500215f5906844f2ae0d92c7b) ([#12138](https://github.com/yt-dlp/yt-dlp/issues/12138)) by [seproDev](https://github.com/seproDev)
|
||||
- **naver**: [Fix m3u8 formats extraction](https://github.com/yt-dlp/yt-dlp/commit/b3007c44cdac38187fc6600de76959a7079a44d1) ([#12037](https://github.com/yt-dlp/yt-dlp/issues/12037)) by [kclauhk](https://github.com/kclauhk)
|
||||
- **nest**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/1ef3ee7500c4ab8c26f7fdc5b0ad1da4d16eec8e) ([#11747](https://github.com/yt-dlp/yt-dlp/issues/11747)) by [pabs3](https://github.com/pabs3), [seproDev](https://github.com/seproDev)
|
||||
- **niconico**: series: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/bc88b904cd02314da41ce1b2fdf046d0680fe965) ([#11822](https://github.com/yt-dlp/yt-dlp/issues/11822)) by [test20140](https://github.com/test20140)
|
||||
- **nrk**
|
||||
- [Extract more formats](https://github.com/yt-dlp/yt-dlp/commit/89198bb23b4d03e0473ac408bfb50d67c2f71165) ([#12069](https://github.com/yt-dlp/yt-dlp/issues/12069)) by [hexahigh](https://github.com/hexahigh)
|
||||
- [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/45732e2590a1bd0bc9608f5eb68c59341ca84f02) ([#12193](https://github.com/yt-dlp/yt-dlp/issues/12193)) by [hexahigh](https://github.com/hexahigh)
|
||||
- **patreon**: [Extract attachment filename as `alt_title`](https://github.com/yt-dlp/yt-dlp/commit/e2e73b5c65593ec0a5e685663e6ec0f4aaffc1f1) ([#12000](https://github.com/yt-dlp/yt-dlp/issues/12000)) by [msm595](https://github.com/msm595)
|
||||
- **pbs**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/13825ab77815ee6e1603abbecbb9f3795057b93c) ([#12024](https://github.com/yt-dlp/yt-dlp/issues/12024)) by [dirkf](https://github.com/dirkf), [krandor](https://github.com/krandor), [n10dollar](https://github.com/n10dollar)
|
||||
- **piramidetv**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/af2c821d74049b519895288aca23cee81fc4b049) ([#10777](https://github.com/yt-dlp/yt-dlp/issues/10777)) by [HobbyistDev](https://github.com/HobbyistDev), [kclauhk](https://github.com/kclauhk), [seproDev](https://github.com/seproDev)
|
||||
- **redgifs**: [Support `/ifr/` URLs](https://github.com/yt-dlp/yt-dlp/commit/4850ce91d163579fa615c3c0d44c9bd64682c22b) ([#11805](https://github.com/yt-dlp/yt-dlp/issues/11805)) by [invertico](https://github.com/invertico)
|
||||
- **rtvslo.si**: show: [Extract more metadata](https://github.com/yt-dlp/yt-dlp/commit/3fc46086562857d5493cbcff687f76e4e4ed303f) ([#12136](https://github.com/yt-dlp/yt-dlp/issues/12136)) by [cotko](https://github.com/cotko)
|
||||
- **senategov**: [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/68221ecc87c6a3f3515757bac2a0f9674a38e3f2) ([#9361](https://github.com/yt-dlp/yt-dlp/issues/9361)) by [Grabien](https://github.com/Grabien), [seproDev](https://github.com/seproDev)
- **soundcloud**
    - [Extract more metadata](https://github.com/yt-dlp/yt-dlp/commit/6d304133ab32bcd1eb78ff1467f1a41dd9b66c33) ([#11945](https://github.com/yt-dlp/yt-dlp/issues/11945)) by [7x11x13](https://github.com/7x11x13)
    - user: [Add `/comments` page support](https://github.com/yt-dlp/yt-dlp/commit/7bfb4f72e490310d2681c7f4815218a2ebbc73ee) ([#11999](https://github.com/yt-dlp/yt-dlp/issues/11999)) by [7x11x13](https://github.com/7x11x13)
- **subsplash**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/5d904b077d2f58ae44bdf208d2dcfcc3ff8347f5) ([#11054](https://github.com/yt-dlp/yt-dlp/issues/11054)) by [seproDev](https://github.com/seproDev), [subrat-lima](https://github.com/subrat-lima)
- **theatercomplextownppv**: [Support `live` URLs](https://github.com/yt-dlp/yt-dlp/commit/797d2472a299692e01ad1500e8c3b7bc1daa7fe4) ([#11720](https://github.com/yt-dlp/yt-dlp/issues/11720)) by [bashonly](https://github.com/bashonly)
- **vimeo**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/9ff330948c92f6b2e1d9c928787362ab19cd6c62) ([#12142](https://github.com/yt-dlp/yt-dlp/issues/12142)) by [jixunmoe](https://github.com/jixunmoe)
- **vimp**: Playlist: [Add support for tags](https://github.com/yt-dlp/yt-dlp/commit/d4f5be1735c8feaeb3308666e0b878e9782f529d) ([#11688](https://github.com/yt-dlp/yt-dlp/issues/11688)) by [FestplattenSchnitzel](https://github.com/FestplattenSchnitzel)
- **weibo**: [Extend `_VALID_URL`](https://github.com/yt-dlp/yt-dlp/commit/a567f97b62ae9f6d6f5a9376c361512ab8dceda2) ([#12088](https://github.com/yt-dlp/yt-dlp/issues/12088)) by [4ft35t](https://github.com/4ft35t)
- **xhamster**: [Various improvements](https://github.com/yt-dlp/yt-dlp/commit/3b99a0f0e07f0120ab416f34a8f5ab75d4fdf1d1) ([#11738](https://github.com/yt-dlp/yt-dlp/issues/11738)) by [knackku](https://github.com/knackku)
- **xiaohongshu**: [Extract more formats](https://github.com/yt-dlp/yt-dlp/commit/f9f24ae376a9eaca777816479a4a29f6f0ce7681) ([#12147](https://github.com/yt-dlp/yt-dlp/issues/12147)) by [seproDev](https://github.com/seproDev)
- **youtube**
    - [Download `tv` client Innertube config](https://github.com/yt-dlp/yt-dlp/commit/326fb1ffaf4e8349f1fe8ba2a81839652e044bff) ([#12168](https://github.com/yt-dlp/yt-dlp/issues/12168)) by [coletdjnz](https://github.com/coletdjnz)
    - [Extract `media_type` for livestreams](https://github.com/yt-dlp/yt-dlp/commit/421bc72103d1faed473a451299cd17d6abb433bb) ([#11605](https://github.com/yt-dlp/yt-dlp/issues/11605)) by [nosoop](https://github.com/nosoop)
    - [Restore convenience workarounds](https://github.com/yt-dlp/yt-dlp/commit/f0d4b8a5d6354b294bc9631cf15a7160b7bad5de) ([#12181](https://github.com/yt-dlp/yt-dlp/issues/12181)) by [bashonly](https://github.com/bashonly)
    - [Update `ios` player client](https://github.com/yt-dlp/yt-dlp/commit/de82acf8769282ce321a86737ecc1d4bef0e82a7) ([#12155](https://github.com/yt-dlp/yt-dlp/issues/12155)) by [b5i](https://github.com/b5i)
    - [Use different PO token for GVS and Player](https://github.com/yt-dlp/yt-dlp/commit/6b91d232e316efa406035915532eb126fbaeea38) ([#12090](https://github.com/yt-dlp/yt-dlp/issues/12090)) by [coletdjnz](https://github.com/coletdjnz)
    - tab: [Improve shorts title extraction](https://github.com/yt-dlp/yt-dlp/commit/76ac023ff02f06e8c003d104f02a03deeddebdcd) ([#11997](https://github.com/yt-dlp/yt-dlp/issues/11997)) by [bashonly](https://github.com/bashonly), [d3d9](https://github.com/d3d9)
- **zdf**: [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/bb69f5dab79fb32c4ec0d50e05f7fa26d05d54ba) ([#11041](https://github.com/yt-dlp/yt-dlp/issues/11041)) by [InvalidUsernameException](https://github.com/InvalidUsernameException)

#### Misc. changes
- **cleanup**: Miscellaneous: [3b45319](https://github.com/yt-dlp/yt-dlp/commit/3b4531934465580be22937fecbb6e1a3a9e2334f) by [bashonly](https://github.com/bashonly), [lonble](https://github.com/lonble), [pjrobertson](https://github.com/pjrobertson), [seproDev](https://github.com/seproDev)

### 2025.01.15

#### Extractor changes
- **youtube**: [Do not use `web_creator` as a default client](https://github.com/yt-dlp/yt-dlp/commit/c8541f8b13e743fcfa06667530d13fee8686e22a) ([#12087](https://github.com/yt-dlp/yt-dlp/issues/12087)) by [bashonly](https://github.com/bashonly)

### 2025.01.12

#### Core changes
- [Fix filename sanitization with `--no-windows-filenames`](https://github.com/yt-dlp/yt-dlp/commit/8346b549150003df988538e54c9d8bc4de568979) ([#11988](https://github.com/yt-dlp/yt-dlp/issues/11988)) by [bashonly](https://github.com/bashonly)
- [Validate retries values are non-negative](https://github.com/yt-dlp/yt-dlp/commit/1f4e1e85a27c5b43e34d7706cfd88ffce1b56a4a) ([#11927](https://github.com/yt-dlp/yt-dlp/issues/11927)) by [Strkmn](https://github.com/Strkmn)

#### Extractor changes
- **drtalks**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/1f489f4a45691cac3f9e787d22a3a8a086229ba6) ([#10831](https://github.com/yt-dlp/yt-dlp/issues/10831)) by [pzhlkj6612](https://github.com/pzhlkj6612), [seproDev](https://github.com/seproDev)
- **plvideo**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/3c14e9191f3035b9a729d1d87bc0381f42de57cf) ([#10657](https://github.com/yt-dlp/yt-dlp/issues/10657)) by [Sanceilaks](https://github.com/Sanceilaks), [seproDev](https://github.com/seproDev)
- **vine**: [Remove extractors](https://github.com/yt-dlp/yt-dlp/commit/e2ef4fece6c9742d1733e3bae408c4787765f78c) ([#11700](https://github.com/yt-dlp/yt-dlp/issues/11700)) by [allendema](https://github.com/allendema)
- **xiaohongshu**: [Extend `_VALID_URL`](https://github.com/yt-dlp/yt-dlp/commit/763ed06ee69f13949397897bd42ff2ec3dc3d384) ([#11806](https://github.com/yt-dlp/yt-dlp/issues/11806)) by [HobbyistDev](https://github.com/HobbyistDev)
- **youtube**
    - [Fix DASH formats incorrectly skipped in some situations](https://github.com/yt-dlp/yt-dlp/commit/0b6b7742c2e7f2a1fcb0b54ef3dd484bab404b3f) ([#11910](https://github.com/yt-dlp/yt-dlp/issues/11910)) by [coletdjnz](https://github.com/coletdjnz)
    - [Refactor cookie auth](https://github.com/yt-dlp/yt-dlp/commit/75079f4e3f7dce49b61ef01da7adcd9876a0ca3b) ([#11989](https://github.com/yt-dlp/yt-dlp/issues/11989)) by [coletdjnz](https://github.com/coletdjnz)
    - [Use `tv` instead of `mweb` client by default](https://github.com/yt-dlp/yt-dlp/commit/712d2abb32f59b2d246be2901255f84f1a4c30b3) ([#12059](https://github.com/yt-dlp/yt-dlp/issues/12059)) by [coletdjnz](https://github.com/coletdjnz)

#### Misc. changes
- **cleanup**: Miscellaneous: [dade5e3](https://github.com/yt-dlp/yt-dlp/commit/dade5e35c89adaad04408bfef766820dbca06ebe) by [grqz](https://github.com/grqz), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)

### 2024.12.23

#### Core changes
- [Don't sanitize filename on Unix when `--no-windows-filenames`](https://github.com/yt-dlp/yt-dlp/commit/6fc85f617a5850307fd5b258477070e6ee177796) ([#9591](https://github.com/yt-dlp/yt-dlp/issues/9591)) by [pukkandan](https://github.com/pukkandan)
- **update**
    - [Check 64-bitness when upgrading ARM builds](https://github.com/yt-dlp/yt-dlp/commit/b91c3925c2059970daa801cb131c0c2f4f302e72) ([#11819](https://github.com/yt-dlp/yt-dlp/issues/11819)) by [bashonly](https://github.com/bashonly)
    - [Fix endless update loop for `linux_exe` builds](https://github.com/yt-dlp/yt-dlp/commit/3d3ee458c1fe49dd5ebd7651a092119d23eb7000) ([#11827](https://github.com/yt-dlp/yt-dlp/issues/11827)) by [bashonly](https://github.com/bashonly)

#### Extractor changes
- **soundcloud**: [Various fixes](https://github.com/yt-dlp/yt-dlp/commit/d298693b1b266d198e8eeecb90ea17c4a031268f) ([#11820](https://github.com/yt-dlp/yt-dlp/issues/11820)) by [bashonly](https://github.com/bashonly)
- **youtube**
    - [Add age-gate workaround for some embeddable videos](https://github.com/yt-dlp/yt-dlp/commit/09a6c687126f04e243fcb105a828787efddd1030) ([#11821](https://github.com/yt-dlp/yt-dlp/issues/11821)) by [bashonly](https://github.com/bashonly)
    - [Fix `uploader_id` extraction](https://github.com/yt-dlp/yt-dlp/commit/1a8851b689763e5173b96f70f8a71df0e4a44b66) ([#11818](https://github.com/yt-dlp/yt-dlp/issues/11818)) by [bashonly](https://github.com/bashonly)
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/65cf46cddd873fd229dbb0fc0689bca4c201c6b6) ([#11893](https://github.com/yt-dlp/yt-dlp/issues/11893)) by [bashonly](https://github.com/bashonly)
    - [Skip iOS formats that require PO Token](https://github.com/yt-dlp/yt-dlp/commit/9f42e68a74f3f00b0253fe70763abd57cac4237b) ([#11890](https://github.com/yt-dlp/yt-dlp/issues/11890)) by [coletdjnz](https://github.com/coletdjnz)

### 2024.12.13

#### Extractor changes
- **patreon**: campaign: [Support /c/ URLs](https://github.com/yt-dlp/yt-dlp/commit/bc262bcad4d3683ceadf61a7eb87e233e72adef3) ([#11756](https://github.com/yt-dlp/yt-dlp/issues/11756)) by [bashonly](https://github.com/bashonly)
- **soundcloud**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/f4d3e9e6dc25077b79849a31a2f67f93fdc01e62) ([#11777](https://github.com/yt-dlp/yt-dlp/issues/11777)) by [bashonly](https://github.com/bashonly)
- **youtube**
    - [Fix `release_date` extraction](https://github.com/yt-dlp/yt-dlp/commit/d5e2a379f2adcb28bc48c7d9e90716d7278f89d2) ([#11759](https://github.com/yt-dlp/yt-dlp/issues/11759)) by [MutantPiggieGolem1](https://github.com/MutantPiggieGolem1)
    - [Fix signature function extraction for `2f1832d2`](https://github.com/yt-dlp/yt-dlp/commit/5460cd91891bf613a2065e2fc278d9903c37a127) ([#11801](https://github.com/yt-dlp/yt-dlp/issues/11801)) by [bashonly](https://github.com/bashonly)
    - [Prioritize original language over auto-dubbed audio](https://github.com/yt-dlp/yt-dlp/commit/dc3c4fddcc653989dae71fc563d82a308fc898cc) ([#11803](https://github.com/yt-dlp/yt-dlp/issues/11803)) by [bashonly](https://github.com/bashonly)
    - search_url: [Fix playlist searches](https://github.com/yt-dlp/yt-dlp/commit/f6c73aad5f1a67544bea137ebd9d1e22e0e56567) ([#11782](https://github.com/yt-dlp/yt-dlp/issues/11782)) by [Crypto90](https://github.com/Crypto90)

#### Misc. changes
- **cleanup**: [Make more playlist entries lazy](https://github.com/yt-dlp/yt-dlp/commit/54216696261bc07cacd9a837c501d9e0b7fed09e) ([#11763](https://github.com/yt-dlp/yt-dlp/issues/11763)) by [seproDev](https://github.com/seproDev)

### 2024.12.06

#### Core changes
- **cookies**: [Add `--cookies-from-browser` support for MS Store Firefox](https://github.com/yt-dlp/yt-dlp/commit/354cb4026cf2191e1a130ec2a627b95cabfbc60a) ([#11731](https://github.com/yt-dlp/yt-dlp/issues/11731)) by [wesson09](https://github.com/wesson09)

#### Extractor changes
- **bilibili**: [Fix HD formats extraction](https://github.com/yt-dlp/yt-dlp/commit/fca3eb5f8be08d5fab2e18b45b7281a12e566725) ([#11734](https://github.com/yt-dlp/yt-dlp/issues/11734)) by [grqz](https://github.com/grqz)
- **soundcloud**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/2feb28028ee48f2185d2d95076e62accb09b9e2e) ([#11742](https://github.com/yt-dlp/yt-dlp/issues/11742)) by [bashonly](https://github.com/bashonly)
- **youtube**
    - [Fix `n` sig extraction for player `3bb1f723`](https://github.com/yt-dlp/yt-dlp/commit/a95ee6d8803fca9157adecf63732ab58bf87fd88) ([#11750](https://github.com/yt-dlp/yt-dlp/issues/11750)) by [bashonly](https://github.com/bashonly) (With fixes in [4bd2655](https://github.com/yt-dlp/yt-dlp/commit/4bd2655398aed450456197a6767639114a24eac2))
    - [Fix signature function extraction](https://github.com/yt-dlp/yt-dlp/commit/4c85ccd1366c88cf93982f8350f58eed17355981) ([#11751](https://github.com/yt-dlp/yt-dlp/issues/11751)) by [bashonly](https://github.com/bashonly)
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/2e49c789d3eebc39af8910705d65a98bca0e4c4f) ([#11724](https://github.com/yt-dlp/yt-dlp/issues/11724)) by [bashonly](https://github.com/bashonly)

### 2024.12.03

#### Core changes
- [Add `playlist_webpage_url` field](https://github.com/yt-dlp/yt-dlp/commit/7d6c259a03bc4707a319e5e8c6eff0278707874b) ([#11613](https://github.com/yt-dlp/yt-dlp/issues/11613)) by [seproDev](https://github.com/seproDev)

#### Extractor changes
- [Handle fragmented formats in `_remove_duplicate_formats`](https://github.com/yt-dlp/yt-dlp/commit/e0500cbf796323551bbabe5b8ed8c75a511ba47a) ([#11637](https://github.com/yt-dlp/yt-dlp/issues/11637)) by [Grub4K](https://github.com/Grub4K)
- **bilibili**
    - [Always try to extract HD formats](https://github.com/yt-dlp/yt-dlp/commit/dc1687648077c5bf64863b307ecc5ab7e029bd8d) ([#10559](https://github.com/yt-dlp/yt-dlp/issues/10559)) by [grqz](https://github.com/grqz)
    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/239f5f36fe04603bec59c8b975f6a792f10246db) ([#11667](https://github.com/yt-dlp/yt-dlp/issues/11667)) by [grqz](https://github.com/grqz) (With fixes in [f05a1cd](https://github.com/yt-dlp/yt-dlp/commit/f05a1cd1492fc98dc8d80d2081d632a1879913d2) by [bashonly](https://github.com/bashonly), [grqz](https://github.com/grqz))
    - [Fix subtitles and chapters extraction](https://github.com/yt-dlp/yt-dlp/commit/a13a336aa6f906812701abec8101b73b73db8ff7) ([#11708](https://github.com/yt-dlp/yt-dlp/issues/11708)) by [xiaomac](https://github.com/xiaomac)
- **chaturbate**: [Fix support for non-public streams](https://github.com/yt-dlp/yt-dlp/commit/4b5eec0aaa7c02627f27a386591b735b90e681a8) ([#11624](https://github.com/yt-dlp/yt-dlp/issues/11624)) by [jkruse](https://github.com/jkruse)
- **dacast**: [Fix HLS AES formats extraction](https://github.com/yt-dlp/yt-dlp/commit/0a0d80800b9350d1a4c4b18d82cfb77ffbc3c507) ([#11644](https://github.com/yt-dlp/yt-dlp/issues/11644)) by [bashonly](https://github.com/bashonly)
- **dropbox**: [Fix password-protected video extraction](https://github.com/yt-dlp/yt-dlp/commit/00dcde728635633eee969ad4d498b9f233c4a94e) ([#11636](https://github.com/yt-dlp/yt-dlp/issues/11636)) by [bashonly](https://github.com/bashonly)
- **duoplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/62cba8a1bedbfc0ddde7267ae57b72bf5f7ea7b1) ([#11588](https://github.com/yt-dlp/yt-dlp/issues/11588)) by [bashonly](https://github.com/bashonly), [glensc](https://github.com/glensc)
- **facebook**: [Support more groups URLs](https://github.com/yt-dlp/yt-dlp/commit/e0f1ae813b36e783e2348ba2a1566e12f5cd8f6e) ([#11576](https://github.com/yt-dlp/yt-dlp/issues/11576)) by [grqz](https://github.com/grqz)
- **instagram**: [Support `share` URLs](https://github.com/yt-dlp/yt-dlp/commit/360aed810ad85db950df586282d256516c98cd2d) ([#11677](https://github.com/yt-dlp/yt-dlp/issues/11677)) by [grqz](https://github.com/grqz)
- **microsoftembed**: [Make format extraction non fatal](https://github.com/yt-dlp/yt-dlp/commit/2bea7936323ca4b6f3b9b1fdd892566223e30efa) ([#11654](https://github.com/yt-dlp/yt-dlp/issues/11654)) by [seproDev](https://github.com/seproDev)
- **mitele**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/cd0f934604587ed793e9177f6a127e5dcf99a7dd) ([#11683](https://github.com/yt-dlp/yt-dlp/issues/11683)) by [DarkZeros](https://github.com/DarkZeros)
- **stripchat**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/16336c51d0848a6868a4fa04e749fa03548b4913) ([#11596](https://github.com/yt-dlp/yt-dlp/issues/11596)) by [gitninja1234](https://github.com/gitninja1234)
- **tiktok**: [Deprioritize animated thumbnails](https://github.com/yt-dlp/yt-dlp/commit/910ecc422930bca14e2abe4986f5f92359e3cea8) ([#11645](https://github.com/yt-dlp/yt-dlp/issues/11645)) by [bashonly](https://github.com/bashonly)
- **vk**: [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/c038a7b187ba24360f14134842a7a2cf897c33b1) ([#11715](https://github.com/yt-dlp/yt-dlp/issues/11715)) by [bashonly](https://github.com/bashonly)
- **youtube**
    - [Adjust player clients for site changes](https://github.com/yt-dlp/yt-dlp/commit/0d146c1e36f467af30e87b7af651bdee67b73500) ([#11663](https://github.com/yt-dlp/yt-dlp/issues/11663)) by [bashonly](https://github.com/bashonly)
    - tab: [Fix playlists tab extraction](https://github.com/yt-dlp/yt-dlp/commit/fe70f20aedf528fdee332131bc9b6710e54e6f10) ([#11615](https://github.com/yt-dlp/yt-dlp/issues/11615)) by [seproDev](https://github.com/seproDev)

#### Networking changes
- **Request Handler**: websockets: [Support websockets 14.0+](https://github.com/yt-dlp/yt-dlp/commit/c7316373c0a886f65a07a51e50ee147bb3294c85) ([#11616](https://github.com/yt-dlp/yt-dlp/issues/11616)) by [coletdjnz](https://github.com/coletdjnz)

#### Misc. changes
- **cleanup**
    - [Bump ruff to 0.8.x](https://github.com/yt-dlp/yt-dlp/commit/d8fb3490863653182864d2a53522f350d67a9ff8) ([#11608](https://github.com/yt-dlp/yt-dlp/issues/11608)) by [seproDev](https://github.com/seproDev)
    - Miscellaneous
        - [ccf0a6b](https://github.com/yt-dlp/yt-dlp/commit/ccf0a6b86b7f68a75463804fe485ec240b8635f0) by [bashonly](https://github.com/bashonly), [pzhlkj6612](https://github.com/pzhlkj6612)
        - [2b67ac3](https://github.com/yt-dlp/yt-dlp/commit/2b67ac300ac8b44368fb121637d1743cea8c5b6b) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)

### 2024.11.18

#### Important changes
- **Login with OAuth is no longer supported for YouTube**
Due to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)

#### Core changes
- [Catch broken Cryptodome installations](https://github.com/yt-dlp/yt-dlp/commit/b83ca24eb72e1e558b0185bd73975586c0bc0546) ([#11486](https://github.com/yt-dlp/yt-dlp/issues/11486)) by [seproDev](https://github.com/seproDev)
- **utils**
    - [Fix `join_nonempty`, add `**kwargs` to `unpack`](https://github.com/yt-dlp/yt-dlp/commit/39d79c9b9cf23411d935910685c40aa1a2fdb409) ([#11559](https://github.com/yt-dlp/yt-dlp/issues/11559)) by [Grub4K](https://github.com/Grub4K)
    - `subs_list_to_dict`: [Add `lang` default parameter](https://github.com/yt-dlp/yt-dlp/commit/c014fbcddcb4c8f79d914ac5bb526758b540ea33) ([#11508](https://github.com/yt-dlp/yt-dlp/issues/11508)) by [Grub4K](https://github.com/Grub4K)

#### Extractor changes
- [Allow `ext` override for thumbnails](https://github.com/yt-dlp/yt-dlp/commit/eb64ae7d5def6df2aba74fb703e7f168fb299865) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **adobepass**: [Fix provider requests](https://github.com/yt-dlp/yt-dlp/commit/85fdc66b6e01d19a94b4f39b58e3c0cf23600902) ([#11472](https://github.com/yt-dlp/yt-dlp/issues/11472)) by [bashonly](https://github.com/bashonly)
- **archive.org**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/f2a4983df7a64c4e93b56f79dbd16a781bd90206) ([#11527](https://github.com/yt-dlp/yt-dlp/issues/11527)) by [jshumphrey](https://github.com/jshumphrey)
- **bandlab**: [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/6365e92589e4bc17b8fffb0125a716d144ad2137) ([#11535](https://github.com/yt-dlp/yt-dlp/issues/11535)) by [seproDev](https://github.com/seproDev)
- **chaturbate**
    - [Extract from API and support impersonation](https://github.com/yt-dlp/yt-dlp/commit/720b3dc453c342bc2e8df7dbc0acaab4479de46c) ([#11555](https://github.com/yt-dlp/yt-dlp/issues/11555)) by [powergold1](https://github.com/powergold1) (With fixes in [7cecd29](https://github.com/yt-dlp/yt-dlp/commit/7cecd299e4a5ef1f0f044b2fedc26f17e41f15e3) by [seproDev](https://github.com/seproDev))
    - [Support alternate domains](https://github.com/yt-dlp/yt-dlp/commit/a9f85670d03ab993dc589f21a9ffffcad61392d5) ([#10595](https://github.com/yt-dlp/yt-dlp/issues/10595)) by [manavchaudhary1](https://github.com/manavchaudhary1)
- **cloudflarestream**: [Avoid extraction via videodelivery.net](https://github.com/yt-dlp/yt-dlp/commit/2db8c2e7d57a1784b06057c48e3e91023720d195) ([#11478](https://github.com/yt-dlp/yt-dlp/issues/11478)) by [hugovdev](https://github.com/hugovdev)
- **ctvnews**
    - [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/f351440f1dc5b3dfbfc5737b037a869d946056fe) ([#11534](https://github.com/yt-dlp/yt-dlp/issues/11534)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
    - [Fix playlist ID extraction](https://github.com/yt-dlp/yt-dlp/commit/f9d98509a898737c12977b2e2117277bada2c196) ([#8892](https://github.com/yt-dlp/yt-dlp/issues/8892)) by [qbnu](https://github.com/qbnu)
- **digitalconcerthall**: [Support login with access/refresh tokens](https://github.com/yt-dlp/yt-dlp/commit/f7257588bdff5f0b0452635a66b253a783c97357) ([#11571](https://github.com/yt-dlp/yt-dlp/issues/11571)) by [bashonly](https://github.com/bashonly)
- **facebook**: [Fix formats extraction](https://github.com/yt-dlp/yt-dlp/commit/bacc31b05a04181b63100c481565256b14813a5e) ([#11513](https://github.com/yt-dlp/yt-dlp/issues/11513)) by [bashonly](https://github.com/bashonly)
- **gamedevtv**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/be3579aaf0c3b71a0a3195e1955415d5e4d6b3d8) ([#11368](https://github.com/yt-dlp/yt-dlp/issues/11368)) by [bashonly](https://github.com/bashonly), [stratus-ss](https://github.com/stratus-ss)
- **goplay**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/6b43a8d84b881d769b480ba6e20ec691e9d1b92d) ([#11466](https://github.com/yt-dlp/yt-dlp/issues/11466)) by [bashonly](https://github.com/bashonly), [SamDecrock](https://github.com/SamDecrock)
- **kenh14**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/eb15fd5a32d8b35ef515f7a3d1158c03025648ff) ([#3996](https://github.com/yt-dlp/yt-dlp/issues/3996)) by [krichbanana](https://github.com/krichbanana), [pzhlkj6612](https://github.com/pzhlkj6612)
- **litv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/e079ffbda66de150c0a9ebef05e89f61bb4d5f76) ([#11071](https://github.com/yt-dlp/yt-dlp/issues/11071)) by [jiru](https://github.com/jiru)
- **mixchmovie**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/0ec9bfed4d4a52bfb4f8733da1acf0aeeae21e6b) ([#10897](https://github.com/yt-dlp/yt-dlp/issues/10897)) by [Sakura286](https://github.com/Sakura286)
- **patreon**: [Fix comments extraction](https://github.com/yt-dlp/yt-dlp/commit/1d253b0a27110d174c40faf8fb1c999d099e0cde) ([#11530](https://github.com/yt-dlp/yt-dlp/issues/11530)) by [bashonly](https://github.com/bashonly), [jshumphrey](https://github.com/jshumphrey)
- **pialive**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/d867f99622ef7fba690b08da56c39d739b822bb7) ([#10811](https://github.com/yt-dlp/yt-dlp/issues/10811)) by [ChocoLZS](https://github.com/ChocoLZS)
- **radioradicale**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/70c55cb08f780eab687e881ef42bb5c6007d290b) ([#5607](https://github.com/yt-dlp/yt-dlp/issues/5607)) by [a13ssandr0](https://github.com/a13ssandr0), [pzhlkj6612](https://github.com/pzhlkj6612)
- **reddit**: [Improve error handling](https://github.com/yt-dlp/yt-dlp/commit/7ea2787920cccc6b8ea30791993d114fbd564434) ([#11573](https://github.com/yt-dlp/yt-dlp/issues/11573)) by [bashonly](https://github.com/bashonly)
- **redgifsuser**: [Fix extraction](https://github.com/yt-dlp/yt-dlp/commit/d215fba7edb69d4fa665f43663756fd260b1489f) ([#11531](https://github.com/yt-dlp/yt-dlp/issues/11531)) by [jshumphrey](https://github.com/jshumphrey)
- **rutube**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/e398217aae19bb25f91797bfbe8a3243698d7f45) ([#11480](https://github.com/yt-dlp/yt-dlp/issues/11480)) by [seproDev](https://github.com/seproDev)
- **sonylivseries**: [Add `sort_order` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/2009cb27e17014787bf63eaa2ada51293d54f22a) ([#11569](https://github.com/yt-dlp/yt-dlp/issues/11569)) by [bashonly](https://github.com/bashonly)
- **soop**: [Fix thumbnail extraction](https://github.com/yt-dlp/yt-dlp/commit/c699bafc5038b59c9afe8c2e69175fb66424c832) ([#11545](https://github.com/yt-dlp/yt-dlp/issues/11545)) by [bashonly](https://github.com/bashonly)
- **spankbang**: [Support browser impersonation](https://github.com/yt-dlp/yt-dlp/commit/8388ec256f7753b02488788e3cfa771f6e1db247) ([#11542](https://github.com/yt-dlp/yt-dlp/issues/11542)) by [jshumphrey](https://github.com/jshumphrey)
- **spreaker**
    - [Support episode pages and access keys](https://github.com/yt-dlp/yt-dlp/commit/c39016f66df76d14284c705736ca73db8055d8de) ([#11489](https://github.com/yt-dlp/yt-dlp/issues/11489)) by [julionc](https://github.com/julionc)
    - [Support podcast and feed pages](https://github.com/yt-dlp/yt-dlp/commit/c6737310619022248f5d0fd13872073cac168453) ([#10968](https://github.com/yt-dlp/yt-dlp/issues/10968)) by [subrat-lima](https://github.com/subrat-lima)
- **youtube**
    - [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/637d62a3a9fc723d68632c1af25c30acdadeeb85) ([#11528](https://github.com/yt-dlp/yt-dlp/issues/11528)) by [bashonly](https://github.com/bashonly), [seproDev](https://github.com/seproDev)
    - [Remove broken OAuth support](https://github.com/yt-dlp/yt-dlp/commit/52c0ffe40ad6e8404d93296f575007b05b04c686) ([#11558](https://github.com/yt-dlp/yt-dlp/issues/11558)) by [bashonly](https://github.com/bashonly)
    - tab: [Fix podcasts tab extraction](https://github.com/yt-dlp/yt-dlp/commit/37cd7660eaff397c551ee18d80507702342b0c2b) ([#11567](https://github.com/yt-dlp/yt-dlp/issues/11567)) by [seproDev](https://github.com/seproDev)

#### Misc. changes
- **build**
    - [Bump PyInstaller version pin to `>=6.11.1`](https://github.com/yt-dlp/yt-dlp/commit/f9c8deb4e5887ff5150e911ac0452e645f988044) ([#11507](https://github.com/yt-dlp/yt-dlp/issues/11507)) by [bashonly](https://github.com/bashonly)
    - [Enable attestations for trusted publishing](https://github.com/yt-dlp/yt-dlp/commit/f13df591d4d7ca8e2f31b35c9c91e69ba9e9b013) ([#11420](https://github.com/yt-dlp/yt-dlp/issues/11420)) by [bashonly](https://github.com/bashonly)
    - [Pin `websockets` version to >=13.0,<14](https://github.com/yt-dlp/yt-dlp/commit/240a7d43c8a67ffb86d44dc276805aa43c358dcc) ([#11488](https://github.com/yt-dlp/yt-dlp/issues/11488)) by [bashonly](https://github.com/bashonly)
- **cleanup**
    - [Deprecate more compat functions](https://github.com/yt-dlp/yt-dlp/commit/f95a92b3d0169a784ee15a138fbe09d82b2754a1) ([#11439](https://github.com/yt-dlp/yt-dlp/issues/11439)) by [seproDev](https://github.com/seproDev)
    - [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/10fc719bc7f1eef469389c5219102266ef411f29) ([#11566](https://github.com/yt-dlp/yt-dlp/issues/11566)) by [doe1080](https://github.com/doe1080)
    - Miscellaneous: [da252d9](https://github.com/yt-dlp/yt-dlp/commit/da252d9d322af3e2178ac5eae324809502a0a862) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K), [seproDev](https://github.com/seproDev)

### 2024.11.04

#### Important changes

README.md
@@ -6,7 +6,6 @@
[](#installation "Installation")
[](https://pypi.org/project/yt-dlp "PyPI")
[](Collaborators.md#collaborators "Donate")
[](https://matrix.to/#/#yt-dlp:matrix.org "Matrix")
[](https://discord.gg/H5MNcFW63r "Discord")
[](supportedsites.md "Supported Sites")
[](LICENSE "License")

@@ -342,8 +341,9 @@ If you fork the project on GitHub, you can run your fork's [build workflow](.git
                                    extractor plugins; postprocessor plugins can
                                    only be loaded from the default plugin
                                    directories
    --flat-playlist                 Do not extract the videos of a playlist,
                                    only list them
    --flat-playlist                 Do not extract a playlist's URL result
                                    entries; some entry metadata may be missing
                                    and downloading may be bypassed
    --no-flat-playlist              Fully extract the videos of a playlist
                                    (default)
    --live-from-start               Download livestreams from the start.

@@ -612,8 +612,7 @@ If you fork the project on GitHub, you can run your fork's [build workflow](.git
    --no-restrict-filenames         Allow Unicode characters, "&" and spaces in
                                    filenames (default)
    --windows-filenames             Force filenames to be Windows-compatible
    --no-windows-filenames          Make filenames Windows-compatible only if
                                    using Windows (default)
    --no-windows-filenames          Sanitize filenames only minimally
    --trim-filenames LENGTH         Limit the filename length (excluding
                                    extension) to the specified number of
                                    characters

@@ -1293,6 +1292,7 @@ The available fields are:
- `playlist_uploader_id` (string): Nickname or id of the playlist uploader
- `playlist_channel` (string): Display name of the channel that uploaded the playlist
- `playlist_channel_id` (string): Identifier of the channel that uploaded the playlist
- `playlist_webpage_url` (string): URL of the playlist webpage
- `webpage_url` (string): A URL to the video webpage which, if given to yt-dlp, should yield the same result again
- `webpage_url_basename` (string): The basename of the webpage URL
- `webpage_url_domain` (string): The domain of the webpage URL

@@ -1525,7 +1525,7 @@ The available fields are:
- `hasvid`: Gives priority to formats that have a video stream
- `hasaud`: Gives priority to formats that have an audio stream
- `ie_pref`: The format preference
- `lang`: The language preference
- `lang`: The language preference as determined by the extractor (e.g. original language preferred over audio description)
- `quality`: The quality of the format
- `source`: The preference of the source
- `proto`: Protocol used for download (`https`/`ftps` > `http`/`ftp` > `m3u8_native`/`m3u8` > `http_dash_segments` > `websocket_frag` > `mms`/`rtsp` > `f4f`/`f4m`)

@@ -1759,7 +1759,7 @@ $ yt-dlp --replace-in-metadata "title,uploader" "[ _]" "-"

# EXTRACTOR ARGUMENTS

Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. E.g. `--extractor-args "youtube:player-client=mediaconnect,web;formats=incomplete" --extractor-args "funimation:version=uncut"`
Some extractors accept additional arguments which can be passed using `--extractor-args KEY:ARGS`. `ARGS` is a `;` (semicolon) separated string of `ARG=VAL1,VAL2`. E.g. `--extractor-args "youtube:player-client=tv,mweb;formats=incomplete" --extractor-args "twitter:api=syndication"`

Note: In CLI, `ARG` can use `-` instead of `_`; e.g. `youtube:player-client` becomes `youtube:player_client`
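
For instance, a tentative invocation combining two of the YouTube arguments documented below (the video URL is a placeholder):

```shell
# Hypothetical example: use the default player clients minus ios, and skip DASH manifests
yt-dlp --extractor-args "youtube:player-client=default,-ios;skip=dash" "https://www.youtube.com/watch?v=xxxxxxxxxxx"
```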

@@ -1768,19 +1768,19 @@ The following extractors use this feature:

#### youtube
* `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube.py](https://github.com/yt-dlp/yt-dlp/blob/c26f9b991a0681fd3ea548d535919cec1fbbd430/yt_dlp/extractor/youtube.py#L381-L390) for list of supported content language codes
* `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
* `player_client`: Clients to extract video data from. The main clients are `web`, `ios` and `android`, with variants `_music` and `_creator` (e.g. `ios_creator`); and `mweb`, `mediaconnect`, `android_testsuite`, `android_vr`, `web_safari`, `web_embedded`, `tv` and `tv_embedded` with no variants. By default, `ios,mweb` is used, and `web_creator,mediaconnect` is added as needed for age-gated videos when account age verification is required. Similarly, the `_music` variants are added for `music.youtube.com` URLs. Some clients, such as `web` and `android`, require a `po_token` for their formats to be downloadable. Some clients, such as the `_creator` variants, will only work with authentication. You can use `all` to use all the clients, and `default` for the default clients. You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=all,-web`
* `player_client`: Clients to extract video data from. The main clients are `web`, `ios` and `android`, with variants `_music` and `_creator` (e.g. `ios_creator`); and `mweb`, `android_vr`, `web_safari`, `web_embedded`, `tv` and `tv_embedded` with no variants. By default, `tv,ios,web` is used, or `tv,web` is used when authenticating with cookies. The `web_music` client is added for `music.youtube.com` URLs when logged-in cookies are used. The `tv_embedded` and `web_creator` clients are added for age-restricted videos if account age-verification is required. Some clients, such as `web` and `web_music`, require a `po_token` for their formats to be downloadable. Some clients, such as the `_creator` variants, will only work with authentication. Not all clients support authentication via cookies. You can use `default` for the default clients, or you can use `all` for all clients (not recommended). You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=default,-ios`
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause some issues. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) for more details
* `player_params`: YouTube player parameters to use for player requests. Will overwrite any default ones set by yt-dlp.
* `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
* `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
    * E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
* `formats`: Change the types of formats to return. `dashy` (convert HTTP to DASH), `duplicate` (identical content but different URLs or protocol; includes `dashy`), `incomplete` (cannot be downloaded completely - live dash and post-live m3u8)
* `formats`: Change the types of formats to return. `dashy` (convert HTTP to DASH), `duplicate` (identical content but different URLs or protocol; includes `dashy`), `incomplete` (cannot be downloaded completely - live dash and post-live m3u8), `missing_pot` (include formats that require a PO Token but are missing one)
* `innertube_host`: Innertube API host to use for all API requests; e.g. `studio.youtube.com`, `youtubei.googleapis.com`. Note that cookies exported from one subdomain will not work on others
* `innertube_key`: Innertube API key to use for all API requests. By default, no API key is used
* `raise_incomplete_data`: `Incomplete Data Received` raises an error instead of reporting a warning
* `data_sync_id`: Overrides the account Data Sync ID used in Innertube API requests. This may be needed if you are using an account with `youtube:player_skip=webpage,configs` or `youtubetab:skip=webpage`
* `visitor_data`: Overrides the Visitor Data used in Innertube API requests. This should be used with `player_skip=webpage,configs` and without cookies. Note: this may have adverse effects if used improperly. If a session from a browser is wanted, you should pass cookies instead (which contain the Visitor ID)
* `po_token`: Proof of Origin (PO) Token(s) to use for requesting video playback. Comma seperated list of PO Tokens in the format `CLIENT+PO_TOKEN`, e.g. `youtube:po_token=web+XXX,android+YYY`
* `po_token`: Proof of Origin (PO) Token(s) to use. Comma separated list of PO Tokens in the format `CLIENT.CONTEXT+PO_TOKEN`, e.g. `youtube:po_token=web.gvs+XXX,web.player+XXX,web_safari.gvs+YYY`. Context can be either `gvs` (Google Video Server URLs) or `player` (Innertube player request)
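
A sketch of the new per-context token syntax (both token values and the URL are placeholders):

```shell
# Supply separate PO Tokens for the GVS and player contexts of the web client
yt-dlp --extractor-args "youtube:po_token=web.gvs+TOKEN_A,web.player+TOKEN_B" "https://www.youtube.com/watch?v=xxxxxxxxxxx"
```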

#### youtubetab (YouTube playlists, channels, feeds, etc.)
* `skip`: One or more of `webpage` (skip initial webpage download), `authcheck` (allow the download of playlists requiring authentication when no initial webpage is downloaded. This may cause unwanted behavior, see [#1122](https://github.com/yt-dlp/yt-dlp/pull/1122) for more details)

@@ -1794,13 +1794,6 @@ The following extractors use this feature:
* `is_live`: Bypass live HLS detection and manually set `live_status` - a value of `false` will set `not_live`, any other value (or no value) will set `is_live`
* `impersonate`: Target(s) to try and impersonate with the initial webpage request; e.g. `generic:impersonate=safari,chrome-110`. Use `generic:impersonate` to impersonate any available target, and use `generic:impersonate=false` to disable impersonation (default)
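
A rough sketch of the impersonation argument (the page URL is a placeholder; an impersonation-capable build, e.g. one with `curl_cffi` installed, is assumed):

```shell
# Let the generic extractor impersonate any available browser target
yt-dlp --extractor-args "generic:impersonate" "https://example.com/embedded-video-page"
```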

#### funimation
* `language`: Audio languages to extract, e.g. `funimation:language=english,japanese`
* `version`: The video version to extract - `uncut` or `simulcast`

#### crunchyrollbeta (Crunchyroll)
* `hardsub`: One or more hardsub versions to extract (in order of preference), or `all` (default: `None` = no hardsubs will be extracted), e.g. `crunchyrollbeta:hardsub=en-US,de-DE`

#### vikichannel
* `video_types`: Types of videos to download - one or more of `episodes`, `movies`, `clips`, `trailers`

@@ -1858,7 +1851,7 @@ The following extractors use this feature:
* `cdn`: One or more CDN IDs to use with the API call for stream URLs, e.g. `gcp_cdn`, `gs_cdn_pc_app`, `gs_cdn_mobile_web`, `gs_cdn_pc_web`

#### soundcloud
* `formats`: Formats to request from the API. Requested values should be in the format of `{protocol}_{extension}` (omitting the bitrate), e.g. `hls_opus,http_aac`. The `*` character functions as a wildcard, e.g. `*_mp3`, and can be passed by itself to request all formats. Known protocols include `http`, `hls` and `hls-aes`; known extensions include `aac`, `opus` and `mp3`. Original `download` formats are always extracted. Default is `http_aac,hls_aac,http_opus,hls_opus,http_mp3,hls_mp3`
* `formats`: Formats to request from the API. Requested values should be in the format of `{protocol}_{codec}`, e.g. `hls_opus,http_aac`. The `*` character functions as a wildcard, e.g. `*_mp3`, and can be passed by itself to request all formats. Known protocols include `http`, `hls` and `hls-aes`; known codecs include `aac`, `opus` and `mp3`. Original `download` formats are always extracted. Default is `http_aac,hls_aac,http_opus,hls_opus,http_mp3,hls_mp3`
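
For example, a minimal sketch using the wildcard syntax described above (the track URL is a placeholder):

```shell
# Request every Opus rendition; original download formats are still extracted
yt-dlp --extractor-args "soundcloud:formats=*_opus" "https://soundcloud.com/artist/track"
```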

#### orfon (orf:on)
* `prefer_segments_playlist`: Prefer a playlist of program segments instead of a single complete video when available. If individual segments are desired, use `--concat-playlist never --extractor-args "orfon:prefer_segments_playlist"`

@@ -1866,8 +1859,8 @@ The following extractors use this feature:

#### bilibili
* `prefer_multi_flv`: Prefer extracting flv formats over mp4 for older videos that still provide legacy formats

#### digitalconcerthall
* `prefer_combined_hls`: Prefer extracting combined/pre-merged video and audio HLS formats. This will exclude 4K/HEVC video and lossless/FLAC audio formats, which are only available as split video/audio HLS formats

#### sonylivseries
* `sort_order`: Episode sort order for series extraction - one of `asc` (ascending, oldest first) or `desc` (descending, newest first). Default is `asc`
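
For instance, a hedged example requesting newest-first episode order (the series URL is a placeholder):

```shell
# Extract episodes in descending order instead of the default ascending order
yt-dlp --extractor-args "sonylivseries:sort_order=desc" "https://www.sonyliv.com/shows/example-series"
```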

**Note**: These options may be changed/removed in the future without concern for backward compatibility

@@ -234,5 +234,16 @@
        "when": "57212a5f97ce367590aaa5c3e9a135eead8f81f7",
        "short": "[ie/vimeo] Fix API retries (#11351)",
        "authors": ["bashonly"]
    },
    {
        "action": "add",
        "when": "52c0ffe40ad6e8404d93296f575007b05b04c686",
        "short": "[priority] **Login with OAuth is no longer supported for YouTube**\nDue to a change made by the site, yt-dlp is no longer able to support OAuth login for YouTube. [Read more](https://github.com/yt-dlp/yt-dlp/issues/11462#issuecomment-2471703090)"
    },
    {
        "action": "change",
        "when": "76ac023ff02f06e8c003d104f02a03deeddebdcd",
        "short": "[ie/youtube:tab] Improve shorts title extraction (#11997)",
        "authors": ["bashonly", "d3d9"]
    }
]

@@ -11,13 +11,12 @@ import codecs
import subprocess

from yt_dlp.aes import aes_encrypt, key_expansion
from yt_dlp.utils import intlist_to_bytes

secret_msg = b'Secret message goes here'


def hex_str(int_list):
    return codecs.encode(intlist_to_bytes(int_list), 'hex')
    return codecs.encode(bytes(int_list), 'hex')


def openssl_encode(algo, key, iv):
@@ -11,11 +11,13 @@ import re

from devscripts.utils import get_filename_args, read_file, write_file

VERBOSE_TMPL = '''
VERBOSE = '''
  - type: checkboxes
    id: verbose
    attributes:
      label: Provide verbose output that clearly demonstrates the problem
      description: |
        This is mandatory unless absolutely impossible to provide. If you are unable to provide the output, please explain why.
      options:
        - label: Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
          required: true

@@ -47,31 +49,23 @@ VERBOSE_TMPL = '''
      render: shell
    validations:
      required: true
  - type: markdown
    attributes:
      value: |
        > [!CAUTION]
        > ### GitHub is experiencing a high volume of malicious spam comments.
        > ### If you receive any replies asking you download a file, do NOT follow the download links!
        >
        > Note that this issue may be temporarily locked as an anti-spam measure after it is opened.
'''.strip()

NO_SKIP = '''
  - type: checkboxes
  - type: markdown
    attributes:
      label: DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
      description: Fill all fields even if you think it is irrelevant for the issue
      options:
        - label: I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field
          required: true
      value: |
        > [!IMPORTANT]
        > Not providing the required (*) information or removing the template will result in your issue being closed and ignored.
'''.strip()


def main():
    fields = {'no_skip': NO_SKIP}
    fields['verbose'] = VERBOSE_TMPL % fields
    fields['verbose_optional'] = re.sub(r'(\n\s+validations:)?\n\s+required: true', '', fields['verbose'])
    fields = {
        'no_skip': NO_SKIP,
        'verbose': VERBOSE,
        'verbose_optional': re.sub(r'(\n\s+validations:)?\n\s+required: true', '', VERBOSE),
    }

    infile, outfile = get_filename_args(has_infile=True)
    write_file(outfile, read_file(infile) % fields)

@@ -10,10 +10,21 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from devscripts.utils import get_filename_args, write_file
from yt_dlp.extractor import list_extractor_classes

TEMPLATE = '''\
# Supported sites

Below is a list of all extractors that are currently included with yt-dlp.
If a site is not listed here, it might still be supported by yt-dlp's embed extraction or generic extractor.
Not all sites listed here are guaranteed to work; websites are constantly changing and sometimes this breaks yt-dlp's support for them.
The only reliable way to check if a site is supported is to try it.

{ie_list}
'''


def main():
    out = '\n'.join(ie.description() for ie in list_extractor_classes() if ie.IE_DESC is not False)
    write_file(get_filename_args(), f'# Supported sites\n{out}\n')
    write_file(get_filename_args(), TEMPLATE.format(ie_list=out))


if __name__ == '__main__':

@@ -25,7 +25,8 @@ def parse_args():


def run_tests(*tests, pattern=None, ci=False):
    run_core = 'core' in tests or (not pattern and not tests)
    # XXX: hatch uses `tests` if no arguments are passed
    run_core = 'core' in tests or 'tests' in tests or (not pattern and not tests)
    run_download = 'download' in tests

    pytest_args = args.pytest_args or os.getenv('HATCH_TEST_ARGS', '')

@@ -76,14 +76,14 @@ dev = [
]
static-analysis = [
    "autopep8~=2.0",
    "ruff~=0.7.0",
    "ruff~=0.9.0",
]
test = [
    "pytest~=8.1",
    "pytest-rerunfailures~=14.0",
]
pyinstaller = [
    "pyinstaller>=6.10.0",  # Windows temp cleanup fixed in 6.10.0
    "pyinstaller>=6.11.1",  # Windows temp cleanup fixed in 6.11.1
]

[project.urls]

@@ -186,6 +186,7 @@ ignore = [
    "E501",  # line-too-long
    "E731",  # lambda-assignment
    "E741",  # ambiguous-variable-name
    "UP031",  # printf-string-formatting
    "UP036",  # outdated-version-block
    "B006",  # mutable-argument-default
    "B008",  # function-call-in-default-argument

@@ -194,6 +195,7 @@ ignore = [
    "B023",  # function-uses-loop-variable (false positives)
    "B028",  # no-explicit-stacklevel
    "B904",  # raise-without-from-inside-except
    "A005",  # stdlib-module-shadowing
    "C401",  # unnecessary-generator-set
    "C402",  # unnecessary-generator-dict
    "PIE790",  # unnecessary-placeholder

@@ -258,9 +260,6 @@ select = [
    "A002",  # builtin-argument-shadowing
    "C408",  # unnecessary-collection-call
]
"yt_dlp/jsinterp.py" = [
    "UP031",  # printf-string-formatting
]

[tool.ruff.lint.isort]
known-first-party = [

@@ -313,6 +312,16 @@ banned-from = [
"yt_dlp.compat.compat_urllib_parse_urlparse".msg = "Use `urllib.parse.urlparse` instead."
"yt_dlp.compat.compat_shlex_quote".msg = "Use `yt_dlp.utils.shell_quote` instead."
"yt_dlp.utils.error_to_compat_str".msg = "Use `str` instead."
"yt_dlp.utils.bytes_to_intlist".msg = "Use `list` instead."
"yt_dlp.utils.intlist_to_bytes".msg = "Use `bytes` instead."
"yt_dlp.utils.decodeArgument".msg = "Do not use"
"yt_dlp.utils.decodeFilename".msg = "Do not use"
"yt_dlp.utils.encodeFilename".msg = "Do not use"
"yt_dlp.compat.compat_os_name".msg = "Use `os.name` instead."
"yt_dlp.compat.compat_realpath".msg = "Use `os.path.realpath` instead."
"yt_dlp.compat.functools".msg = "Use `functools` instead."
"yt_dlp.utils.decodeOption".msg = "Do not use"
"yt_dlp.utils.compiled_regex_type".msg = "Use `re.Pattern` instead."

[tool.autopep8]
max_line_length = 120

@@ -1,4 +1,10 @@
# Supported sites

Below is a list of all extractors that are currently included with yt-dlp.
If a site is not listed here, it might still be supported by yt-dlp's embed extraction or generic extractor.
Not all sites listed here are guaranteed to work; websites are constantly changing and sometimes this breaks yt-dlp's support for them.
The only reliable way to check if a site is supported is to try it.

- **17live**
- **17live:clip**
- **1News**: 1news.co.nz article videos

@@ -129,6 +135,8 @@
- **Bandcamp:album**
- **Bandcamp:user**
- **Bandcamp:weekly**
- **Bandlab**
- **BandlabPlaylist**
- **BannedVideo**
- **bbc**: [*bbc*](## "netrc machine") BBC
- **bbc.co.uk**: [*bbc*](## "netrc machine") BBC iPlayer

@@ -169,6 +177,7 @@
- **BilibiliCheese**
- **BilibiliCheeseSeason**
- **BilibiliCollectionList**
- **BiliBiliDynamic**
- **BilibiliFavoritesList**
- **BiliBiliPlayer**
- **BilibiliPlaylist**

@@ -301,10 +310,6 @@
- **CrowdBunker**
- **CrowdBunkerChannel**
- **Crtvg**
- **crunchyroll**: [*crunchyroll*](## "netrc machine")
- **crunchyroll:artist**: [*crunchyroll*](## "netrc machine")
- **crunchyroll:music**: [*crunchyroll*](## "netrc machine")
- **crunchyroll:playlist**: [*crunchyroll*](## "netrc machine")
- **CSpan**: C-SPAN
- **CSpanCongress**
- **CtsNews**: 華視新聞

@@ -315,7 +320,8 @@
- **curiositystream**: [*curiositystream*](## "netrc machine")
- **curiositystream:collections**: [*curiositystream*](## "netrc machine")
- **curiositystream:series**: [*curiositystream*](## "netrc machine")
- **CWTV**
- **cwtv**
- **cwtv:movie**
- **Cybrary**: [*cybrary*](## "netrc machine")
- **CybraryCourse**: [*cybrary*](## "netrc machine")
- **DacastPlaylist**

@@ -350,6 +356,7 @@
- **DigitalConcertHall**: [*digitalconcerthall*](## "netrc machine") DigitalConcertHall extractor
- **DigitallySpeaking**
- **Digiteka**
- **Digiview**
- **DiscogsReleasePlaylist**
- **DiscoveryLife**
- **DiscoveryNetworksDe**

@@ -372,6 +379,7 @@
- **Dropbox**
- **Dropout**: [*dropout*](## "netrc machine")
- **DropoutSeason**
- **DrTalks**
- **DrTuber**
- **drtv**
- **drtv:live**

@@ -390,6 +398,8 @@
- **Ebay**
- **egghead:course**: egghead.io course
- **egghead:lesson**: egghead.io lesson
- **eggs:artist**
- **eggs:single**
- **EinsUndEinsTV**: [*1und1tv*](## "netrc machine")
- **EinsUndEinsTVLive**: [*1und1tv*](## "netrc machine")
- **EinsUndEinsTVRecordings**: [*1und1tv*](## "netrc machine")

@@ -463,9 +473,9 @@
- **fptplay**: fptplay.vn
- **FranceCulture**
- **FranceInter**
- **FranceTV**
- **francetv**
- **francetv:site**
- **francetvinfo.fr**
- **FranceTVSite**
- **Freesound**
- **freespeech.org**
- **freetv:series**

@@ -474,9 +484,6 @@
- **FrontendMastersCourse**: [*frontendmasters*](## "netrc machine")
- **FrontendMastersLesson**: [*frontendmasters*](## "netrc machine")
- **FujiTVFODPlus7**
- **Funimation**: [*funimation*](## "netrc machine")
- **funimation:page**: [*funimation*](## "netrc machine")
- **funimation:show**: [*funimation*](## "netrc machine")
- **Funk**
- **Funker530**
- **Fux**

@@ -484,6 +491,7 @@
- **Gab**
- **GabTV**
- **Gaia**: [*gaia*](## "netrc machine")
- **GameDevTVDashboard**: [*gamedevtv*](## "netrc machine")
- **GameJolt**
- **GameJoltCommunity**
- **GameJoltGame**

@@ -499,7 +507,7 @@
- **GediDigital**
- **gem.cbc.ca**: [*cbcgem*](## "netrc machine")
- **gem.cbc.ca:live**
- **gem.cbc.ca:playlist**
- **gem.cbc.ca:playlist**: [*cbcgem*](## "netrc machine")
- **Genius**
- **GeniusLyrics**
- **Germanupa**: germanupa.de

@@ -651,6 +659,8 @@
- **Karaoketv**
- **Katsomo**: (**Currently broken**)
- **KelbyOne**: (**Currently broken**)
- **Kenh14Playlist**
- **Kenh14Video**
- **Ketnet**
- **khanacademy**
- **khanacademy:unit**

@@ -784,10 +794,6 @@
- **MicrosoftLearnSession**
- **MicrosoftMedius**
- **microsoftstream**: Microsoft Stream
- **mildom**: Record ongoing live by specific user in Mildom
- **mildom:clip**: Clip in Mildom
- **mildom:user:vod**: Download all VODs from specific user in Mildom
- **mildom:vod**: VOD in Mildom
- **minds**
- **minds:channel**
- **minds:group**

@@ -798,6 +804,7 @@
- **MiTele**: mitele.es
- **mixch**
- **mixch:archive**
- **mixch:movie**
- **mixcloud**
- **mixcloud:playlist**
- **mixcloud:user**

@@ -889,6 +896,8 @@
- **nebula:video**: [*watchnebula*](## "netrc machine")
- **NekoHacker**
- **NerdCubedFeed**
- **Nest**
- **NestClip**
- **netease:album**: 网易云音乐 - 专辑
- **netease:djradio**: 网易云音乐 - 电台
- **netease:mv**: 网易云音乐 - MV

@@ -1060,14 +1069,16 @@
- **PhilharmonieDeParis**: Philharmonie de Paris
- **phoenix.de**
- **Photobucket**
- **PiaLive**
- **Piapro**: [*piapro*](## "netrc machine")
- **PIAULIZAPortal**: ulizaportal.jp - PIA LIVE STREAM
- **Picarto**
- **PicartoVod**
- **Piksel**
- **Pinkbike**
- **Pinterest**
- **PinterestCollection**
- **PiramideTV**
- **PiramideTVChannel**
- **pixiv:sketch**
- **pixiv:sketch:user**
- **Pladform**

@@ -1084,12 +1095,11 @@
- **pluralsight**: [*pluralsight*](## "netrc machine")
- **pluralsight:course**
- **PlutoTV**: (**Currently broken**)
- **PlVideo**: Платформа
- **PodbayFM**
- **PodbayFMChannel**
- **Podchaser**
- **podomatic**: (**Currently broken**)
- **Pokemon**
- **PokemonWatch**
- **PokerGo**: [*pokergo*](## "netrc machine")
- **PokerGoCollection**: [*pokergo*](## "netrc machine")
- **PolsatGo**

@@ -1160,6 +1170,7 @@
- **RadioJavan**: (**Currently broken**)
- **radiokapital**
- **radiokapital:show**
- **RadioRadicale**
- **RadioZetPodcast**
- **radlive**
- **radlive:channel**

@@ -1367,9 +1378,7 @@
- **spotify**: Spotify episodes (**Currently broken**)
- **spotify:show**: Spotify shows (**Currently broken**)
- **Spreaker**
- **SpreakerPage**
- **SpreakerShow**
- **SpreakerShowPage**
- **SpringboardPlatform**
- **Sprout**
- **SproutVideo**

@@ -1395,6 +1404,8 @@
- **StretchInternet**
- **Stripchat**
- **stv:player**
- **Subsplash**
- **subsplash:playlist**
- **Substack**
- **SunPorno**
- **sverigesradio:episode**

@@ -1570,6 +1581,8 @@
- **UFCTV**: [*ufctv*](## "netrc machine")
- **ukcolumn**: (**Currently broken**)
- **UKTVPlay**
- **UlizaPlayer**
- **UlizaPortal**: ulizaportal.jp
- **umg:de**: Universal Music Deutschland (**Currently broken**)
- **Unistra**
- **Unity**: (**Currently broken**)

@@ -1587,8 +1600,6 @@
- **Varzesh3**: (**Currently broken**)
- **Vbox7**
- **Veo**
- **Veoh**
- **veoh:user**
- **Vesti**: Вести.Ru (**Currently broken**)
- **Vevo**
- **VevoPlaylist**

@@ -1642,8 +1653,6 @@
- **Vimm:stream**
- **ViMP**
- **ViMP:Playlist**
- **Vine**
- **vine:user**
- **Viously**
- **Viqeo**: (**Currently broken**)
- **Viu**

@@ -9,7 +9,6 @@ import types

import yt_dlp.extractor
from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_os_name
from yt_dlp.utils import preferredencoding, try_call, write_string, find_available_port

if 'pytest' in sys.modules:

@@ -49,7 +48,7 @@ def report_warning(message, *args, **kwargs):
    Print the message to stderr, it will be prefixed with 'WARNING:'
    If stderr is a tty file the 'WARNING:' will be colored
    """
    if sys.stderr.isatty() and compat_os_name != 'nt':
    if sys.stderr.isatty() and os.name != 'nt':
        _msg_header = '\033[0;33mWARNING:\033[0m'
    else:
        _msg_header = 'WARNING:'

@@ -238,6 +237,20 @@ def sanitize_got_info_dict(got_dict):


def expect_info_dict(self, got_dict, expected_dict):
    ALLOWED_KEYS_SORT_ORDER = (
        # NB: Keep in sync with the docstring of extractor/common.py
        'id', 'ext', 'direct', 'display_id', 'title', 'alt_title', 'description', 'media_type',
        'uploader', 'uploader_id', 'uploader_url', 'channel', 'channel_id', 'channel_url', 'channel_is_verified',
        'channel_follower_count', 'comment_count', 'view_count', 'concurrent_view_count',
        'like_count', 'dislike_count', 'repost_count', 'average_rating', 'age_limit', 'duration', 'thumbnail', 'heatmap',
        'chapters', 'chapter', 'chapter_number', 'chapter_id', 'start_time', 'end_time', 'section_start', 'section_end',
        'categories', 'tags', 'cast', 'composers', 'artists', 'album_artists', 'creators', 'genres',
        'track', 'track_number', 'track_id', 'album', 'album_type', 'disc_number',
        'series', 'series_id', 'season', 'season_number', 'season_id', 'episode', 'episode_number', 'episode_id',
        'timestamp', 'upload_date', 'release_timestamp', 'release_date', 'release_year', 'modified_timestamp', 'modified_date',
        'playable_in_embed', 'availability', 'live_status', 'location', 'license', '_old_archive_ids',
    )

    expect_dict(self, got_dict, expected_dict)
    # Check for the presence of mandatory fields
    if got_dict.get('_type') not in ('playlist', 'multi_video'):

@@ -253,7 +266,13 @@ def expect_info_dict(self, got_dict, expected_dict):

    test_info_dict = sanitize_got_info_dict(got_dict)

    missing_keys = set(test_info_dict.keys()) - set(expected_dict.keys())
    # Check for invalid/misspelled field names being returned by the extractor
    invalid_keys = sorted(test_info_dict.keys() - ALLOWED_KEYS_SORT_ORDER)
    self.assertFalse(invalid_keys, f'Invalid fields returned by the extractor: {", ".join(invalid_keys)}')

    missing_keys = sorted(
        test_info_dict.keys() - expected_dict.keys(),
        key=lambda x: ALLOWED_KEYS_SORT_ORDER.index(x))
    if missing_keys:
        def _repr(v):
            if isinstance(v, str):
@@ -15,7 +15,6 @@ import json
|
||||
|
||||
from test.helper import FakeYDL, assertRegexpMatches, try_rm
|
||||
from yt_dlp import YoutubeDL
|
||||
from yt_dlp.compat import compat_os_name
|
||||
from yt_dlp.extractor import YoutubeIE
|
||||
from yt_dlp.extractor.common import InfoExtractor
|
||||
from yt_dlp.postprocessor.common import PostProcessor
|
||||
@@ -487,11 +486,11 @@ class TestFormatSelection(unittest.TestCase):
|
||||
|
||||
def test_format_filtering(self):
|
||||
formats = [
|
||||
{'format_id': 'A', 'filesize': 500, 'width': 1000},
|
||||
{'format_id': 'B', 'filesize': 1000, 'width': 500},
|
||||
{'format_id': 'C', 'filesize': 1000, 'width': 400},
|
||||
{'format_id': 'D', 'filesize': 2000, 'width': 600},
|
||||
{'format_id': 'E', 'filesize': 3000},
|
||||
{'format_id': 'A', 'filesize': 500, 'width': 1000, 'aspect_ratio': 1.0},
|
||||
{'format_id': 'B', 'filesize': 1000, 'width': 500, 'aspect_ratio': 1.33},
|
||||
{'format_id': 'C', 'filesize': 1000, 'width': 400, 'aspect_ratio': 1.5},
|
||||
{'format_id': 'D', 'filesize': 2000, 'width': 600, 'aspect_ratio': 1.78},
|
||||
{'format_id': 'E', 'filesize': 3000, 'aspect_ratio': 0.56},
|
||||
{'format_id': 'F'},
|
||||
{'format_id': 'G', 'filesize': 1000000},
|
||||
]
@@ -550,6 +549,31 @@ class TestFormatSelection(unittest.TestCase):
         ydl.process_ie_result(info_dict)
         self.assertEqual(ydl.downloaded_info_dicts, [])

+        ydl = YDL({'format': 'best[aspect_ratio=1]'})
+        ydl.process_ie_result(info_dict)
+        downloaded = ydl.downloaded_info_dicts[0]
+        self.assertEqual(downloaded['format_id'], 'A')
+
+        ydl = YDL({'format': 'all[aspect_ratio > 1.00]'})
+        ydl.process_ie_result(info_dict)
+        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
+        self.assertEqual(downloaded_ids, ['D', 'C', 'B'])
+
+        ydl = YDL({'format': 'all[aspect_ratio < 1.00]'})
+        ydl.process_ie_result(info_dict)
+        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
+        self.assertEqual(downloaded_ids, ['E'])
+
+        ydl = YDL({'format': 'best[aspect_ratio=1.5]'})
+        ydl.process_ie_result(info_dict)
+        downloaded = ydl.downloaded_info_dicts[0]
+        self.assertEqual(downloaded['format_id'], 'C')
+
+        ydl = YDL({'format': 'all[aspect_ratio!=1]'})
+        ydl.process_ie_result(info_dict)
+        downloaded_ids = [info['format_id'] for info in ydl.downloaded_info_dicts]
+        self.assertEqual(downloaded_ids, ['E', 'D', 'C', 'B'])
+
     @patch('yt_dlp.postprocessor.ffmpeg.FFmpegMergerPP.available', False)
     def test_default_format_spec_without_ffmpeg(self):
         ydl = YDL({})
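The tests above exercise the numeric comparison operators on the `aspect_ratio` field; the same filter expressions work from the public API. A hedged sketch (the URL is a placeholder, not a real endpoint):

```python
from yt_dlp import YoutubeDL

# 1.78 ~ 16:9; numeric format filters support =, !=, <, <=, >, >= on aspect_ratio
ydl = YoutubeDL({'format': 'best[aspect_ratio=1.78]'})
# ydl.download(['https://example.com/watch?v=placeholder'])  # placeholder URL
```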
@@ -762,6 +786,13 @@ class TestYoutubeDL(unittest.TestCase):
         test('%(width)06d.%%(ext)s', 'NA.%(ext)s')
         test('%%(width)06d.%(ext)s', '%(width)06d.mp4')

+        # Sanitization options
+        test('%(title3)s', (None, 'foo⧸bar⧹test'))
+        test('%(title5)s', (None, 'aei_A'), restrictfilenames=True)
+        test('%(title3)s', (None, 'foo_bar_test'), windowsfilenames=False, restrictfilenames=True)
+        if sys.platform != 'win32':
+            test('%(title3)s', (None, 'foo⧸bar\\test'), windowsfilenames=False)
+
         # ID sanitization
         test('%(id)s', '_abcd', info={'id': '_abcd'})
         test('%(some_id)s', '_abcd', info={'some_id': '_abcd'})
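These template tests build on `sanitize_filename`, which can also be called directly. A quick illustration of the restricted mode (outputs depend on platform and options, so they are only hinted at in comments):

```python
from yt_dlp.utils import sanitize_filename

# default mode replaces path separators and other unsafe characters in place
print(sanitize_filename("foo/bar: baz"))
# restricted mode additionally forces ASCII and replaces spaces, e.g. with '_'
print(sanitize_filename("foo/bar: baz", restricted=True))
```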
@@ -839,8 +870,8 @@ class TestYoutubeDL(unittest.TestCase):
         test('%(filesize)#D', '1Ki')
         test('%(height)5.2D', ' 1.08k')
         test('%(title4)#S', 'foo_bar_test')
-        test('%(title4).10S', ('foo "bar" ', 'foo "bar"' + ('#' if compat_os_name == 'nt' else ' ')))
-        if compat_os_name == 'nt':
+        test('%(title4).10S', ('foo "bar" ', 'foo "bar"' + ('#' if os.name == 'nt' else ' ')))
+        if os.name == 'nt':
             test('%(title4)q', ('"foo ""bar"" test"', None))
             test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', None))
             test('%(formats.0.id)#q', ('"id 1"', None))
@@ -903,9 +934,9 @@ class TestYoutubeDL(unittest.TestCase):

         # Environment variable expansion for prepare_filename
         os.environ['__yt_dlp_var'] = 'expanded'
-        envvar = '%__yt_dlp_var%' if compat_os_name == 'nt' else '$__yt_dlp_var'
+        envvar = '%__yt_dlp_var%' if os.name == 'nt' else '$__yt_dlp_var'
         test(envvar, (envvar, 'expanded'))
-        if compat_os_name == 'nt':
+        if os.name == 'nt':
             test('%s%', ('%s%', '%s%'))
             os.environ['s'] = 'expanded'
             test('%s%', ('%s%', 'expanded'))  # %s% should be expanded before escaping %s
@@ -27,7 +27,6 @@ from yt_dlp.aes import (
     pad_block,
 )
 from yt_dlp.dependencies import Cryptodome
-from yt_dlp.utils import bytes_to_intlist, intlist_to_bytes

 # the encrypted data can be generated with 'devscripts/generate_aes_testdata.py'

@@ -40,33 +39,33 @@ class TestAES(unittest.TestCase):
     def test_encrypt(self):
         msg = b'message'
         key = list(range(16))
-        encrypted = aes_encrypt(bytes_to_intlist(msg), key)
-        decrypted = intlist_to_bytes(aes_decrypt(encrypted, key))
+        encrypted = aes_encrypt(list(msg), key)
+        decrypted = bytes(aes_decrypt(encrypted, key))
         self.assertEqual(decrypted, msg)

     def test_cbc_decrypt(self):
         data = b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\x27\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd'
-        decrypted = intlist_to_bytes(aes_cbc_decrypt(bytes_to_intlist(data), self.key, self.iv))
+        decrypted = bytes(aes_cbc_decrypt(list(data), self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
         if Cryptodome.AES:
-            decrypted = aes_cbc_decrypt_bytes(data, intlist_to_bytes(self.key), intlist_to_bytes(self.iv))
+            decrypted = aes_cbc_decrypt_bytes(data, bytes(self.key), bytes(self.iv))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)

     def test_cbc_encrypt(self):
-        data = bytes_to_intlist(self.secret_msg)
-        encrypted = intlist_to_bytes(aes_cbc_encrypt(data, self.key, self.iv))
+        data = list(self.secret_msg)
+        encrypted = bytes(aes_cbc_encrypt(data, self.key, self.iv))
         self.assertEqual(
             encrypted,
             b'\x97\x92+\xe5\x0b\xc3\x18\x91ky9m&\xb3\xb5@\xe6\'\xc2\x96.\xc8u\x88\xab9-[\x9e|\xf1\xcd')

     def test_ctr_decrypt(self):
-        data = bytes_to_intlist(b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
-        decrypted = intlist_to_bytes(aes_ctr_decrypt(data, self.key, self.iv))
+        data = list(b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
+        decrypted = bytes(aes_ctr_decrypt(data, self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)

     def test_ctr_encrypt(self):
-        data = bytes_to_intlist(self.secret_msg)
-        encrypted = intlist_to_bytes(aes_ctr_encrypt(data, self.key, self.iv))
+        data = list(self.secret_msg)
+        encrypted = bytes(aes_ctr_encrypt(data, self.key, self.iv))
         self.assertEqual(
             encrypted,
             b'\x03\xc7\xdd\xd4\x8e\xb3\xbc\x1a*O\xdc1\x12+8Aio\xd1z\xb5#\xaf\x08')
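This mechanical rewrite works because the two removed helpers are exactly the `list`/`bytes` builtins applied to byte data:

```python
data = b'\x00\x01\x7f\x80\xff'
assert list(data) == [0, 1, 127, 128, 255]   # what bytes_to_intlist(data) did
assert bytes([0, 1, 127, 128, 255]) == data  # what intlist_to_bytes(...) did
```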
@@ -75,19 +74,19 @@ class TestAES(unittest.TestCase):
         data = b'\x159Y\xcf5eud\x90\x9c\x85&]\x14\x1d\x0f.\x08\xb4T\xe4/\x17\xbd'
         authentication_tag = b'\xe8&I\x80rI\x07\x9d}YWuU@:e'

-        decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
-            bytes_to_intlist(data), self.key, bytes_to_intlist(authentication_tag), self.iv[:12]))
+        decrypted = bytes(aes_gcm_decrypt_and_verify(
+            list(data), self.key, list(authentication_tag), self.iv[:12]))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)
         if Cryptodome.AES:
             decrypted = aes_gcm_decrypt_and_verify_bytes(
-                data, intlist_to_bytes(self.key), authentication_tag, intlist_to_bytes(self.iv[:12]))
+                data, bytes(self.key), authentication_tag, bytes(self.iv[:12]))
             self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)

     def test_gcm_aligned_decrypt(self):
         data = b'\x159Y\xcf5eud\x90\x9c\x85&]\x14\x1d\x0f'
         authentication_tag = b'\x08\xb1\x9d!&\x98\xd0\xeaRq\x90\xe6;\xb5]\xd8'

-        decrypted = intlist_to_bytes(aes_gcm_decrypt_and_verify(
+        decrypted = bytes(aes_gcm_decrypt_and_verify(
             list(data), self.key, list(authentication_tag), self.iv[:12]))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg[:16])
         if Cryptodome.AES:
@@ -96,38 +95,38 @@ class TestAES(unittest.TestCase):
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg[:16])

     def test_decrypt_text(self):
-        password = intlist_to_bytes(self.key).decode()
+        password = bytes(self.key).decode()
         encrypted = base64.b64encode(
-            intlist_to_bytes(self.iv[:8])
+            bytes(self.iv[:8])
             + b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae',
         ).decode()
         decrypted = (aes_decrypt_text(encrypted, password, 16))
         self.assertEqual(decrypted, self.secret_msg)

-        password = intlist_to_bytes(self.key).decode()
+        password = bytes(self.key).decode()
         encrypted = base64.b64encode(
-            intlist_to_bytes(self.iv[:8])
+            bytes(self.iv[:8])
             + b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83',
         ).decode()
         decrypted = (aes_decrypt_text(encrypted, password, 32))
         self.assertEqual(decrypted, self.secret_msg)

     def test_ecb_encrypt(self):
-        data = bytes_to_intlist(self.secret_msg)
-        encrypted = intlist_to_bytes(aes_ecb_encrypt(data, self.key))
+        data = list(self.secret_msg)
+        encrypted = bytes(aes_ecb_encrypt(data, self.key))
         self.assertEqual(
             encrypted,
             b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')

     def test_ecb_decrypt(self):
-        data = bytes_to_intlist(b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
-        decrypted = intlist_to_bytes(aes_ecb_decrypt(data, self.key, self.iv))
+        data = list(b'\xaa\x86]\x81\x97>\x02\x92\x9d\x1bR[[L/u\xd3&\xd1(h\xde{\x81\x94\xba\x02\xae\xbd\xa6\xd0:')
+        decrypted = bytes(aes_ecb_decrypt(data, self.key, self.iv))
         self.assertEqual(decrypted.rstrip(b'\x08'), self.secret_msg)

     def test_key_expansion(self):
         key = '4f6bdaa39e2f8cb07f5e722d9edef314'

-        self.assertEqual(key_expansion(bytes_to_intlist(bytearray.fromhex(key))), [
+        self.assertEqual(key_expansion(list(bytearray.fromhex(key))), [
             0x4F, 0x6B, 0xDA, 0xA3, 0x9E, 0x2F, 0x8C, 0xB0, 0x7F, 0x5E, 0x72, 0x2D, 0x9E, 0xDE, 0xF3, 0x14,
             0x53, 0x66, 0x20, 0xA8, 0xCD, 0x49, 0xAC, 0x18, 0xB2, 0x17, 0xDE, 0x35, 0x2C, 0xC9, 0x2D, 0x21,
             0x8C, 0xBE, 0xDD, 0xD9, 0x41, 0xF7, 0x71, 0xC1, 0xF3, 0xE0, 0xAF, 0xF4, 0xDF, 0x29, 0x82, 0xD5,
@@ -12,12 +12,7 @@ import struct

 from yt_dlp import compat
 from yt_dlp.compat import urllib  # isort: split
-from yt_dlp.compat import (
-    compat_etree_fromstring,
-    compat_expanduser,
-    compat_urllib_parse_unquote,  # noqa: TID251
-    compat_urllib_parse_urlencode,  # noqa: TID251
-)
+from yt_dlp.compat import compat_etree_fromstring, compat_expanduser
 from yt_dlp.compat.urllib.request import getproxies


@@ -43,39 +38,6 @@ class TestCompat(unittest.TestCase):
         finally:
             os.environ['HOME'] = old_home or ''

-    def test_compat_urllib_parse_unquote(self):
-        self.assertEqual(compat_urllib_parse_unquote('abc%20def'), 'abc def')
-        self.assertEqual(compat_urllib_parse_unquote('%7e/abc+def'), '~/abc+def')
-        self.assertEqual(compat_urllib_parse_unquote(''), '')
-        self.assertEqual(compat_urllib_parse_unquote('%'), '%')
-        self.assertEqual(compat_urllib_parse_unquote('%%'), '%%')
-        self.assertEqual(compat_urllib_parse_unquote('%%%'), '%%%')
-        self.assertEqual(compat_urllib_parse_unquote('%2F'), '/')
-        self.assertEqual(compat_urllib_parse_unquote('%2f'), '/')
-        self.assertEqual(compat_urllib_parse_unquote('%E6%B4%A5%E6%B3%A2'), '津波')
-        self.assertEqual(
-            compat_urllib_parse_unquote('''<meta property="og:description" content="%E2%96%81%E2%96%82%E2%96%83%E2%96%84%25%E2%96%85%E2%96%86%E2%96%87%E2%96%88" />
-%<a href="https://ar.wikipedia.org/wiki/%D8%AA%D8%B3%D9%88%D9%86%D8%A7%D9%85%D9%8A">%a'''),
-            '''<meta property="og:description" content="▁▂▃▄%▅▆▇█" />
-%<a href="https://ar.wikipedia.org/wiki/تسونامي">%a''')
-        self.assertEqual(
-            compat_urllib_parse_unquote('''%28%5E%E2%97%A3_%E2%97%A2%5E%29%E3%81%A3%EF%B8%BB%E3%83%87%E2%95%90%E4%B8%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%87%80 %E2%86%B6%I%Break%25Things%'''),
-            '''(^◣_◢^)っ︻デ═一 ⇀ ⇀ ⇀ ⇀ ⇀ ↶%I%Break%Things%''')
-
-    def test_compat_urllib_parse_unquote_plus(self):
-        self.assertEqual(urllib.parse.unquote_plus('abc%20def'), 'abc def')
-        self.assertEqual(urllib.parse.unquote_plus('%7e/abc+def'), '~/abc def')
-
-    def test_compat_urllib_parse_urlencode(self):
-        self.assertEqual(compat_urllib_parse_urlencode({'abc': 'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({'abc': b'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({b'abc': 'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode({b'abc': b'def'}), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([('abc', 'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([('abc', b'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', 'def')]), 'abc=def')
-        self.assertEqual(compat_urllib_parse_urlencode([(b'abc', b'def')]), 'abc=def')
-
     def test_compat_etree_fromstring(self):
         xml = '''
             <root foo="bar" spam="中文">
@@ -15,7 +15,6 @@ import threading
 from test.helper import http_server_port, try_rm
 from yt_dlp import YoutubeDL
 from yt_dlp.downloader.http import HttpFD
-from yt_dlp.utils import encodeFilename
 from yt_dlp.utils._utils import _YDLLogger as FakeLogger

 TEST_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -82,12 +81,12 @@ class TestHttpFD(unittest.TestCase):
         ydl = YoutubeDL(params)
         downloader = HttpFD(ydl, params)
         filename = 'testfile.mp4'
-        try_rm(encodeFilename(filename))
+        try_rm(filename)
         self.assertTrue(downloader.real_download(filename, {
             'url': f'http://127.0.0.1:{self.port}/{ep}',
         }), ep)
-        self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE, ep)
-        try_rm(encodeFilename(filename))
+        self.assertEqual(os.path.getsize(filename), TEST_SIZE, ep)
+        try_rm(filename)

     def download_all(self, params):
         for ep in ('regular', 'no-content-length', 'no-range', 'no-range-no-content-length'):
@@ -9,7 +9,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

 import math

-from yt_dlp.jsinterp import JS_Undefined, JSInterpreter
+from yt_dlp.jsinterp import JS_Undefined, JSInterpreter, js_number_to_string


 class NaN:
@@ -93,6 +93,16 @@ class TestJSInterpreter(unittest.TestCase):
         self._test('function f(){return 0 ?? 42;}', 0)
         self._test('function f(){return "life, the universe and everything" < 42;}', False)
         self._test('function f(){return 0 - 7 * - 6;}', 42)
+        self._test('function f(){return true << "5";}', 32)
+        self._test('function f(){return true << true;}', 2)
+        self._test('function f(){return "19" & "21.9";}', 17)
+        self._test('function f(){return "19" & false;}', 0)
+        self._test('function f(){return "11.0" >> "2.1";}', 2)
+        self._test('function f(){return 5 ^ 9;}', 12)
+        self._test('function f(){return 0.0 << NaN}', 0)
+        self._test('function f(){return null << undefined}', 0)
+        # TODO: Does not work due to number too large
+        # self._test('function f(){return 21 << 4294967297}', 42)

     def test_array_access(self):
         self._test('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2.0] = 7; return x;}', [5, 2, 7])
@@ -431,6 +441,27 @@ class TestJSInterpreter(unittest.TestCase):
         self._test('function f(){return "012345678".slice(-1, 1)}', '')
         self._test('function f(){return "012345678".slice(-3, -1)}', '67')

+    def test_js_number_to_string(self):
+        for test, radix, expected in [
+            (0, None, '0'),
+            (-0, None, '0'),
+            (0.0, None, '0'),
+            (-0.0, None, '0'),
+            (math.nan, None, 'NaN'),
+            (-math.nan, None, 'NaN'),
+            (math.inf, None, 'Infinity'),
+            (-math.inf, None, '-Infinity'),
+            (10 ** 21.5, 8, '526665530627250154000000'),
+            (6, 2, '110'),
+            (254, 16, 'fe'),
+            (-10, 2, '-1010'),
+            (-0xff, 2, '-11111111'),
+            (0.1 + 0.2, 16, '0.4cccccccccccd'),
+            (1234.1234, 10, '1234.1234'),
+            # (1000000000000000128, 10, '1000000000000000100')
+        ]:
+            assert js_number_to_string(test, radix) == expected


 if __name__ == '__main__':
     unittest.main()
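The new `js_number_to_string` helper mirrors JavaScript's `Number.prototype.toString`, including radix handling; e.g. `(254).toString(16)` is `'fe'` in a JS console. A quick usage sketch, reusing only calls that appear in the tests above:

```python
from yt_dlp.jsinterp import js_number_to_string

print(js_number_to_string(254, 16))      # 'fe', like (254).toString(16) in JS
print(js_number_to_string(1234.1234, 10))  # '1234.1234', JS default formatting
```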
@@ -216,7 +216,9 @@ class SocksWebSocketTestRequestHandler(SocksTestRequestHandler):
         protocol = websockets.ServerProtocol()
         connection = websockets.sync.server.ServerConnection(socket=self.request, protocol=protocol, close_timeout=0)
         connection.handshake()
-        connection.send(json.dumps(self.socks_info))
+        for message in connection:
+            if message == 'socks_info':
+                connection.send(json.dumps(self.socks_info))
         connection.close()
@@ -481,7 +481,7 @@ class TestTraversalHelpers:
             'id': 'name',
             'data': 'content',
             'url': 'url',
-        }, all, {subs_list_to_dict}]) == {
+        }, all, {subs_list_to_dict(lang=None)}]) == {
             'de': [{'url': 'https://example.com/subs/de.ass'}],
             'en': [{'data': 'content'}],
         }, 'subs with mandatory items missing should be filtered'
@@ -507,6 +507,54 @@ class TestTraversalHelpers:
             {'url': 'https://example.com/subs/en1', 'ext': 'ext'},
             {'url': 'https://example.com/subs/en2', 'ext': 'ext'},
         ]}, '`quality` key should sort subtitle list accordingly'
+        assert traverse_obj([
+            {'name': 'de', 'url': 'https://example.com/subs/de.ass'},
+            {'name': 'de'},
+            {'name': 'en', 'content': 'content'},
+            {'url': 'https://example.com/subs/en'},
+        ], [..., {
+            'id': 'name',
+            'url': 'url',
+            'data': 'content',
+        }, all, {subs_list_to_dict(lang='en')}]) == {
+            'de': [{'url': 'https://example.com/subs/de.ass'}],
+            'en': [
+                {'data': 'content'},
+                {'url': 'https://example.com/subs/en'},
+            ],
+        }, 'optionally provided lang should be used if no id available'
+        assert traverse_obj([
+            {'name': 1, 'url': 'https://example.com/subs/de1'},
+            {'name': {}, 'url': 'https://example.com/subs/de2'},
+            {'name': 'de', 'ext': 1, 'url': 'https://example.com/subs/de3'},
+            {'name': 'de', 'ext': {}, 'url': 'https://example.com/subs/de4'},
+        ], [..., {
+            'id': 'name',
+            'url': 'url',
+            'ext': 'ext',
+        }, all, {subs_list_to_dict(lang=None)}]) == {
+            'de': [
+                {'url': 'https://example.com/subs/de3'},
+                {'url': 'https://example.com/subs/de4'},
+            ],
+        }, 'non str types should be ignored for id and ext'
+        assert traverse_obj([
+            {'name': 1, 'url': 'https://example.com/subs/de1'},
+            {'name': {}, 'url': 'https://example.com/subs/de2'},
+            {'name': 'de', 'ext': 1, 'url': 'https://example.com/subs/de3'},
+            {'name': 'de', 'ext': {}, 'url': 'https://example.com/subs/de4'},
+        ], [..., {
+            'id': 'name',
+            'url': 'url',
+            'ext': 'ext',
+        }, all, {subs_list_to_dict(lang='de')}]) == {
+            'de': [
+                {'url': 'https://example.com/subs/de1'},
+                {'url': 'https://example.com/subs/de2'},
+                {'url': 'https://example.com/subs/de3'},
+                {'url': 'https://example.com/subs/de4'},
+            ],
+        }, 'non str types should be replaced by default id'

     def test_trim_str(self):
         with pytest.raises(TypeError):
@@ -525,7 +573,7 @@ class TestTraversalHelpers:
     def test_unpack(self):
         assert unpack(lambda *x: ''.join(map(str, x)))([1, 2, 3]) == '123'
         assert unpack(join_nonempty)([1, 2, 3]) == '1-2-3'
-        assert unpack(join_nonempty(delim=' '))([1, 2, 3]) == '1 2 3'
+        assert unpack(join_nonempty, delim=' ')([1, 2, 3]) == '1 2 3'
         with pytest.raises(TypeError):
             unpack(join_nonempty)()
         with pytest.raises(TypeError):
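A usage sketch for the `lang` parameter exercised above (the import path follows what the test suite appears to use; treat it as an assumption):

```python
# Hedged sketch: entries without a usable language id fall back to `lang`
# instead of being dropped; lang=None keeps the old filtering behavior.
from yt_dlp.utils.traversal import subs_list_to_dict, traverse_obj  # assumed path

raw = [
    {'name': 'de', 'url': 'https://example.com/subs/de.vtt'},
    {'url': 'https://example.com/subs/unknown.vtt'},  # no language id
]
subtitles = traverse_obj(raw, [
    ..., {'id': 'name', 'url': 'url'}, all, {subs_list_to_dict(lang='en')}])
# -> {'de': [...], 'en': [{'url': 'https://example.com/subs/unknown.vtt'}]}
```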
@@ -21,7 +21,6 @@ import xml.etree.ElementTree
 from yt_dlp.compat import (
     compat_etree_fromstring,
     compat_HTMLParseError,
-    compat_os_name,
 )
 from yt_dlp.utils import (
     Config,
@@ -49,7 +48,6 @@ from yt_dlp.utils import (
     dfxp2srt,
     encode_base_n,
     encode_compat_str,
-    encodeFilename,
     expand_path,
     extract_attributes,
     extract_basic_auth,
@@ -69,10 +67,8 @@ from yt_dlp.utils import (
     get_elements_html_by_class,
     get_elements_text_and_html_by_attribute,
     int_or_none,
-    intlist_to_bytes,
     iri_to_uri,
     is_html,
-    join_nonempty,
     js_to_json,
     limit_length,
     locked_file,
@@ -253,17 +249,36 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(sanitize_path('abc/def...'), 'abc\\def..#')
         self.assertEqual(sanitize_path('abc.../def'), 'abc..#\\def')
         self.assertEqual(sanitize_path('abc.../def...'), 'abc..#\\def..#')

-        self.assertEqual(sanitize_path('../abc'), '..\\abc')
-        self.assertEqual(sanitize_path('../../abc'), '..\\..\\abc')
-        self.assertEqual(sanitize_path('./abc'), 'abc')
-        self.assertEqual(sanitize_path('./../abc'), '..\\abc')
-
-        self.assertEqual(sanitize_path('\\abc'), '\\abc')
-        self.assertEqual(sanitize_path('C:abc'), 'C:abc')
-        self.assertEqual(sanitize_path('C:abc\\..\\'), 'C:..')
         self.assertEqual(sanitize_path('C:\\abc:%(title)s.%(ext)s'), 'C:\\abc#%(title)s.%(ext)s')

+        # Check with nt._path_normpath if available
+        try:
+            import nt
+
+            nt_path_normpath = getattr(nt, '_path_normpath', None)
+        except Exception:
+            nt_path_normpath = None
+
+        for test, expected in [
+            ('C:\\', 'C:\\'),
+            ('../abc', '..\\abc'),
+            ('../../abc', '..\\..\\abc'),
+            ('./abc', 'abc'),
+            ('./../abc', '..\\abc'),
+            ('\\abc', '\\abc'),
+            ('C:abc', 'C:abc'),
+            ('C:abc\\..\\', 'C:'),
+            ('C:abc\\..\\def\\..\\..\\', 'C:..'),
+            ('C:\\abc\\xyz///..\\def\\', 'C:\\abc\\def'),
+            ('abc/../', '.'),
+            ('./abc/../', '.'),
+        ]:
+            result = sanitize_path(test)
+            assert result == expected, f'{test} was incorrectly resolved'
+            assert result == sanitize_path(result), f'{test} changed after sanitizing again'
+            if nt_path_normpath:
+                assert result == nt_path_normpath(test), f'{test} does not match nt._path_normpath'

     def test_sanitize_url(self):
         self.assertEqual(sanitize_url('//foo.bar'), 'http://foo.bar')
         self.assertEqual(sanitize_url('httpss://foo.bar'), 'https://foo.bar')
@@ -567,10 +582,10 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(res_data, {'a': 'b', 'c': 'd'})

     def test_shell_quote(self):
-        args = ['ffmpeg', '-i', encodeFilename('ñ€ß\'.mp4')]
+        args = ['ffmpeg', '-i', 'ñ€ß\'.mp4']
         self.assertEqual(
             shell_quote(args),
-            """ffmpeg -i 'ñ€ß'"'"'.mp4'""" if compat_os_name != 'nt' else '''ffmpeg -i "ñ€ß'.mp4"''')
+            """ffmpeg -i 'ñ€ß'"'"'.mp4'""" if os.name != 'nt' else '''ffmpeg -i "ñ€ß'.mp4"''')

     def test_float_or_none(self):
         self.assertEqual(float_or_none('42.42'), 42.42)
@@ -1310,15 +1325,10 @@ class TestUtil(unittest.TestCase):
         self.assertEqual(clean_html('a:\n "b"'), 'a: "b"')
         self.assertEqual(clean_html('a<br>\xa0b'), 'a\nb')

-    def test_intlist_to_bytes(self):
-        self.assertEqual(
-            intlist_to_bytes([0, 1, 127, 128, 255]),
-            b'\x00\x01\x7f\x80\xff')
-
     def test_args_to_str(self):
         self.assertEqual(
             args_to_str(['foo', 'ba/r', '-baz', '2 be', '']),
-            'foo ba/r -baz \'2 be\' \'\'' if compat_os_name != 'nt' else 'foo ba/r -baz "2 be" ""',
+            'foo ba/r -baz \'2 be\' \'\'' if os.name != 'nt' else 'foo ba/r -baz "2 be" ""',
         )

     def test_parse_filesize(self):
@@ -2118,7 +2128,7 @@ Line 1
         assert extract_basic_auth('http://user:@foo.bar') == ('http://foo.bar', 'Basic dXNlcjo=')
         assert extract_basic_auth('http://user:pass@foo.bar') == ('http://foo.bar', 'Basic dXNlcjpwYXNz')

-    @unittest.skipUnless(compat_os_name == 'nt', 'Only relevant on Windows')
+    @unittest.skipUnless(os.name == 'nt', 'Only relevant on Windows')
     def test_windows_escaping(self):
         tests = [
             'test"&',
@@ -2158,10 +2168,6 @@ Line 1
         assert int_or_none(v=10) == 10, 'keyword passed positional should call function'
         assert int_or_none(scale=0.1)(10) == 100, 'call after partial application should call the function'

-        assert callable(join_nonempty(delim=', ')), 'varargs positional should apply partially'
-        assert callable(join_nonempty()), 'varargs positional should apply partially'
-        assert join_nonempty(None, delim=', ') == '', 'passed varargs should call the function'
-

 if __name__ == '__main__':
     unittest.main()
@@ -68,6 +68,16 @@ _SIG_TESTS = [
         '2aq0aqSyOoJXtK73m-uME_jv7-pT15gOFC02RFkGMqWpzEICs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
         'AOq0QJ8wRAIgXmPlOPSBkkUs1bYFYlJCfe29xx8j7v1pDL2QwbdV96sCIEzpWqMGkFR20CFOg51Tp-7vj_EMu-m37KtXJoOySqa0',
     ),
+    (
+        'https://www.youtube.com/s/player/3bb1f723/player_ias.vflset/en_US/base.js',
+        '2aq0aqSyOoJXtK73m-uME_jv7-pT15gOFC02RFkGMqWpzEICs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
+        'MyOSJXtKI3m-uME_jv7-pT12gOFC02RFkGoqWpzE0Cs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
+    ),
+    (
+        'https://www.youtube.com/s/player/2f1832d2/player_ias.vflset/en_US/base.js',
+        '2aq0aqSyOoJXtK73m-uME_jv7-pT15gOFC02RFkGMqWpzEICs69VdbwQ0LDp1v7j8xx92efCJlYFYb1sUkkBSPOlPmXgIARw8JQ0qOAOAA',
+        '0QJ8wRAIgXmPlOPSBkkUs1bYFYlJCfe29xxAj7v1pDL0QwbdV96sCIEzpWqMGkFR20CFOg51Tp-7vj_EMu-m37KtXJ2OySqa0q',
+    ),
 ]

 _NSIG_TESTS = [
@@ -183,6 +193,18 @@ _NSIG_TESTS = [
         'https://www.youtube.com/s/player/b12cc44b/player_ias.vflset/en_US/base.js',
         'keLa5R2U00sR9SQK', 'N1OGyujjEwMnLw',
     ),
+    (
+        'https://www.youtube.com/s/player/3bb1f723/player_ias.vflset/en_US/base.js',
+        'gK15nzVyaXE9RsMP3z', 'ZFFWFLPWx9DEgQ',
+    ),
+    (
+        'https://www.youtube.com/s/player/2f1832d2/player_ias.vflset/en_US/base.js',
+        'YWt1qdbe8SAfkoPHW5d', 'RrRjWQOJmBiP',
+    ),
+    (
+        'https://www.youtube.com/s/player/9c6dfc4a/player_ias.vflset/en_US/base.js',
+        'jbu7ylIosQHyJyJV', 'uwI0ESiynAmhNg',
+    ),
 ]


@@ -254,8 +276,11 @@ def signature(jscode, sig_input):


 def n_sig(jscode, sig_input):
-    funcname = YoutubeIE(FakeYDL())._extract_n_function_name(jscode)
-    return JSInterpreter(jscode).call_function(funcname, sig_input)
+    ie = YoutubeIE(FakeYDL())
+    funcname = ie._extract_n_function_name(jscode)
+    jsi = JSInterpreter(jscode)
+    func = jsi.extract_function_from_code(*ie._fixup_n_function_code(*jsi.extract_function_code(funcname)))
+    return func([sig_input])


 make_sig_test = t_factory(
@@ -26,7 +26,7 @@ import unicodedata

 from .cache import Cache
 from .compat import urllib  # isort: split
-from .compat import compat_os_name, urllib_req_to_req
+from .compat import urllib_req_to_req
 from .cookies import CookieLoadError, LenientSimpleCookie, load_cookies
 from .downloader import FFmpegFD, get_suitable_downloader, shorten_protocol_name
 from .downloader.rtmp import rtmpdump_version
@@ -109,7 +109,6 @@ from .utils import (
     determine_ext,
     determine_protocol,
     encode_compat_str,
-    encodeFilename,
     escapeHTML,
     expand_path,
     extract_basic_auth,
@@ -167,7 +166,7 @@ from .utils.networking import (
 )
 from .version import CHANNEL, ORIGIN, RELEASE_GIT_HEAD, VARIANT, __version__

-if compat_os_name == 'nt':
+if os.name == 'nt':
     import ctypes


@@ -267,7 +266,9 @@ class YoutubeDL:
     outtmpl_na_placeholder: Placeholder for unavailable meta fields.
     restrictfilenames: Do not allow "&" and spaces in file names
     trim_file_name:    Limit length of filename (extension excluded)
-    windowsfilenames:  Force the filenames to be windows compatible
+    windowsfilenames:  True: Force filenames to be Windows compatible
+                       False: Sanitize filenames only minimally
+                       This option has no effect when running on Windows
     ignoreerrors:      Do not stop on download/postprocessing errors.
                        Can be 'only_download' to ignore only download errors.
                        Default is 'only_download' for CLI, but False for API
@@ -282,7 +283,10 @@ class YoutubeDL:
     lazy_playlist:     Process playlist entries as they are received.
     matchtitle:        Download only matching titles.
     rejecttitle:       Reject downloads for matching titles.
-    logger:            Log messages to a logging.Logger instance.
+    logger:            A class having a `debug`, `warning` and `error` function where
+                       each has a single string parameter, the message to be logged.
+                       For compatibility reasons, both debug and info messages are passed to `debug`.
+                       A debug message will have a prefix of `[debug] ` to discern it from info messages.
     logtostderr:       Print everything to stderr instead of stdout.
     consoletitle:      Display progress in the console window's titlebar.
     writedescription:  Write the video description to a .description file
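Per the clarified `logger` docstring, any object with those three methods works; a minimal sketch bridging to the stdlib:

```python
import logging

class MyLogger:
    # matches the documented contract: debug/warning/error, one message argument
    def debug(self, msg):
        # both debug and info arrive here; the '[debug] ' prefix marks true debug
        if msg.startswith('[debug] '):
            logging.debug(msg)
        else:
            logging.info(msg)

    def warning(self, msg):
        logging.warning(msg)

    def error(self, msg):
        logging.error(msg)

# usage: YoutubeDL({'logger': MyLogger()})
```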
@@ -594,7 +598,7 @@ class YoutubeDL:
         # NB: Keep in sync with the docstring of extractor/common.py
         'url', 'manifest_url', 'manifest_stream_number', 'ext', 'format', 'format_id', 'format_note',
         'width', 'height', 'aspect_ratio', 'resolution', 'dynamic_range', 'tbr', 'abr', 'acodec', 'asr', 'audio_channels',
-        'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx', 'rows', 'columns',
+        'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx', 'rows', 'columns', 'hls_media_playlist_data',
         'player_url', 'protocol', 'fragment_base_url', 'fragments', 'is_from_start', 'is_dash_periods', 'request_data',
         'preference', 'language', 'language_preference', 'quality', 'source_preference', 'cookies',
         'http_headers', 'stretched_ratio', 'no_resume', 'has_drm', 'extra_param_to_segment_url', 'extra_param_to_key_url',
@@ -643,7 +647,7 @@ class YoutubeDL:
             out=stdout,
             error=sys.stderr,
             screen=sys.stderr if self.params.get('quiet') else stdout,
-            console=None if compat_os_name == 'nt' else next(
+            console=None if os.name == 'nt' else next(
                 filter(supports_terminal_sequences, (sys.stderr, sys.stdout)), None),
         )

@@ -952,7 +956,7 @@ class YoutubeDL:
         self._write_string(f'{self._bidi_workaround(message)}\n', self._out_files.error, only_once=only_once)

     def _send_console_code(self, code):
-        if compat_os_name == 'nt' or not self._out_files.console:
+        if os.name == 'nt' or not self._out_files.console:
             return
         self._write_string(code, self._out_files.console)

@@ -960,7 +964,7 @@ class YoutubeDL:
         if not self.params.get('consoletitle', False):
             return
         message = remove_terminal_sequences(message)
-        if compat_os_name == 'nt':
+        if os.name == 'nt':
             if ctypes.windll.kernel32.GetConsoleWindow():
                 # c_wchar_p() might not be necessary if `message` is
                 # already of type unicode()
@@ -1117,7 +1121,7 @@ class YoutubeDL:
     def raise_no_formats(self, info, forced=False, *, msg=None):
         has_drm = info.get('_has_drm')
         ignored, expected = self.params.get('ignore_no_formats_error'), bool(msg)
-        msg = msg or has_drm and 'This video is DRM protected' or 'No video formats found!'
+        msg = msg or (has_drm and 'This video is DRM protected') or 'No video formats found!'
         if forced or not ignored:
             raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'],
                                  expected=has_drm or ignored or expected)
@@ -1193,8 +1197,7 @@ class YoutubeDL:

     def prepare_outtmpl(self, outtmpl, info_dict, sanitize=False):
         """ Make the outtmpl and info_dict suitable for substitution: ydl.escape_outtmpl(outtmpl) % info_dict
-        @param sanitize    Whether to sanitize the output as a filename.
-                           For backward compatibility, a function can also be passed
+        @param sanitize    Whether to sanitize the output as a filename
         """

         info_dict.setdefault('epoch', int(time.time()))  # keep epoch consistent once set
@@ -1310,14 +1313,23 @@ class YoutubeDL:

         na = self.params.get('outtmpl_na_placeholder', 'NA')

-        def filename_sanitizer(key, value, restricted=self.params.get('restrictfilenames')):
+        def filename_sanitizer(key, value, restricted):
             return sanitize_filename(str(value), restricted=restricted, is_id=(
                 bool(re.search(r'(^|[_.])id(\.|$)', key))
                 if 'filename-sanitization' in self.params['compat_opts']
                 else NO_DEFAULT))

-        sanitizer = sanitize if callable(sanitize) else filename_sanitizer
-        sanitize = bool(sanitize)
+        if callable(sanitize):
+            self.deprecation_warning('Passing a callable "sanitize" to YoutubeDL.prepare_outtmpl is deprecated')
+        elif not sanitize:
+            pass
+        elif (sys.platform != 'win32' and not self.params.get('restrictfilenames')
+                and self.params.get('windowsfilenames') is False):
+            def sanitize(key, value):
+                return str(value).replace('/', '\u29F8').replace('\0', '')
+        else:
+            def sanitize(key, value):
+                return filename_sanitizer(key, value, restricted=self.params.get('restrictfilenames'))

         def _dumpjson_default(obj):
             if isinstance(obj, (set, LazyList)):
@@ -1400,13 +1412,13 @@ class YoutubeDL:

             if sanitize:
                 # If value is an object, sanitize might convert it to a string
-                # So we convert it to repr first
+                # So we manually convert it before sanitizing
                 if fmt[-1] == 'r':
                     value, fmt = repr(value), str_fmt
                 elif fmt[-1] == 'a':
                     value, fmt = ascii(value), str_fmt
                 if fmt[-1] in 'csra':
-                    value = sanitizer(last_field, value)
+                    value = sanitize(last_field, value)

             key = '{}\0{}'.format(key.replace('%', '%\0'), outer_mobj.group('format'))
             TMPL_DICT[key] = value
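The `prepare_outtmpl` machinery these hunks touch is most easily seen through `prepare_filename`, which renders an output template against an info dict. A hedged sketch (the info dict is made up):

```python
from yt_dlp import YoutubeDL

ydl = YoutubeDL({'outtmpl': '%(title)s [%(id)s].%(ext)s'})
# '/' in the title is sanitized during substitution (replaced on POSIX,
# per the minimal-sanitization branch added above when windowsfilenames=False)
print(ydl.prepare_filename({'id': 'abc123', 'title': 'a/b', 'ext': 'mp4'}))
```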
@@ -1948,6 +1960,7 @@ class YoutubeDL:
             'playlist_uploader_id': ie_result.get('uploader_id'),
             'playlist_channel': ie_result.get('channel'),
             'playlist_channel_id': ie_result.get('channel_id'),
+            'playlist_webpage_url': ie_result.get('webpage_url'),
             **kwargs,
         }
         if strict:
@@ -2108,7 +2121,7 @@ class YoutubeDL:
         m = operator_rex.fullmatch(filter_spec)
         if m:
             try:
-                comparison_value = int(m.group('value'))
+                comparison_value = float(m.group('value'))
             except ValueError:
                 comparison_value = parse_filesize(m.group('value'))
                 if comparison_value is None:
@@ -2196,7 +2209,7 @@ class YoutubeDL:
     def _default_format_spec(self, info_dict):
         prefer_best = (
             self.params['outtmpl']['default'] == '-'
-            or info_dict.get('is_live') and not self.params.get('live_from_start'))
+            or (info_dict.get('is_live') and not self.params.get('live_from_start')))

         def can_merge():
             merger = FFmpegMergerPP(self)
@@ -2365,7 +2378,7 @@ class YoutubeDL:
             vexts=[f['ext'] for f in video_fmts],
             aexts=[f['ext'] for f in audio_fmts],
             preferences=(try_call(lambda: self.params['merge_output_format'].split('/'))
-                         or self.params.get('prefer_free_formats') and ('webm', 'mkv')))
+                         or (self.params.get('prefer_free_formats') and ('webm', 'mkv'))))

         filtered = lambda *keys: filter(None, (traverse_obj(fmt, *keys) for fmt in formats_info))
@@ -3255,9 +3268,9 @@ class YoutubeDL:

         if full_filename is None:
             return
-        if not self._ensure_dir_exists(encodeFilename(full_filename)):
+        if not self._ensure_dir_exists(full_filename):
             return
-        if not self._ensure_dir_exists(encodeFilename(temp_filename)):
+        if not self._ensure_dir_exists(temp_filename):
             return

         if self._write_description('video', info_dict,
@@ -3289,16 +3302,16 @@ class YoutubeDL:
         if self.params.get('writeannotations', False):
             annofn = self.prepare_filename(info_dict, 'annotation')
             if annofn:
-                if not self._ensure_dir_exists(encodeFilename(annofn)):
+                if not self._ensure_dir_exists(annofn):
                     return
-                if not self.params.get('overwrites', True) and os.path.exists(encodeFilename(annofn)):
+                if not self.params.get('overwrites', True) and os.path.exists(annofn):
                     self.to_screen('[info] Video annotations are already present')
                 elif not info_dict.get('annotations'):
                     self.report_warning('There are no annotations to write.')
                 else:
                     try:
                         self.to_screen('[info] Writing video annotations to: ' + annofn)
-                        with open(encodeFilename(annofn), 'w', encoding='utf-8') as annofile:
+                        with open(annofn, 'w', encoding='utf-8') as annofile:
                             annofile.write(info_dict['annotations'])
                     except (KeyError, TypeError):
                         self.report_warning('There are no annotations to write.')
@@ -3314,14 +3327,14 @@ class YoutubeDL:
                 f'Cannot write internet shortcut file because the actual URL of "{info_dict["webpage_url"]}" is unknown')
             return True
         linkfn = replace_extension(self.prepare_filename(info_dict, 'link'), link_type, info_dict.get('ext'))
-        if not self._ensure_dir_exists(encodeFilename(linkfn)):
+        if not self._ensure_dir_exists(linkfn):
             return False
-        if self.params.get('overwrites', True) and os.path.exists(encodeFilename(linkfn)):
+        if self.params.get('overwrites', True) and os.path.exists(linkfn):
             self.to_screen(f'[info] Internet shortcut (.{link_type}) is already present')
             return True
         try:
             self.to_screen(f'[info] Writing internet shortcut (.{link_type}) to: {linkfn}')
-            with open(encodeFilename(to_high_limit_path(linkfn)), 'w', encoding='utf-8',
+            with open(to_high_limit_path(linkfn), 'w', encoding='utf-8',
                       newline='\r\n' if link_type == 'url' else '\n') as linkfile:
                 template_vars = {'url': url}
                 if link_type == 'desktop':
@@ -3352,7 +3365,7 @@ class YoutubeDL:

         if self.params.get('skip_download'):
             info_dict['filepath'] = temp_filename
-            info_dict['__finaldir'] = os.path.dirname(os.path.abspath(encodeFilename(full_filename)))
+            info_dict['__finaldir'] = os.path.dirname(os.path.abspath(full_filename))
             info_dict['__files_to_move'] = files_to_move
             replace_info_dict(self.run_pp(MoveFilesAfterDownloadPP(self, False), info_dict))
             info_dict['__write_download_archive'] = self.params.get('force_write_download_archive')
@@ -3482,7 +3495,7 @@ class YoutubeDL:
                     self.report_file_already_downloaded(dl_filename)

                 dl_filename = dl_filename or temp_filename
-                info_dict['__finaldir'] = os.path.dirname(os.path.abspath(encodeFilename(full_filename)))
+                info_dict['__finaldir'] = os.path.dirname(os.path.abspath(full_filename))

             except network_exceptions as err:
                 self.report_error(f'unable to download video data: {err}')
@@ -3541,8 +3554,8 @@ class YoutubeDL:
                              and info_dict.get('container') == 'm4a_dash',
                              'writing DASH m4a. Only some players support this container',
                              FFmpegFixupM4aPP)
-                ffmpeg_fixup(downloader == 'hlsnative' and not self.params.get('hls_use_mpegts')
-                             or info_dict.get('is_live') and self.params.get('hls_use_mpegts') is None,
+                ffmpeg_fixup((downloader == 'hlsnative' and not self.params.get('hls_use_mpegts'))
+                             or (info_dict.get('is_live') and self.params.get('hls_use_mpegts') is None),
                              'Possible MPEG-TS in MP4 container or malformed AAC timestamps',
                              FFmpegFixupM3u8PP)
                 ffmpeg_fixup(downloader == 'dashsegments'
@@ -4297,7 +4310,7 @@ class YoutubeDL:
             else:
                 try:
                     self.to_screen(f'[info] Writing {label} description to: {descfn}')
-                    with open(encodeFilename(descfn), 'w', encoding='utf-8') as descfile:
+                    with open(descfn, 'w', encoding='utf-8') as descfile:
                         descfile.write(ie_result['description'])
                 except OSError:
                     self.report_error(f'Cannot write {label} description file {descfn}')
@@ -4381,7 +4394,9 @@ class YoutubeDL:
             return None

         for idx, t in list(enumerate(thumbnails))[::-1]:
-            thumb_ext = (f'{t["id"]}.' if multiple else '') + determine_ext(t['url'], 'jpg')
+            thumb_ext = t.get('ext') or determine_ext(t['url'], 'jpg')
+            if multiple:
+                thumb_ext = f'{t["id"]}.{thumb_ext}'
             thumb_display_id = f'{label} thumbnail {t["id"]}'
             thumb_filename = replace_extension(filename, thumb_ext, info_dict.get('ext'))
             thumb_filename_final = replace_extension(thumb_filename_base, thumb_ext, info_dict.get('ext'))
@@ -4397,7 +4412,7 @@ class YoutubeDL:
                 try:
                     uf = self.urlopen(Request(t['url'], headers=t.get('http_headers', {})))
                     self.to_screen(f'[info] Writing {thumb_display_id} to: {thumb_filename}')
-                    with open(encodeFilename(thumb_filename), 'wb') as thumbf:
+                    with open(thumb_filename, 'wb') as thumbf:
                         shutil.copyfileobj(uf, thumbf)
                     ret.append((thumb_filename, thumb_filename_final))
                     t['filepath'] = thumb_filename
@@ -14,7 +14,6 @@ import os
 import re
 import traceback

-from .compat import compat_os_name
 from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS, CookieLoadError
 from .downloader.external import get_external_downloader
 from .extractor import list_extractor_classes
@@ -44,7 +43,6 @@ from .utils import (
     GeoUtils,
     PlaylistEntries,
     SameFileError,
-    decodeOption,
     download_range_func,
     expand_path,
     float_or_none,
@@ -263,9 +261,11 @@ def validate_options(opts):
         elif value in ('inf', 'infinite'):
             return float('inf')
         try:
-            return int(value)
+            int_value = int(value)
         except (TypeError, ValueError):
             validate(False, f'{name} retry count', value)
+        validate_positive(f'{name} retry count', int_value)
+        return int_value

     opts.retries = parse_retries('download', opts.retries)
     opts.fragment_retries = parse_retries('fragment', opts.fragment_retries)
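The rework above validates the retry count after conversion rather than returning it raw. A standalone re-sketch of the resulting logic (the real helper is a closure inside `validate_options`, so names and error handling here are illustrative):

```python
def parse_retries(name, value):
    """Re-sketch: 'inf'/'infinite' -> float('inf'), else a validated int."""
    if value is None:
        return None
    if value in ('inf', 'infinite'):
        return float('inf')
    try:
        int_value = int(value)
    except (TypeError, ValueError):
        raise ValueError(f'invalid {name} retry count: {value!r}')
    if int_value < 0:  # the added validate_positive step
        raise ValueError(f'{name} retry count must not be negative')
    return int_value

assert parse_retries('download', 'infinite') == float('inf')
assert parse_retries('fragment', '3') == 3
```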
@@ -295,18 +295,20 @@ def validate_options(opts):
             raise ValueError(f'invalid {key} retry sleep expression {expr!r}')

     # Bytes
-    def validate_bytes(name, value):
+    def validate_bytes(name, value, strict_positive=False):
         if value is None:
             return None
         numeric_limit = parse_bytes(value)
-        validate(numeric_limit is not None, 'rate limit', value)
+        validate(numeric_limit is not None, name, value)
+        if strict_positive:
+            validate_positive(name, numeric_limit, True)
         return numeric_limit

-    opts.ratelimit = validate_bytes('rate limit', opts.ratelimit)
+    opts.ratelimit = validate_bytes('rate limit', opts.ratelimit, True)
     opts.throttledratelimit = validate_bytes('throttled rate limit', opts.throttledratelimit)
     opts.min_filesize = validate_bytes('min filesize', opts.min_filesize)
     opts.max_filesize = validate_bytes('max filesize', opts.max_filesize)
-    opts.buffersize = validate_bytes('buffer size', opts.buffersize)
+    opts.buffersize = validate_bytes('buffer size', opts.buffersize, True)
     opts.http_chunk_size = validate_bytes('http chunk size', opts.http_chunk_size)

     # Output templates
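`validate_bytes` builds on `parse_bytes`, which turns a human-readable size into an integer byte count or `None` when it cannot parse the input — the `None` case is exactly what the `validate(...)` call above rejects:

```python
from yt_dlp.utils import parse_bytes

# binary multipliers are used, so '1.5M' -> 1.5 * 1024 * 1024 = 1572864
print(parse_bytes('1.5M'))
print(parse_bytes('not-a-size'))  # None -> rejected by validate_bytes()
```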
@@ -883,8 +885,8 @@ def parse_options(argv=None):
         'listsubtitles': opts.listsubtitles,
         'subtitlesformat': opts.subtitlesformat,
         'subtitleslangs': opts.subtitleslangs,
-        'matchtitle': decodeOption(opts.matchtitle),
-        'rejecttitle': decodeOption(opts.rejecttitle),
+        'matchtitle': opts.matchtitle,
+        'rejecttitle': opts.rejecttitle,
         'max_downloads': opts.max_downloads,
         'prefer_free_formats': opts.prefer_free_formats,
         'trim_file_name': opts.trim_file_name,
@@ -1053,7 +1055,7 @@ def _real_main(argv=None):
             ydl.warn_if_short_id(args)

             # Show a useful error message and wait for keypress if not launched from shell on Windows
-            if not args and compat_os_name == 'nt' and getattr(sys, 'frozen', False):
+            if not args and os.name == 'nt' and getattr(sys, 'frozen', False):
                 import ctypes.wintypes
                 import msvcrt

@@ -1064,7 +1066,7 @@ def _real_main(argv=None):
                 # If we only have a single process attached, then the executable was double clicked
                 # When using `pyinstaller` with `--onefile`, two processes get attached
                 is_onefile = hasattr(sys, '_MEIPASS') and os.path.basename(sys._MEIPASS).startswith('_MEI')
-                if attached_processes == 1 or is_onefile and attached_processes == 2:
+                if attached_processes == 1 or (is_onefile and attached_processes == 2):
                     print(parser._generate_error_message(
                         'Do not double-click the executable, instead call it from a command line.\n'
                         'Please read the README for further information on how to use yt-dlp: '
@@ -1111,9 +1113,9 @@ def main(argv=None):
 from .extractor import gen_extractors, list_extractors

 __all__ = [
-    'main',
     'YoutubeDL',
-    'parse_options',
     'gen_extractors',
     'list_extractors',
+    'main',
+    'parse_options',
 ]
@@ -3,7 +3,6 @@ from math import ceil

 from .compat import compat_ord
 from .dependencies import Cryptodome
-from .utils import bytes_to_intlist, intlist_to_bytes

 if Cryptodome.AES:
     def aes_cbc_decrypt_bytes(data, key, iv):
@@ -17,15 +16,15 @@ if Cryptodome.AES:
 else:
     def aes_cbc_decrypt_bytes(data, key, iv):
         """ Decrypt bytes with AES-CBC using native implementation since pycryptodome is unavailable """
-        return intlist_to_bytes(aes_cbc_decrypt(*map(bytes_to_intlist, (data, key, iv))))
+        return bytes(aes_cbc_decrypt(*map(list, (data, key, iv))))

     def aes_gcm_decrypt_and_verify_bytes(data, key, tag, nonce):
         """ Decrypt bytes with AES-GCM using native implementation since pycryptodome is unavailable """
-        return intlist_to_bytes(aes_gcm_decrypt_and_verify(*map(bytes_to_intlist, (data, key, tag, nonce))))
+        return bytes(aes_gcm_decrypt_and_verify(*map(list, (data, key, tag, nonce))))


 def aes_cbc_encrypt_bytes(data, key, iv, **kwargs):
-    return intlist_to_bytes(aes_cbc_encrypt(*map(bytes_to_intlist, (data, key, iv)), **kwargs))
+    return bytes(aes_cbc_encrypt(*map(list, (data, key, iv)), **kwargs))


 BLOCK_SIZE_BYTES = 16
@@ -221,7 +220,7 @@ def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
         j0 = [*nonce, 0, 0, 0, 1]
     else:
         fill = (BLOCK_SIZE_BYTES - (len(nonce) % BLOCK_SIZE_BYTES)) % BLOCK_SIZE_BYTES + 8
-        ghash_in = nonce + [0] * fill + bytes_to_intlist((8 * len(nonce)).to_bytes(8, 'big'))
+        ghash_in = nonce + [0] * fill + list((8 * len(nonce)).to_bytes(8, 'big'))
         j0 = ghash(hash_subkey, ghash_in)

     # TODO: add nonce support to aes_ctr_decrypt
@@ -234,9 +233,9 @@ def aes_gcm_decrypt_and_verify(data, key, tag, nonce):
     s_tag = ghash(
         hash_subkey,
         data
-        + [0] * pad_len  # pad
-        + bytes_to_intlist((0 * 8).to_bytes(8, 'big')  # length of associated data
-                           + ((len(data) * 8).to_bytes(8, 'big'))),  # length of data
+        + [0] * pad_len  # pad
+        + list((0 * 8).to_bytes(8, 'big')  # length of associated data
+               + ((len(data) * 8).to_bytes(8, 'big'))),  # length of data
     )

     if tag != aes_ctr_encrypt(s_tag, key, j0):
@@ -300,8 +299,8 @@ def aes_decrypt_text(data, password, key_size_bytes):
     """
     NONCE_LENGTH_BYTES = 8

-    data = bytes_to_intlist(base64.b64decode(data))
-    password = bytes_to_intlist(password.encode())
+    data = list(base64.b64decode(data))
+    password = list(password.encode())

     key = password[:key_size_bytes] + [0] * (key_size_bytes - len(password))
     key = aes_encrypt(key[:BLOCK_SIZE_BYTES], key_expansion(key)) * (key_size_bytes // BLOCK_SIZE_BYTES)
@@ -310,7 +309,7 @@ def aes_decrypt_text(data, password, key_size_bytes):
     cipher = data[NONCE_LENGTH_BYTES:]

     decrypted_data = aes_ctr_decrypt(cipher, key, nonce + [0] * (BLOCK_SIZE_BYTES - NONCE_LENGTH_BYTES))
-    return intlist_to_bytes(decrypted_data)
+    return bytes(decrypted_data)


 RCON = (0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36)
@@ -535,19 +534,17 @@ def ghash(subkey, data):
 __all__ = [
     'aes_cbc_decrypt',
     'aes_cbc_decrypt_bytes',
-    'aes_ctr_decrypt',
-    'aes_decrypt_text',
-    'aes_decrypt',
-    'aes_ecb_decrypt',
-    'aes_gcm_decrypt_and_verify',
-    'aes_gcm_decrypt_and_verify_bytes',
-
     'aes_cbc_encrypt',
     'aes_cbc_encrypt_bytes',
+    'aes_ctr_decrypt',
     'aes_ctr_encrypt',
+    'aes_decrypt',
+    'aes_decrypt_text',
+    'aes_ecb_decrypt',
     'aes_ecb_encrypt',
     'aes_encrypt',
-
+    'aes_gcm_decrypt_and_verify',
+    'aes_gcm_decrypt_and_verify_bytes',
     'key_expansion',
     'pad_block',
     'pkcs7_padding',
@@ -1,5 +1,4 @@
 import os
-import sys
 import xml.etree.ElementTree as etree

 from .compat_utils import passthrough_module
@@ -24,33 +23,14 @@ def compat_etree_fromstring(text):
     return etree.XML(text, parser=etree.XMLParser(target=_TreeBuilder()))


-compat_os_name = os._name if os.name == 'java' else os.name
-
-
-def compat_shlex_quote(s):
-    from ..utils import shell_quote
-    return shell_quote(s)
-
-
 def compat_ord(c):
     return c if isinstance(c, int) else ord(c)


-if compat_os_name == 'nt' and sys.version_info < (3, 8):
-    # os.path.realpath on Windows does not follow symbolic links
-    # prior to Python 3.8 (see https://bugs.python.org/issue9949)
-    def compat_realpath(path):
-        while os.path.islink(path):
-            path = os.path.abspath(os.readlink(path))
-        return os.path.realpath(path)
-else:
-    compat_realpath = os.path.realpath
-
-
 # Python 3.8+ does not honor %HOME% on windows, but this breaks compatibility with youtube-dl
 # See https://github.com/yt-dlp/yt-dlp/issues/792
 # https://docs.python.org/3/library/os.path.html#os.path.expanduser
-if compat_os_name in ('nt', 'ce'):
+if os.name in ('nt', 'ce'):
     def compat_expanduser(path):
         HOME = os.environ.get('HOME')
         if not HOME:
@@ -8,16 +8,14 @@ passthrough_module(__name__, '.._legacy', callback=lambda attr: warnings.warn(
     DeprecationWarning(f'{__name__}.{attr} is deprecated'), stacklevel=6))
 del passthrough_module

-import base64
-import urllib.error
-import urllib.parse
+import functools  # noqa: F401
+import os

-compat_str = str
-
-compat_b64decode = base64.b64decode
+compat_os_name = os.name
+compat_realpath = os.path.realpath

-compat_urlparse = urllib.parse
-compat_parse_qs = urllib.parse.parse_qs
-compat_urllib_parse_unquote = urllib.parse.unquote
-compat_urllib_parse_urlencode = urllib.parse.urlencode
-compat_urllib_parse_urlparse = urllib.parse.urlparse
+
+def compat_shlex_quote(s):
+    from ..utils import shell_quote
+    return shell_quote(s)
@@ -30,7 +30,7 @@ from asyncio import run as compat_asyncio_run  # noqa: F401
 from re import Pattern as compat_Pattern  # noqa: F401
 from re import match as compat_Match  # noqa: F401

-from . import compat_expanduser, compat_HTMLParseError, compat_realpath
+from . import compat_expanduser, compat_HTMLParseError
 from .compat_utils import passthrough_module
 from ..dependencies import brotli as compat_brotli  # noqa: F401
 from ..dependencies import websockets as compat_websockets  # noqa: F401
@@ -78,7 +78,7 @@ compat_kwargs = lambda kwargs: kwargs
 compat_map = map
 compat_numeric_types = (int, float, complex)
 compat_os_path_expanduser = compat_expanduser
-compat_os_path_realpath = compat_realpath
+compat_os_path_realpath = os.path.realpath
 compat_print = print
 compat_shlex_split = shlex.split
 compat_socket_create_connection = socket.create_connection
@@ -104,5 +104,12 @@ compat_xml_parse_error = compat_xml_etree_ElementTree_ParseError = etree.ParseEr
 compat_xpath = lambda xpath: xpath
 compat_zip = zip
 workaround_optparse_bug9161 = lambda: None
+compat_str = str
+compat_b64decode = base64.b64decode
+compat_urlparse = urllib.parse
+compat_parse_qs = urllib.parse.parse_qs
+compat_urllib_parse_unquote = urllib.parse.unquote
+compat_urllib_parse_urlencode = urllib.parse.urlencode
+compat_urllib_parse_urlparse = urllib.parse.urlparse

 legacy = []
@@ -1,7 +0,0 @@
|
||||
# flake8: noqa: F405
|
||||
from functools import * # noqa: F403
|
||||
|
||||
from .compat_utils import passthrough_module
|
||||
|
||||
passthrough_module(__name__, 'functools')
|
||||
del passthrough_module
|
||||
@@ -7,9 +7,9 @@ passthrough_module(__name__, 'urllib.request')
|
||||
del passthrough_module
|
||||
|
||||
|
||||
from .. import compat_os_name
|
||||
import os
|
||||
|
||||
if compat_os_name == 'nt':
|
||||
if os.name == 'nt':
|
||||
# On older Python versions, proxies are extracted from Windows registry erroneously. [1]
|
||||
# If the https proxy in the registry does not have a scheme, urllib will incorrectly add https:// to it. [2]
|
||||
# It is unlikely that the user has actually set it to be https, so we should be fine to safely downgrade
|
||||
@@ -37,4 +37,4 @@ if compat_os_name == 'nt':
|
||||
def getproxies():
|
||||
return getproxies_environment() or getproxies_registry_patched()
|
||||
|
||||
del compat_os_name
|
||||
del os
|
||||
|
||||
@@ -25,7 +25,6 @@ from .aes import (
     aes_gcm_decrypt_and_verify_bytes,
     unpad_pkcs7,
 )
-from .compat import compat_os_name
 from .dependencies import (
     _SECRETSTORAGE_UNAVAILABLE_REASON,
     secretstorage,
@@ -196,7 +195,10 @@ def _extract_firefox_cookies(profile, container, logger):

 def _firefox_browser_dirs():
     if sys.platform in ('cygwin', 'win32'):
-        yield os.path.expandvars(R'%APPDATA%\Mozilla\Firefox\Profiles')
+        yield from map(os.path.expandvars, (
+            R'%APPDATA%\Mozilla\Firefox\Profiles',
+            R'%LOCALAPPDATA%\Packages\Mozilla.Firefox_n80bbvh6b1yt2\LocalCache\Roaming\Mozilla\Firefox\Profiles',
+        ))

     elif sys.platform == 'darwin':
         yield os.path.expanduser('~/Library/Application Support/Firefox/Profiles')
@@ -343,7 +345,7 @@ def _extract_chrome_cookies(browser_name, profile, keyring, logger):
             logger.debug(f'cookie version breakdown: {counts}')
             return jar
     except PermissionError as error:
-        if compat_os_name == 'nt' and error.errno == 13:
+        if os.name == 'nt' and error.errno == 13:
             message = 'Could not copy Chrome cookie database. See https://github.com/yt-dlp/yt-dlp/issues/7271 for more info'
             logger.error(message)
             raise DownloadError(message)  # force exit
@@ -1277,8 +1279,8 @@ class YoutubeDLCookieJar(http.cookiejar.MozillaCookieJar):
     def _really_save(self, f, ignore_discard, ignore_expires):
         now = time.time()
         for cookie in self:
-            if (not ignore_discard and cookie.discard
-                    or not ignore_expires and cookie.is_expired(now)):
+            if ((not ignore_discard and cookie.discard)
+                    or (not ignore_expires and cookie.is_expired(now))):
                 continue
             name, value = cookie.name, cookie.value
             if value is None:
@@ -24,7 +24,7 @@ try:
|
||||
from Crypto.Cipher import AES, PKCS1_OAEP, Blowfish, PKCS1_v1_5 # noqa: F401
|
||||
from Crypto.Hash import CMAC, SHA1 # noqa: F401
|
||||
from Crypto.PublicKey import RSA # noqa: F401
|
||||
except ImportError:
|
||||
except (ImportError, OSError):
|
||||
__version__ = f'broken {__version__}'.strip()
|
||||
|
||||
|
||||
|
||||
@@ -20,9 +20,7 @@ from ..utils import (
|
||||
Namespace,
|
||||
RetryManager,
|
||||
classproperty,
|
||||
decodeArgument,
|
||||
deprecation_warning,
|
||||
encodeFilename,
|
||||
format_bytes,
|
||||
join_nonempty,
|
||||
parse_bytes,
|
||||
@@ -219,7 +217,7 @@ class FileDownloader:
|
||||
def temp_name(self, filename):
|
||||
"""Returns a temporary filename for the given filename."""
|
||||
if self.params.get('nopart', False) or filename == '-' or \
|
||||
(os.path.exists(encodeFilename(filename)) and not os.path.isfile(encodeFilename(filename))):
|
||||
(os.path.exists(filename) and not os.path.isfile(filename)):
|
||||
return filename
|
||||
return filename + '.part'
|
||||
|
||||
@@ -273,7 +271,7 @@ class FileDownloader:
|
||||
"""Try to set the last-modified time of the given file."""
|
||||
if last_modified_hdr is None:
|
||||
return
|
||||
if not os.path.isfile(encodeFilename(filename)):
|
||||
if not os.path.isfile(filename):
|
||||
return
|
||||
timestr = last_modified_hdr
|
||||
if timestr is None:
|
||||
@@ -432,13 +430,13 @@ class FileDownloader:
|
||||
"""
|
||||
nooverwrites_and_exists = (
|
||||
not self.params.get('overwrites', True)
|
||||
and os.path.exists(encodeFilename(filename))
|
||||
and os.path.exists(filename)
|
||||
)
|
||||
|
||||
if not hasattr(filename, 'write'):
|
||||
continuedl_and_exists = (
|
||||
self.params.get('continuedl', True)
|
||||
and os.path.isfile(encodeFilename(filename))
|
||||
and os.path.isfile(filename)
|
||||
and not self.params.get('nopart', False)
|
||||
)
|
||||
|
||||
@@ -448,7 +446,7 @@ class FileDownloader:
|
||||
self._hook_progress({
|
||||
'filename': filename,
|
||||
'status': 'finished',
|
||||
'total_bytes': os.path.getsize(encodeFilename(filename)),
|
||||
'total_bytes': os.path.getsize(filename),
|
||||
}, info_dict)
|
||||
self._finish_multiline_status()
|
||||
return True, False
|
||||
@@ -489,9 +487,7 @@ class FileDownloader:
|
||||
if not self.params.get('verbose', False):
|
||||
return
|
||||
|
||||
str_args = [decodeArgument(a) for a in args]
|
||||
|
||||
if exe is None:
|
||||
exe = os.path.basename(str_args[0])
|
||||
exe = os.path.basename(args[0])
|
||||
|
||||
self.write_debug(f'{exe} command line: {shell_quote(str_args)}')
|
||||
self.write_debug(f'{exe} command line: {shell_quote(args)}')
|
||||
|
||||
@@ -23,7 +23,6 @@ from ..utils import (
|
||||
cli_valueless_option,
|
||||
determine_ext,
|
||||
encodeArgument,
|
||||
encodeFilename,
|
||||
find_available_port,
|
||||
remove_end,
|
||||
traverse_obj,
|
||||
@@ -67,7 +66,7 @@ class ExternalFD(FragmentFD):
|
||||
'elapsed': time.time() - started,
|
||||
}
|
||||
if filename != '-':
|
||||
fsize = os.path.getsize(encodeFilename(tmpfilename))
|
||||
fsize = os.path.getsize(tmpfilename)
|
||||
self.try_rename(tmpfilename, filename)
|
||||
status.update({
|
||||
'downloaded_bytes': fsize,
|
||||
@@ -184,9 +183,9 @@ class ExternalFD(FragmentFD):
|
||||
dest.write(decrypt_fragment(fragment, src.read()))
|
||||
src.close()
|
||||
if not self.params.get('keep_fragments', False):
|
||||
self.try_remove(encodeFilename(fragment_filename))
|
||||
self.try_remove(fragment_filename)
|
||||
dest.close()
|
||||
self.try_remove(encodeFilename(f'{tmpfilename}.frag.urls'))
|
||||
self.try_remove(f'{tmpfilename}.frag.urls')
|
||||
return 0
|
||||
|
||||
def _call_process(self, cmd, info_dict):
|
||||
@@ -620,7 +619,7 @@ class FFmpegFD(ExternalFD):
|
||||
args += self._configuration_args(('_o1', '_o', ''))
|
||||
|
||||
args = [encodeArgument(opt) for opt in args]
|
||||
args.append(encodeFilename(ffpp._ffmpeg_filename_argument(tmpfilename), True))
|
||||
args.append(ffpp._ffmpeg_filename_argument(tmpfilename))
|
||||
self._debug_cmd(args)
|
||||
|
||||
piped = any(fmt['url'] in ('-', 'pipe:') for fmt in selected_formats)
|
||||
|
||||
@@ -9,10 +9,9 @@ import time
|
||||
from .common import FileDownloader
|
||||
from .http import HttpFD
|
||||
from ..aes import aes_cbc_decrypt_bytes, unpad_pkcs7
|
||||
from ..compat import compat_os_name
|
||||
from ..networking import Request
|
||||
from ..networking.exceptions import HTTPError, IncompleteRead
|
||||
from ..utils import DownloadError, RetryManager, encodeFilename, traverse_obj
|
||||
from ..utils import DownloadError, RetryManager, traverse_obj
|
||||
from ..utils.networking import HTTPHeaderDict
|
||||
from ..utils.progress import ProgressCalculator
|
||||
|
||||
@@ -152,7 +151,7 @@ class FragmentFD(FileDownloader):
|
||||
if self.__do_ytdl_file(ctx):
|
||||
self._write_ytdl_file(ctx)
|
||||
if not self.params.get('keep_fragments', False):
|
||||
self.try_remove(encodeFilename(ctx['fragment_filename_sanitized']))
|
||||
self.try_remove(ctx['fragment_filename_sanitized'])
|
||||
del ctx['fragment_filename_sanitized']
|
||||
|
||||
def _prepare_frag_download(self, ctx):
|
||||
@@ -188,7 +187,7 @@ class FragmentFD(FileDownloader):
|
||||
})
|
||||
|
||||
if self.__do_ytdl_file(ctx):
|
||||
ytdl_file_exists = os.path.isfile(encodeFilename(self.ytdl_filename(ctx['filename'])))
|
||||
ytdl_file_exists = os.path.isfile(self.ytdl_filename(ctx['filename']))
|
||||
continuedl = self.params.get('continuedl', True)
|
||||
if continuedl and ytdl_file_exists:
|
||||
self._read_ytdl_file(ctx)
|
||||
@@ -390,7 +389,7 @@ class FragmentFD(FileDownloader):
|
||||
def __exit__(self, exc_type, exc_val, exc_tb):
|
||||
pass
|
||||
|
||||
if compat_os_name == 'nt':
|
||||
if os.name == 'nt':
|
||||
def future_result(future):
|
||||
while True:
|
||||
try:
|
||||
|
||||
@@ -16,6 +16,7 @@ from ..utils import (
|
||||
update_url_query,
|
||||
urljoin,
|
||||
)
|
||||
from ..utils._utils import _request_dump_filename
|
||||
|
||||
|
||||
class HlsFD(FragmentFD):
|
||||
@@ -72,11 +73,23 @@ class HlsFD(FragmentFD):
|
||||
|
||||
def real_download(self, filename, info_dict):
|
||||
man_url = info_dict['url']
|
||||
self.to_screen(f'[{self.FD_NAME}] Downloading m3u8 manifest')
|
||||
|
||||
urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
|
||||
man_url = urlh.url
|
||||
s = urlh.read().decode('utf-8', 'ignore')
|
||||
s = info_dict.get('hls_media_playlist_data')
|
||||
if s:
|
||||
self.to_screen(f'[{self.FD_NAME}] Using m3u8 manifest from extracted info')
|
||||
else:
|
||||
self.to_screen(f'[{self.FD_NAME}] Downloading m3u8 manifest')
|
||||
urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
|
||||
man_url = urlh.url
|
||||
s_bytes = urlh.read()
|
||||
if self.params.get('write_pages'):
|
||||
dump_filename = _request_dump_filename(
|
||||
man_url, info_dict['id'], None,
|
||||
trim_length=self.params.get('trim_file_name'))
|
||||
self.to_screen(f'[{self.FD_NAME}] Saving request to {dump_filename}')
|
||||
with open(dump_filename, 'wb') as outf:
|
||||
outf.write(s_bytes)
|
||||
s = s_bytes.decode('utf-8', 'ignore')
|
||||
|
||||
can_download, message = self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')), None
|
||||
if can_download:
|
||||
@@ -119,12 +132,12 @@ class HlsFD(FragmentFD):
|
||||
self.to_screen(f'[{self.FD_NAME}] Fragment downloads will be delegated to {real_downloader.get_basename()}')
|
||||
|
||||
def is_ad_fragment_start(s):
|
||||
return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s
|
||||
or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad'))
|
||||
return ((s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s)
|
||||
or (s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad')))
|
||||
|
||||
def is_ad_fragment_end(s):
|
||||
return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s
|
||||
or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment'))
|
||||
return ((s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s)
|
||||
or (s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment')))
|
||||
|
||||
fragments = []
|
||||
|
||||
@@ -177,6 +190,7 @@ class HlsFD(FragmentFD):
|
||||
if external_aes_iv:
|
||||
external_aes_iv = binascii.unhexlify(remove_start(external_aes_iv, '0x').zfill(32))
|
||||
byte_range = {}
|
||||
byte_range_offset = 0
|
||||
discontinuity_count = 0
|
||||
frag_index = 0
|
||||
ad_frag_next = False
|
||||
@@ -204,6 +218,11 @@ class HlsFD(FragmentFD):
|
||||
})
|
||||
media_sequence += 1
|
||||
|
||||
# If the byte_range is truthy, reset it after appending a fragment that uses it
|
||||
if byte_range:
|
||||
byte_range_offset = byte_range['end']
|
||||
byte_range = {}
|
||||
|
||||
elif line.startswith('#EXT-X-MAP'):
|
||||
if format_index and discontinuity_count != format_index:
|
||||
continue
|
||||
@@ -217,10 +236,12 @@ class HlsFD(FragmentFD):
|
||||
if extra_segment_query:
|
||||
frag_url = update_url_query(frag_url, extra_segment_query)
|
||||
|
||||
map_byte_range = {}
|
||||
|
||||
if map_info.get('BYTERANGE'):
|
||||
splitted_byte_range = map_info.get('BYTERANGE').split('@')
|
||||
sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range['end']
|
||||
byte_range = {
|
||||
sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else 0
|
||||
map_byte_range = {
|
||||
'start': sub_range_start,
|
||||
'end': sub_range_start + int(splitted_byte_range[0]),
|
||||
}
|
||||
@@ -229,7 +250,7 @@ class HlsFD(FragmentFD):
|
||||
'frag_index': frag_index,
|
||||
'url': frag_url,
|
||||
'decrypt_info': decrypt_info,
|
||||
'byte_range': byte_range,
|
||||
'byte_range': map_byte_range,
|
||||
'media_sequence': media_sequence,
|
||||
})
|
||||
media_sequence += 1
|
||||
@@ -257,7 +278,7 @@ class HlsFD(FragmentFD):
|
||||
media_sequence = int(line[22:])
|
||||
elif line.startswith('#EXT-X-BYTERANGE'):
|
||||
splitted_byte_range = line[17:].split('@')
|
||||
sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range['end']
|
||||
sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range_offset
|
||||
byte_range = {
|
||||
'start': sub_range_start,
|
||||
'end': sub_range_start + int(splitted_byte_range[0]),
|
||||
|
||||
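
Per RFC 8216 (section 4.3.2.2), #EXT-X-BYTERANGE:<n>[@<o>] gives a sub-range length n and an optional offset o; when o is absent, the range continues from the end of the previous sub-range, which is what the new byte_range_offset variable tracks between fragments. The same parsing rule as a standalone sketch (a hypothetical helper mirroring the splitted_byte_range logic above):

    def parse_byterange(value, previous_end=0):
        # value is the attribute text, e.g. '500@1000' or '500' (no offset)
        length, _, offset = value.partition('@')
        start = int(offset) if offset else previous_end
        return {'start': start, 'end': start + int(length)}

    assert parse_byterange('500@1000') == {'start': 1000, 'end': 1500}
    assert parse_byterange('500', previous_end=1500) == {'start': 1500, 'end': 2000}
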
@@ -15,7 +15,6 @@ from ..utils import (
ThrottledDownload,
XAttrMetadataError,
XAttrUnavailableError,
encodeFilename,
int_or_none,
parse_http_range,
try_call,
@@ -58,9 +57,8 @@ class HttpFD(FileDownloader):

if self.params.get('continuedl', True):
# Establish possible resume length
if os.path.isfile(encodeFilename(ctx.tmpfilename)):
ctx.resume_len = os.path.getsize(
encodeFilename(ctx.tmpfilename))
if os.path.isfile(ctx.tmpfilename):
ctx.resume_len = os.path.getsize(ctx.tmpfilename)

ctx.is_resume = ctx.resume_len > 0

@@ -241,7 +239,7 @@ class HttpFD(FileDownloader):
ctx.resume_len = byte_counter
else:
try:
ctx.resume_len = os.path.getsize(encodeFilename(ctx.tmpfilename))
ctx.resume_len = os.path.getsize(ctx.tmpfilename)
except FileNotFoundError:
ctx.resume_len = 0
raise RetryDownload(e)

@@ -8,7 +8,6 @@ from ..utils import (
Popen,
check_executable,
encodeArgument,
encodeFilename,
get_exe_version,
)

@@ -179,7 +178,7 @@ class RtmpFD(FileDownloader):
return False

while retval in (RD_INCOMPLETE, RD_FAILED) and not test and not live:
prevsize = os.path.getsize(encodeFilename(tmpfilename))
prevsize = os.path.getsize(tmpfilename)
self.to_screen(f'[rtmpdump] Downloaded {prevsize} bytes')
time.sleep(5.0) # This seems to be needed
args = [*basic_args, '--resume']
@@ -187,7 +186,7 @@ class RtmpFD(FileDownloader):
args += ['--skip', '1']
args = [encodeArgument(a) for a in args]
retval = run_rtmpdump(args)
cursize = os.path.getsize(encodeFilename(tmpfilename))
cursize = os.path.getsize(tmpfilename)
if prevsize == cursize and retval == RD_FAILED:
break
# Some rtmp streams seem abort after ~ 99.8%. Don't complain for those
@@ -196,7 +195,7 @@ class RtmpFD(FileDownloader):
retval = RD_SUCCESS
break
if retval == RD_SUCCESS or (test and retval == RD_INCOMPLETE):
fsize = os.path.getsize(encodeFilename(tmpfilename))
fsize = os.path.getsize(tmpfilename)
self.to_screen(f'[rtmpdump] Downloaded {fsize} bytes')
self.try_rename(tmpfilename, filename)
self._hook_progress({

@@ -2,7 +2,7 @@ import os
import subprocess

from .common import FileDownloader
from ..utils import check_executable, encodeFilename
from ..utils import check_executable


class RtspFD(FileDownloader):
@@ -26,7 +26,7 @@ class RtspFD(FileDownloader):

retval = subprocess.call(args)
if retval == 0:
fsize = os.path.getsize(encodeFilename(tmpfilename))
fsize = os.path.getsize(tmpfilename)
self.to_screen(f'\r[{args[0]}] {fsize} bytes')
self.try_rename(tmpfilename, filename)
self._hook_progress({

@@ -123,8 +123,8 @@ class YoutubeLiveChatFD(FragmentFD):
data,
lambda x: x['continuationContents']['liveChatContinuation'], dict) or {}

func = (info_dict['protocol'] == 'youtube_live_chat' and parse_actions_live
or frag_index == 1 and try_refresh_replay_beginning
func = ((info_dict['protocol'] == 'youtube_live_chat' and parse_actions_live)
or (frag_index == 1 and try_refresh_replay_beginning)
or parse_actions_replay)
return (True, *func(live_chat_continuation))
except HTTPError as err:

@@ -208,6 +208,10 @@ from .bandcamp import (
BandcampUserIE,
BandcampWeeklyIE,
)
from .bandlab import (
BandlabIE,
BandlabPlaylistIE,
)
from .bannedvideo import BannedVideoIE
from .bbc import (
BBCIE,
@@ -252,6 +256,7 @@ from .bilibili import (
BilibiliCheeseIE,
BilibiliCheeseSeasonIE,
BilibiliCollectionListIE,
BiliBiliDynamicIE,
BilibiliFavoritesListIE,
BiliBiliIE,
BiliBiliPlayerIE,
@@ -436,12 +441,6 @@ from .crowdbunker import (
CrowdBunkerIE,
)
from .crtvg import CrtvgIE
from .crunchyroll import (
CrunchyrollArtistIE,
CrunchyrollBetaIE,
CrunchyrollBetaShowIE,
CrunchyrollMusicIE,
)
from .cspan import (
CSpanCongressIE,
CSpanIE,
@@ -455,7 +454,10 @@ from .curiositystream import (
CuriosityStreamIE,
CuriosityStreamSeriesIE,
)
from .cwtv import CWTVIE
from .cwtv import (
CWTVIE,
CWTVMovieIE,
)
from .cybrary import (
CybraryCourseIE,
CybraryIE,
@@ -506,6 +508,7 @@ from .dfb import DFBIE
from .dhm import DHMIE
from .digitalconcerthall import DigitalConcertHallIE
from .digiteka import DigitekaIE
from .digiview import DigiviewIE
from .discogs import DiscogsReleasePlaylistIE
from .disney import DisneyIE
from .dispeak import DigitallySpeakingIE
@@ -551,6 +554,7 @@ from .dropout import (
DropoutIE,
DropoutSeasonIE,
)
from .drtalks import DrTalksIE
from .drtuber import DrTuberIE
from .drtv import (
DRTVIE,
@@ -580,6 +584,10 @@ from .egghead import (
EggheadCourseIE,
EggheadLessonIE,
)
from .eggs import (
EggsArtistIE,
EggsIE,
)
from .eighttracks import EightTracksIE
from .eitb import EitbIE
from .elementorembed import ElementorEmbedIE
@@ -695,11 +703,6 @@ from .frontendmasters import (
FrontendMastersLessonIE,
)
from .fujitv import FujiTVFODPlus7IE
from .funimation import (
FunimationIE,
FunimationPageIE,
FunimationShowIE,
)
from .funk import FunkIE
from .funker530 import Funker530IE
from .fuyintv import FuyinTVIE
@@ -708,6 +711,7 @@ from .gab import (
GabTVIE,
)
from .gaia import GaiaIE
from .gamedevtv import GameDevTVDashboardIE
from .gamejolt import (
GameJoltCommunityIE,
GameJoltGameIE,
@@ -941,6 +945,10 @@ from .kaltura import KalturaIE
from .kankanews import KankaNewsIE
from .karaoketv import KaraoketvIE
from .kelbyone import KelbyOneIE
from .kenh14 import (
Kenh14PlaylistIE,
Kenh14VideoIE,
)
from .khanacademy import (
KhanAcademyIE,
KhanAcademyUnitIE,
@@ -1130,12 +1138,6 @@ from .microsoftembed import (
MicrosoftMediusIE,
)
from .microsoftstream import MicrosoftStreamIE
from .mildom import (
MildomClipIE,
MildomIE,
MildomUserVodIE,
MildomVodIE,
)
from .minds import (
MindsChannelIE,
MindsGroupIE,
@@ -1155,6 +1157,7 @@ from .mitele import MiTeleIE
from .mixch import (
MixchArchiveIE,
MixchIE,
MixchMovieIE,
)
from .mixcloud import (
MixcloudIE,
@@ -1274,6 +1277,10 @@ from .nebula import (
)
from .nekohacker import NekoHackerIE
from .nerdcubed import NerdCubedFeedIE
from .nest import (
NestClipIE,
NestIE,
)
from .neteasemusic import (
NetEaseMusicAlbumIE,
NetEaseMusicDjRadioIE,
@@ -1516,8 +1523,8 @@ from .pgatour import PGATourIE
from .philharmoniedeparis import PhilharmonieDeParisIE
from .phoenix import PhoenixIE
from .photobucket import PhotobucketIE
from .pialive import PiaLiveIE
from .piapro import PiaproIE
from .piaulizaportal import PIAULIZAPortalIE
from .picarto import (
PicartoIE,
PicartoVodIE,
@@ -1528,6 +1535,10 @@ from .pinterest import (
PinterestCollectionIE,
PinterestIE,
)
from .piramidetv import (
PiramideTVChannelIE,
PiramideTVIE,
)
from .pixivsketch import (
PixivSketchIE,
PixivSketchUserIE,
@@ -1547,16 +1558,13 @@ from .pluralsight import (
PluralsightIE,
)
from .plutotv import PlutoTVIE
from .plvideo import PlVideoIE
from .podbayfm import (
PodbayFMChannelIE,
PodbayFMIE,
)
from .podchaser import PodchaserIE
from .podomatic import PodomaticIE
from .pokemon import (
PokemonIE,
PokemonWatchIE,
)
from .pokergo import (
PokerGoCollectionIE,
PokerGoIE,
@@ -1647,6 +1655,7 @@ from .radiokapital import (
RadioKapitalIE,
RadioKapitalShowIE,
)
from .radioradicale import RadioRadicaleIE
from .radiozet import RadioZetPodcastIE
from .radlive import (
RadLiveChannelIE,
@@ -1938,9 +1947,7 @@ from .spotify import (
)
from .spreaker import (
SpreakerIE,
SpreakerPageIE,
SpreakerShowIE,
SpreakerShowPageIE,
)
from .springboardplatform import SpringboardPlatformIE
from .sprout import SproutIE
@@ -1982,6 +1989,10 @@ from .streetvoice import StreetVoiceIE
from .stretchinternet import StretchInternetIE
from .stripchat import StripchatIE
from .stv import STVPlayerIE
from .subsplash import (
SubsplashIE,
SubsplashPlaylistIE,
)
from .substack import SubstackIE
from .sunporno import SunPornoIE
from .sverigesradio import (
@@ -2251,6 +2262,10 @@ from .ufctv import (
)
from .ukcolumn import UkColumnIE
from .uktvplay import UKTVPlayIE
from .uliza import (
UlizaPlayerIE,
UlizaPortalIE,
)
from .umg import UMGDeIE
from .unistra import UnistraIE
from .unity import UnityIE
@@ -2279,10 +2294,6 @@ from .utreon import UtreonIE
from .varzesh3 import Varzesh3IE
from .vbox7 import Vbox7IE
from .veo import VeoIE
from .veoh import (
VeohIE,
VeohUserIE,
)
from .vesti import VestiIE
from .vevo import (
VevoIE,
@@ -2355,10 +2366,6 @@ from .vimm import (
VimmIE,
VimmRecordingIE,
)
from .vine import (
VineIE,
VineUserIE,
)
from .viously import ViouslyIE
from .viqeo import ViqeoIE
from .viu import (

@@ -6,7 +6,6 @@ import hmac
import io
import json
import re
import struct
import time
import urllib.parse
import uuid
@@ -18,10 +17,8 @@ from ..networking.exceptions import TransportError
from ..utils import (
ExtractorError,
OnDemandPagedList,
bytes_to_intlist,
decode_base_n,
int_or_none,
intlist_to_bytes,
time_seconds,
traverse_obj,
update_url_query,
@@ -72,15 +69,15 @@ class AbemaLicenseRH(RequestHandler):
})

res = decode_base_n(license_response['k'], table=self._STRTABLE)
encvideokey = bytes_to_intlist(struct.pack('>QQ', res >> 64, res & 0xffffffffffffffff))
encvideokey = list(res.to_bytes(16, 'big'))

h = hmac.new(
binascii.unhexlify(self._HKEY),
(license_response['cid'] + self.ie._DEVICE_ID).encode(),
digestmod=hashlib.sha256)
enckey = bytes_to_intlist(h.digest())
enckey = list(h.digest())

return intlist_to_bytes(aes_ecb_decrypt(encvideokey, enckey))
return bytes(aes_ecb_decrypt(encvideokey, enckey))


class AbemaTVBaseIE(InfoExtractor):
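
The encvideokey change swaps a struct.pack()-based conversion for int.to_bytes(); both yield the same 16 big-endian bytes for a 128-bit value, so only the spelling changes. A quick equivalence check:

    import struct

    res = 0x0123456789abcdef_fedcba9876543210  # any 128-bit value
    old = struct.pack('>QQ', res >> 64, res & 0xffffffffffffffff)
    new = res.to_bytes(16, 'big')
    assert old == new
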
@@ -424,14 +421,15 @@ class AbemaTVIE(AbemaTVBaseIE):


class AbemaTVTitleIE(AbemaTVBaseIE):
_VALID_URL = r'https?://abema\.tv/video/title/(?P<id>[^?/]+)'
_VALID_URL = r'https?://abema\.tv/video/title/(?P<id>[^?/#]+)/?(?:\?(?:[^#]+&)?s=(?P<season>[^&#]+))?'
_PAGE_SIZE = 25

_TESTS = [{
'url': 'https://abema.tv/video/title/90-1597',
'url': 'https://abema.tv/video/title/90-1887',
'info_dict': {
'id': '90-1597',
'id': '90-1887',
'title': 'シャッフルアイランド',
'description': 'md5:61b2425308f41a5282a926edda66f178',
},
'playlist_mincount': 2,
}, {
@@ -439,41 +437,54 @@ class AbemaTVTitleIE(AbemaTVBaseIE):
'info_dict': {
'id': '193-132',
'title': '真心が届く~僕とスターのオフィス・ラブ!?~',
'description': 'md5:9b59493d1f3a792bafbc7319258e7af8',
},
'playlist_mincount': 16,
}, {
'url': 'https://abema.tv/video/title/25-102',
'url': 'https://abema.tv/video/title/25-1nzan-whrxe',
'info_dict': {
'id': '25-102',
'title': 'ソードアート・オンライン アリシゼーション',
'id': '25-1nzan-whrxe',
'title': 'ソードアート・オンライン',
'description': 'md5:c094904052322e6978495532bdbf06e6',
},
'playlist_mincount': 24,
'playlist_mincount': 25,
}, {
'url': 'https://abema.tv/video/title/26-2mzbynr-cph?s=26-2mzbynr-cph_s40',
'info_dict': {
'title': '〈物語〉シリーズ',
'id': '26-2mzbynr-cph',
'description': 'md5:e67873de1c88f360af1f0a4b84847a52',
},
'playlist_count': 59,
}]

def _fetch_page(self, playlist_id, series_version, page):
def _fetch_page(self, playlist_id, series_version, season_id, page):
query = {
'seriesVersion': series_version,
'offset': str(page * self._PAGE_SIZE),
'order': 'seq',
'limit': str(self._PAGE_SIZE),
}
if season_id:
query['seasonId'] = season_id
programs = self._call_api(
f'v1/video/series/{playlist_id}/programs', playlist_id,
note=f'Downloading page {page + 1}',
query={
'seriesVersion': series_version,
'offset': str(page * self._PAGE_SIZE),
'order': 'seq',
'limit': str(self._PAGE_SIZE),
})
query=query)
yield from (
self.url_result(f'https://abema.tv/video/episode/{x}')
for x in traverse_obj(programs, ('programs', ..., 'id')))

def _entries(self, playlist_id, series_version):
def _entries(self, playlist_id, series_version, season_id):
return OnDemandPagedList(
functools.partial(self._fetch_page, playlist_id, series_version),
functools.partial(self._fetch_page, playlist_id, series_version, season_id),
self._PAGE_SIZE)

def _real_extract(self, url):
playlist_id = self._match_id(url)
playlist_id, season_id = self._match_valid_url(url).group('id', 'season')
series_info = self._call_api(f'v1/video/series/{playlist_id}', playlist_id)

return self.playlist_result(
self._entries(playlist_id, series_info['version']), playlist_id=playlist_id,
self._entries(playlist_id, series_info['version'], season_id), playlist_id=playlist_id,
playlist_title=series_info.get('title'),
playlist_description=series_info.get('content'))

@@ -43,14 +43,14 @@ class ACastIE(ACastBaseIE):
_VALID_URL = r'''(?x:
https?://
(?:
(?:(?:embed|www)\.)?acast\.com/|
(?:(?:embed|www|shows)\.)?acast\.com/|
play\.acast\.com/s/
)
(?P<channel>[^/]+)/(?P<id>[^/#?"]+)
(?P<channel>[^/?#]+)/(?:episodes/)?(?P<id>[^/#?"]+)
)'''
_EMBED_REGEX = [rf'(?x)<iframe[^>]+\bsrc=[\'"](?P<url>{_VALID_URL})']
_TESTS = [{
'url': 'https://www.acast.com/sparpodcast/2.raggarmordet-rosterurdetforflutna',
'url': 'https://shows.acast.com/sparpodcast/episodes/2.raggarmordet-rosterurdetforflutna',
'info_dict': {
'id': '2a92b283-1a75-4ad8-8396-499c641de0d9',
'ext': 'mp3',
@@ -59,7 +59,7 @@ class ACastIE(ACastBaseIE):
'timestamp': 1477346700,
'upload_date': '20161024',
'duration': 2766,
'creator': 'Third Ear Studio',
'creators': ['Third Ear Studio'],
'series': 'Spår',
'episode': '2. Raggarmordet - Röster ur det förflutna',
'thumbnail': 'https://assets.pippa.io/shows/616ebe1886d7b1398620b943/616ebe33c7e6e70013cae7da.jpg',
@@ -74,6 +74,9 @@ class ACastIE(ACastBaseIE):
}, {
'url': 'https://play.acast.com/s/rattegangspodden/s04e09styckmordetihelenelund-del2-2',
'only_matching': True,
}, {
'url': 'https://www.acast.com/sparpodcast/2.raggarmordet-rosterurdetforflutna',
'only_matching': True,
}, {
'url': 'https://play.acast.com/s/sparpodcast/2a92b283-1a75-4ad8-8396-499c641de0d9',
'only_matching': True,
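
The widened _VALID_URL accepts the shows. subdomain and an optional episodes/ path segment. A quick check against a simplified form of the pattern from the hunk above (the play.acast.com alternative is omitted here):

    import re

    pattern = r'https?://(?:(?:embed|www|shows)\.)?acast\.com/(?P<channel>[^/?#]+)/(?:episodes/)?(?P<id>[^/#?"]+)'
    m = re.match(pattern, 'https://shows.acast.com/sparpodcast/episodes/2.raggarmordet-rosterurdetforflutna')
    assert m.group('channel') == 'sparpodcast'
    assert m.group('id') == '2.raggarmordet-rosterurdetforflutna'
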
@@ -110,7 +113,7 @@ class ACastChannelIE(ACastBaseIE):
_VALID_URL = r'''(?x)
https?://
(?:
(?:www\.)?acast\.com/|
(?:(?:www|shows)\.)?acast\.com/|
play\.acast\.com/s/
)
(?P<id>[^/#?]+)
@@ -120,12 +123,15 @@ class ACastChannelIE(ACastBaseIE):
'info_dict': {
'id': '4efc5294-5385-4847-98bd-519799ce5786',
'title': 'Today in Focus',
'description': 'md5:c09ce28c91002ce4ffce71d6504abaae',
'description': 'md5:feca253de9947634605080cd9eeea2bf',
},
'playlist_mincount': 200,
}, {
'url': 'http://play.acast.com/s/ft-banking-weekly',
'only_matching': True,
}, {
'url': 'https://shows.acast.com/sparpodcast',
'only_matching': True,
}]

@classmethod

@@ -11,11 +11,9 @@ from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
ass_subtitles_timecode,
bytes_to_intlist,
bytes_to_long,
float_or_none,
int_or_none,
intlist_to_bytes,
join_nonempty,
long_to_bytes,
parse_iso8601,
@@ -198,16 +196,16 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''

links_url = try_get(options, lambda x: x['video']['url']) or (video_base_url + 'link')
self._K = ''.join(random.choices('0123456789abcdef', k=16))
message = bytes_to_intlist(json.dumps({
message = list(json.dumps({
'k': self._K,
't': token,
}))
}).encode())

# Sometimes authentication fails for no good reason, retry with
# a different random padding
links_data = None
for _ in range(3):
padded_message = intlist_to_bytes(pkcs1pad(message, 128))
padded_message = bytes(pkcs1pad(message, 128))
n, e = self._RSA_KEY
encrypted_message = long_to_bytes(pow(bytes_to_long(padded_message), e, n))
authorization = base64.b64encode(encrypted_message).decode()
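
The retry loop above performs textbook RSA on a PKCS#1-padded block: bytes_to_long turns the 128-byte padded message into an integer, pow(m, e, n) does the modular exponentiation, and long_to_bytes serializes the ciphertext. A stdlib-only sketch of that one step (pkcs1pad and the RSA key come from the extractor; the result matches long_to_bytes up to leading zero bytes):

    def rsa_encrypt_block(padded: bytes, e: int, n: int) -> bytes:
        # Same result as long_to_bytes(pow(bytes_to_long(padded), e, n)),
        # but padded out to the byte length of the modulus
        c = pow(int.from_bytes(padded, 'big'), e, n)
        return c.to_bytes((n.bit_length() + 7) // 8, 'big')
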
@@ -234,7 +232,7 @@ Format: Marked,Start,End,Style,Name,MarginL,MarginR,MarginV,Effect,Text'''

error = self._parse_json(e.cause.response.read(), video_id)
message = error.get('message')
if e.cause.code == 403 and error.get('code') == 'player-bad-geolocation-country':
if e.cause.status == 403 and error.get('code') == 'player-bad-geolocation-country':
self.raise_geo_restricted(msg=message)
raise ExtractorError(message)
else:

@@ -1362,7 +1362,7 @@ class AdobePassIE(InfoExtractor): # XXX: Conventionally, base classes should en

def _download_webpage_handle(self, *args, **kwargs):
headers = self.geo_verification_headers()
headers.update(kwargs.get('headers', {}))
headers.update(kwargs.get('headers') or {})
kwargs['headers'] = headers
return super()._download_webpage_handle(
*args, **kwargs)

@@ -66,6 +66,14 @@ class AfreecaTVBaseIE(InfoExtractor):
extensions={'legacy_ssl': True}), display_id,
'Downloading API JSON', 'Unable to download API JSON')

@staticmethod
def _fixup_thumb(thumb_url):
if not url_or_none(thumb_url):
return None
# Core would determine_ext as 'php' from the url, so we need to provide the real ext
# See: https://github.com/yt-dlp/yt-dlp/issues/11537
return [{'url': thumb_url, 'ext': 'jpg'}]


class AfreecaTVIE(AfreecaTVBaseIE):
IE_NAME = 'soop'
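
The _fixup_thumb comment refers to extension guessing from the URL path: for a thumbnail served by a .php endpoint, guessing from the URL yields 'php', so the wrapper returns an explicit 'ext': 'jpg' instead. Roughly how such a guess behaves (a sketch of the idea, not yt-dlp's full determine_ext):

    def guess_ext(url):
        # Last dot-suffix of the path, ignoring any query string
        return url.partition('?')[0].rpartition('.')[2]

    assert guess_ext('https://example.com/SnapshotLoad.php?rowKey=abc') == 'php'
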
@@ -155,7 +163,7 @@ class AfreecaTVIE(AfreecaTVBaseIE):
'uploader': ('writer_nick', {str}),
'uploader_id': ('bj_id', {str}),
'duration': ('total_file_duration', {int_or_none(scale=1000)}),
'thumbnail': ('thumb', {url_or_none}),
'thumbnails': ('thumb', {self._fixup_thumb}),
})

entries = []
@@ -226,8 +234,7 @@ class AfreecaTVCatchStoryIE(AfreecaTVBaseIE):

return self.playlist_result(self._entries(data), video_id)

@staticmethod
def _entries(data):
def _entries(self, data):
# 'files' is always a list with 1 element
yield from traverse_obj(data, (
'data', lambda _, v: v['story_type'] == 'catch',
@@ -238,7 +245,7 @@ class AfreecaTVCatchStoryIE(AfreecaTVBaseIE):
'title': ('title', {str}),
'uploader': ('writer_nick', {str}),
'uploader_id': ('writer_id', {str}),
'thumbnail': ('thumb', {url_or_none}),
'thumbnails': ('thumb', {self._fixup_thumb}),
'timestamp': ('write_timestamp', {int_or_none}),
}))


@@ -8,10 +8,8 @@ import time
from .common import InfoExtractor
from ..aes import aes_encrypt
from ..utils import (
bytes_to_intlist,
determine_ext,
int_or_none,
intlist_to_bytes,
join_nonempty,
smuggle_url,
strip_jsonp,
@@ -234,8 +232,8 @@ class AnvatoIE(InfoExtractor):
server_time = self._server_time(access_key, video_id)
input_data = f'{server_time}~{md5_text(video_data_url)}~{md5_text(server_time)}'

auth_secret = intlist_to_bytes(aes_encrypt(
bytes_to_intlist(input_data[:64]), bytes_to_intlist(self._AUTH_KEY)))
auth_secret = bytes(aes_encrypt(
list(input_data[:64].encode()), list(self._AUTH_KEY)))
query = {
'X-Anvato-Adst-Auth': base64.b64encode(auth_secret).decode('ascii'),
'rtyp': 'fp',

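
This hunk belongs to the same cleanup seen in the Abema and ADN diffs: bytes_to_intlist/intlist_to_bytes can be dropped because list() and bytes() already perform those conversions natively:

    data = b'secret'
    assert list(data) == [115, 101, 99, 114, 101, 116]  # bytes_to_intlist equivalent
    assert bytes(list(data)) == data                    # intlist_to_bytes equivalent
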
@@ -205,6 +205,26 @@ class ArchiveOrgIE(InfoExtractor):
},
},
],
}, {
# The reviewbody is None for one of the reviews; just need to extract data without crashing
'url': 'https://archive.org/details/gd95-04-02.sbd.11622.sbeok.shnf/gd95-04-02d1t04.shn',
'info_dict': {
'id': 'gd95-04-02.sbd.11622.sbeok.shnf/gd95-04-02d1t04.shn',
'ext': 'mp3',
'title': 'Stuck Inside of Mobile with the Memphis Blues Again',
'creators': ['Grateful Dead'],
'duration': 338.31,
'track': 'Stuck Inside of Mobile with the Memphis Blues Again',
'description': 'md5:764348a470b986f1217ffd38d6ac7b72',
'display_id': 'gd95-04-02d1t04.shn',
'location': 'Pyramid Arena',
'uploader': 'jon@archive.org',
'album': '1995-04-02 - Pyramid Arena',
'upload_date': '20040519',
'track_number': 4,
'release_date': '19950402',
'timestamp': 1084927901,
},
}]

@staticmethod
@@ -335,7 +355,7 @@ class ArchiveOrgIE(InfoExtractor):
info['comments'].append({
'id': review.get('review_id'),
'author': review.get('reviewer'),
'text': str_or_none(review.get('reviewtitle'), '') + '\n\n' + review.get('reviewbody'),
'text': join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n'),
'timestamp': unified_timestamp(review.get('createdate')),
'parent': 'root'})
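
join_nonempty silently skips empty values, which the old str + '\n\n' + reviewbody concatenation could not do when reviewbody is None (the crash the new test above guards against). Assuming the documented join_nonempty(from_dict=...) behaviour:

    from yt_dlp.utils import join_nonempty

    review = {'reviewtitle': 'Great set', 'reviewbody': None}
    assert join_nonempty('reviewtitle', 'reviewbody', from_dict=review, delim='\n\n') == 'Great set'
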

yt_dlp/extractor/bandlab.py (new file, 437 lines)
@@ -0,0 +1,437 @@
from .common import InfoExtractor
from ..utils import (
ExtractorError,
float_or_none,
format_field,
int_or_none,
parse_iso8601,
parse_qs,
truncate_string,
url_or_none,
)
from ..utils.traversal import traverse_obj, value


class BandlabBaseIE(InfoExtractor):
def _call_api(self, endpoint, asset_id, **kwargs):
headers = kwargs.pop('headers', None) or {}
return self._download_json(
f'https://www.bandlab.com/api/v1.3/{endpoint}/{asset_id}',
asset_id, headers={
'accept': 'application/json',
'referer': 'https://www.bandlab.com/',
'x-client-id': 'BandLab-Web',
'x-client-version': '10.1.124',
**headers,
}, **kwargs)

def _parse_revision(self, revision_data, url=None):
return {
'vcodec': 'none',
'media_type': 'revision',
'extractor_key': BandlabIE.ie_key(),
'extractor': BandlabIE.IE_NAME,
**traverse_obj(revision_data, {
'webpage_url': (
'id', ({value(url)}, {format_field(template='https://www.bandlab.com/revision/%s')}), filter, any),
'id': (('revisionId', 'id'), {str}, any),
'title': ('song', 'name', {str}),
'track': ('song', 'name', {str}),
'url': ('mixdown', 'file', {url_or_none}),
'thumbnail': ('song', 'picture', 'url', {url_or_none}),
'description': ('description', {str}),
'uploader': ('creator', 'name', {str}),
'uploader_id': ('creator', 'username', {str}),
'timestamp': ('createdOn', {parse_iso8601}),
'duration': ('mixdown', 'duration', {float_or_none}),
'view_count': ('counters', 'plays', {int_or_none}),
'like_count': ('counters', 'likes', {int_or_none}),
'comment_count': ('counters', 'comments', {int_or_none}),
'genres': ('genres', ..., 'name', {str}),
}),
}

def _parse_track(self, track_data, url=None):
return {
'vcodec': 'none',
'media_type': 'track',
'extractor_key': BandlabIE.ie_key(),
'extractor': BandlabIE.IE_NAME,
**traverse_obj(track_data, {
'webpage_url': (
'id', ({value(url)}, {format_field(template='https://www.bandlab.com/post/%s')}), filter, any),
'id': (('revisionId', 'id'), {str}, any),
'url': ('track', 'sample', 'audioUrl', {url_or_none}),
'title': ('track', 'name', {str}),
'track': ('track', 'name', {str}),
'description': ('caption', {str}),
'thumbnail': ('track', 'picture', ('original', 'url'), {url_or_none}, any),
'view_count': ('counters', 'plays', {int_or_none}),
'like_count': ('counters', 'likes', {int_or_none}),
'comment_count': ('counters', 'comments', {int_or_none}),
'duration': ('track', 'sample', 'duration', {float_or_none}),
'uploader': ('creator', 'name', {str}),
'uploader_id': ('creator', 'username', {str}),
'timestamp': ('createdOn', {parse_iso8601}),
}),
}

def _parse_video(self, video_data, url=None):
return {
'media_type': 'video',
'extractor_key': BandlabIE.ie_key(),
'extractor': BandlabIE.IE_NAME,
**traverse_obj(video_data, {
'id': ('id', {str}),
'webpage_url': (
'id', ({value(url)}, {format_field(template='https://www.bandlab.com/post/%s')}), filter, any),
'url': ('video', 'url', {url_or_none}),
'title': ('caption', {lambda x: x.replace('\n', ' ')}, {truncate_string(left=50)}),
'description': ('caption', {str}),
'thumbnail': ('video', 'picture', 'url', {url_or_none}),
'view_count': ('video', 'counters', 'plays', {int_or_none}),
'like_count': ('video', 'counters', 'likes', {int_or_none}),
'comment_count': ('counters', 'comments', {int_or_none}),
'duration': ('video', 'duration', {float_or_none}),
'uploader': ('creator', 'name', {str}),
'uploader_id': ('creator', 'username', {str}),
}),
}


class BandlabIE(BandlabBaseIE):
_VALID_URL = [
r'https?://(?:www\.)?bandlab.com/(?P<url_type>track|post|revision)/(?P<id>[\da-f_-]+)',
r'https?://(?:www\.)?bandlab.com/(?P<url_type>embed)/\?(?:[^#]*&)?id=(?P<id>[\da-f-]+)',
]
_EMBED_REGEX = [rf'<iframe[^>]+src=[\'"](?P<url>{_VALID_URL[1]})[\'"]']
_TESTS = [{
'url': 'https://www.bandlab.com/track/04b37e88dba24967b9dac8eb8567ff39_07d7f906fc96ee11b75e000d3a428fff',
'md5': '46f7b43367dd268bbcf0bbe466753b2c',
'info_dict': {
'id': '02d7f906-fc96-ee11-b75e-000d3a428fff',
'ext': 'm4a',
'uploader_id': 'ender_milze',
'track': 'sweet black',
'description': 'composed by juanjn3737',
'timestamp': 1702171963,
'view_count': int,
'like_count': int,
'duration': 54.629999999999995,
'title': 'sweet black',
'upload_date': '20231210',
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/fa082beb-b856-4730-9170-a57e4e32cc2c/',
'genres': ['Lofi'],
'uploader': 'ender milze',
'comment_count': int,
'media_type': 'revision',
},
}, {
# Same track as above but post URL
'url': 'https://www.bandlab.com/post/07d7f906-fc96-ee11-b75e-000d3a428fff',
'md5': '46f7b43367dd268bbcf0bbe466753b2c',
'info_dict': {
'id': '02d7f906-fc96-ee11-b75e-000d3a428fff',
'ext': 'm4a',
'uploader_id': 'ender_milze',
'track': 'sweet black',
'description': 'composed by juanjn3737',
'timestamp': 1702171973,
'view_count': int,
'like_count': int,
'duration': 54.629999999999995,
'title': 'sweet black',
'upload_date': '20231210',
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/fa082beb-b856-4730-9170-a57e4e32cc2c/',
'genres': ['Lofi'],
'uploader': 'ender milze',
'comment_count': int,
'media_type': 'revision',
},
}, {
# SharedKey Example
'url': 'https://www.bandlab.com/track/048916c2-c6da-ee11-85f9-6045bd2e11f9?sharedKey=0NNWX8qYAEmI38lWAzCNDA',
'md5': '15174b57c44440e2a2008be9cae00250',
'info_dict': {
'id': '038916c2-c6da-ee11-85f9-6045bd2e11f9',
'ext': 'm4a',
'comment_count': int,
'genres': ['Other'],
'uploader_id': 'user8353034818103753',
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/51b18363-da23-4b9b-a29c-2933a3e561ca/',
'timestamp': 1709625771,
'track': 'PodcastMaerchen4b',
'duration': 468.14,
'view_count': int,
'description': 'Podcast: Neues aus der Märchenwelt',
'like_count': int,
'upload_date': '20240305',
'uploader': 'Erna Wageneder',
'title': 'PodcastMaerchen4b',
'media_type': 'revision',
},
}, {
# Different Revision selected
'url': 'https://www.bandlab.com/track/130343fc-148b-ea11-96d2-0003ffd1fc09?revId=110343fc-148b-ea11-96d2-0003ffd1fc09',
'md5': '74e055ef9325d63f37088772fbfe4454',
'info_dict': {
'id': '110343fc-148b-ea11-96d2-0003ffd1fc09',
'ext': 'm4a',
'timestamp': 1588273294,
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/users/b612e533-e4f7-4542-9f50-3fcfd8dd822c/',
'description': 'Final Revision.',
'title': 'Replay ( Instrumental)',
'uploader': 'David R Sparks',
'uploader_id': 'davesnothome69',
'view_count': int,
'comment_count': int,
'track': 'Replay ( Instrumental)',
'genres': ['Rock'],
'upload_date': '20200430',
'like_count': int,
'duration': 279.43,
'media_type': 'revision',
},
}, {
# Video
'url': 'https://www.bandlab.com/post/5cdf9036-3857-ef11-991a-6045bd36e0d9',
'md5': '8caa2ef28e86c1dacf167293cfdbeba9',
'info_dict': {
'id': '5cdf9036-3857-ef11-991a-6045bd36e0d9',
'ext': 'mp4',
'duration': 44.705,
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/videos/67c6cef1-cef6-40d3-831e-a55bc1dcb972/',
'comment_count': int,
'title': 'backing vocals',
'uploader_id': 'marliashya',
'uploader': 'auraa',
'like_count': int,
'description': 'backing vocals',
'media_type': 'video',
},
}, {
# Embed Example
'url': 'https://www.bandlab.com/embed/?blur=false&id=014de0a4-7d82-ea11-a94c-0003ffd19c0f',
'md5': 'a4ad05cb68c54faaed9b0a8453a8cf4a',
'info_dict': {
'id': '014de0a4-7d82-ea11-a94c-0003ffd19c0f',
'ext': 'm4a',
'comment_count': int,
'genres': ['Electronic'],
'uploader': 'Charlie Henson',
'timestamp': 1587328674,
'upload_date': '20200419',
'view_count': int,
'track': 'Positronic Meltdown',
'duration': 318.55,
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/87165bc3-5439-496e-b1f7-a9f13b541ff2/',
'description': 'Checkout my tracks at AOMX http://aomxsounds.com/',
'uploader_id': 'microfreaks',
'title': 'Positronic Meltdown',
'like_count': int,
'media_type': 'revision',
},
}, {
# Track without revisions available
'url': 'https://www.bandlab.com/track/55767ac51789ea11a94c0003ffd1fc09_2f007b0a37b94ec7a69bc25ae15108a5',
'md5': 'f05d68a3769952c2d9257c473e14c15f',
'info_dict': {
'id': '55767ac51789ea11a94c0003ffd1fc09_2f007b0a37b94ec7a69bc25ae15108a5',
'ext': 'm4a',
'track': 'insame',
'like_count': int,
'duration': 84.03,
'title': 'insame',
'view_count': int,
'comment_count': int,
'uploader': 'Sorakime',
'uploader_id': 'sorakime',
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/users/572a351a-0f3a-4c6a-ac39-1a5defdeeb1c/',
'timestamp': 1691162128,
'upload_date': '20230804',
'media_type': 'track',
},
}, {
'url': 'https://www.bandlab.com/revision/014de0a4-7d82-ea11-a94c-0003ffd19c0f',
'only_matching': True,
}]
_WEBPAGE_TESTS = [{
'url': 'https://phantomluigi.github.io/',
'info_dict': {
'id': 'e14223c3-7871-ef11-bdfd-000d3a980db3',
'ext': 'm4a',
'view_count': int,
'upload_date': '20240913',
'uploader_id': 'phantommusicofficial',
'timestamp': 1726194897,
'uploader': 'Phantom',
'comment_count': int,
'genres': ['Progresive Rock'],
'description': 'md5:a38cd668f7a2843295ef284114f18429',
'duration': 225.23,
'like_count': int,
'title': 'Vermilion Pt. 2 (Cover)',
'track': 'Vermilion Pt. 2 (Cover)',
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/songs/62b10750-7aef-4f42-ad08-1af52f577e97/',
'media_type': 'revision',
},
}]

def _real_extract(self, url):
display_id, url_type = self._match_valid_url(url).group('id', 'url_type')

qs = parse_qs(url)
revision_id = traverse_obj(qs, (('revId', 'id'), 0, any))
if url_type == 'revision':
revision_id = display_id

revision_data = None
if not revision_id:
post_data = self._call_api(
'posts', display_id, note='Downloading post data',
query=traverse_obj(qs, {'sharedKey': ('sharedKey', 0)}))

revision_id = traverse_obj(post_data, (('revisionId', ('revision', 'id')), {str}, any))
revision_data = traverse_obj(post_data, ('revision', {dict}))

if not revision_data and not revision_id:
post_type = post_data.get('type')
if post_type == 'Video':
return self._parse_video(post_data, url=url)
if post_type == 'Track':
return self._parse_track(post_data, url=url)
raise ExtractorError(f'Could not extract data for post type {post_type!r}')

if not revision_data:
revision_data = self._call_api(
'revisions', revision_id, note='Downloading revision data', query={'edit': 'false'})

return self._parse_revision(revision_data, url=url)


class BandlabPlaylistIE(BandlabBaseIE):
_VALID_URL = [
r'https?://(?:www\.)?bandlab.com/(?:[\w]+/)?(?P<type>albums|collections)/(?P<id>[\da-f-]+)',
r'https?://(?:www\.)?bandlab.com/(?P<type>embed)/collection/\?(?:[^#]*&)?id=(?P<id>[\da-f-]+)',
]
_EMBED_REGEX = [rf'<iframe[^>]+src=[\'"](?P<url>{_VALID_URL[1]})[\'"]']
_TESTS = [{
'url': 'https://www.bandlab.com/davesnothome69/albums/89b79ea6-de42-ed11-b495-00224845aac7',
'info_dict': {
'thumbnail': 'https://bl-prod-images.azureedge.net/v1.3/albums/69507ff3-579a-45be-afca-9e87eddec944/',
'release_date': '20221003',
'title': 'Remnants',
'album': 'Remnants',
'like_count': int,
'album_type': 'LP',
'description': 'A collection of some feel good, rock hits.',
'comment_count': int,
'view_count': int,
'id': '89b79ea6-de42-ed11-b495-00224845aac7',
'uploader': 'David R Sparks',
'uploader_id': 'davesnothome69',
},
'playlist_count': 10,
}, {
'url': 'https://www.bandlab.com/slytheband/collections/955102d4-1040-ef11-86c3-000d3a42581b',
'info_dict': {
'id': '955102d4-1040-ef11-86c3-000d3a42581b',
'timestamp': 1720762659,
'view_count': int,
'title': 'My Shit 🖤',
'uploader_id': 'slytheband',
'uploader': '𝓢𝓛𝓨',
'upload_date': '20240712',
'like_count': int,
'thumbnail': 'https://bandlabimages.azureedge.net/v1.0/collections/2c64ca12-b180-4b76-8587-7a8da76bddc8/',
},
'playlist_count': 15,
}, {
# Embeds can contain both albums and collections with the same URL pattern. This is an album
'url': 'https://www.bandlab.com/embed/collection/?id=12cc6f7f-951b-ee11-907c-00224844f303',
'info_dict': {
'id': '12cc6f7f-951b-ee11-907c-00224844f303',
'release_date': '20230706',
'description': 'This is a collection of songs I created when I had an Amiga computer.',
'view_count': int,
'title': 'Mark Salud The Amiga Collection',
'uploader_id': 'mssirmooth1962',
'comment_count': int,
'thumbnail': 'https://bl-prod-images.azureedge.net/v1.3/albums/d618bd7b-0537-40d5-bdd8-61b066e77d59/',
'like_count': int,
'uploader': 'Mark Salud',
'album': 'Mark Salud The Amiga Collection',
'album_type': 'LP',
},
'playlist_count': 24,
}, {
# Tracks without revision id
'url': 'https://www.bandlab.com/embed/collection/?id=e98aafb5-d932-ee11-b8f0-00224844c719',
'info_dict': {
'like_count': int,
'uploader_id': 'sorakime',
'comment_count': int,
'uploader': 'Sorakime',
'view_count': int,
'description': 'md5:4ec31c568a5f5a5a2b17572ea64c3825',
'release_date': '20230812',
'title': 'Art',
'album': 'Art',
'album_type': 'Album',
'id': 'e98aafb5-d932-ee11-b8f0-00224844c719',
'thumbnail': 'https://bl-prod-images.azureedge.net/v1.3/albums/20c890de-e94a-4422-828a-2da6377a13c8/',
},
'playlist_count': 13,
}, {
'url': 'https://www.bandlab.com/albums/89b79ea6-de42-ed11-b495-00224845aac7',
'only_matching': True,
}]

def _entries(self, album_data):
for post in traverse_obj(album_data, ('posts', lambda _, v: v['type'])):
post_type = post['type']
if post_type == 'Revision':
yield self._parse_revision(post.get('revision'))
elif post_type == 'Track':
yield self._parse_track(post)
elif post_type == 'Video':
yield self._parse_video(post)
else:
self.report_warning(f'Skipping unknown post type: "{post_type}"')

def _real_extract(self, url):
playlist_id, playlist_type = self._match_valid_url(url).group('id', 'type')

endpoints = {
'albums': ['albums'],
'collections': ['collections'],
'embed': ['collections', 'albums'],
}.get(playlist_type)
for endpoint in endpoints:
playlist_data = self._call_api(
endpoint, playlist_id, note=f'Downloading {endpoint[:-1]} data',
fatal=False, expected_status=404)
if not playlist_data.get('errorCode'):
playlist_type = endpoint
break
if error_code := playlist_data.get('errorCode'):
raise ExtractorError(f'Could not find playlist data. Error code: "{error_code}"')

return self.playlist_result(
self._entries(playlist_data), playlist_id,
**traverse_obj(playlist_data, {
'title': ('name', {str}),
'description': ('description', {str}),
'uploader': ('creator', 'name', {str}),
'uploader_id': ('creator', 'username', {str}),
'timestamp': ('createdOn', {parse_iso8601}),
'release_date': ('releaseDate', {lambda x: x.replace('-', '')}, filter),
'thumbnail': ('picture', ('original', 'url'), {url_or_none}, any),
'like_count': ('counters', 'likes', {int_or_none}),
'comment_count': ('counters', 'comments', {int_or_none}),
'view_count': ('counters', 'plays', {int_or_none}),
}),
**(traverse_obj(playlist_data, {
'album': ('name', {str}),
'album_type': ('type', {str}),
}) if playlist_type == 'albums' else {}))
@@ -4,7 +4,9 @@ import hashlib
import itertools
import json
import math
import random
import re
import string
import time
import urllib.parse
import uuid
@@ -18,7 +20,6 @@ from ..utils import (
InAdvancePagedList,
OnDemandPagedList,
bool_or_none,
clean_html,
determine_ext,
filter_dict,
float_or_none,
@@ -63,7 +64,7 @@ class BilibiliBaseIE(InfoExtractor):
'support_formats', lambda _, v: v['quality'] not in parsed_qualities))], delim=', ')
if missing_formats:
self.to_screen(
f'Format(s) {missing_formats} are missing; you have to login or '
f'Format(s) {missing_formats} are missing; you have to '
f'become a premium member to download them. {self._login_hint()}')

def extract_formats(self, play_info):
@@ -165,14 +166,18 @@ class BilibiliBaseIE(InfoExtractor):
params['w_rid'] = hashlib.md5(f'{query}{self._get_wbi_key(video_id)}'.encode()).hexdigest()
return params

def _download_playinfo(self, bvid, cid, headers=None, qn=None):
params = {'bvid': bvid, 'cid': cid, 'fnval': 4048}
if qn:
params['qn'] = qn
def _download_playinfo(self, bvid, cid, headers=None, query=None):
params = {'bvid': bvid, 'cid': cid, 'fnval': 4048, **(query or {})}
if self.is_logged_in:
params.pop('try_look', None)
if qn := params.get('qn'):
note = f'Downloading video format {qn} for cid {cid}'
else:
note = f'Downloading video formats for cid {cid}'

return self._download_json(
'https://api.bilibili.com/x/player/wbi/playurl', bvid,
query=self._sign_wbi(params, bvid), headers=headers,
note=f'Downloading video formats for cid {cid} {qn or ""}')['data']
query=self._sign_wbi(params, bvid), headers=headers, note=note)['data']

def json2srt(self, json_data):
srt_data = ''
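
_sign_wbi (used above) appends a w_rid checksum: the md5 hex digest of the urlencoded query parameters concatenated with a per-session WBI key. The shape of the computation, assuming the query string is the sorted urlencoded parameters and using a made-up key (the real one comes from _get_wbi_key):

    import hashlib
    import urllib.parse

    params = {'bvid': 'BV1xxxxxxxxx', 'cid': 1, 'fnval': 4048, 'wts': 1700000000}
    query = urllib.parse.urlencode(sorted(params.items()))
    params['w_rid'] = hashlib.md5((query + 'hypothetical-wbi-key').encode()).hexdigest()
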
@@ -191,7 +196,7 @@ class BilibiliBaseIE(InfoExtractor):
}

video_info = self._download_json(
'https://api.bilibili.com/x/player/v2', video_id,
'https://api.bilibili.com/x/player/wbi/v2', video_id,
query={'aid': aid, 'cid': cid} if aid else {'bvid': video_id, 'cid': cid},
note=f'Extracting subtitle info {cid}', headers=self._HEADERS)
if traverse_obj(video_info, ('data', 'need_login_subtitle')):
@@ -207,7 +212,7 @@ class BilibiliBaseIE(InfoExtractor):

def _get_chapters(self, aid, cid):
chapters = aid and cid and self._download_json(
'https://api.bilibili.com/x/player/v2', aid, query={'aid': aid, 'cid': cid},
'https://api.bilibili.com/x/player/wbi/v2', aid, query={'aid': aid, 'cid': cid},
note='Extracting chapters', fatal=False, headers=self._HEADERS)
return traverse_obj(chapters, ('data', 'view_points', ..., {
'title': 'content',
@@ -286,7 +291,7 @@ class BilibiliBaseIE(InfoExtractor):
('data', 'interaction', 'graph_version', {int_or_none}))
cid_edges = self._get_divisions(video_id, graph_version, {1: {'cid': cid}}, 1)
for cid, edges in cid_edges.items():
play_info = self._download_playinfo(video_id, cid, headers=headers)
play_info = self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1})
yield {
**metainfo,
'id': f'{video_id}_{cid}',
@@ -639,40 +644,29 @@ class BiliBiliIE(BilibiliBaseIE):
headers['Referer'] = url

initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id)

if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
self.raise_login_required()
if traverse_obj(initial_state, ('error', 'trueCode')) == -404:
raise ExtractorError(
'This video may be deleted or geo-restricted. '
'You might want to try a VPN or a proxy server (with --proxy)', expected=True)

is_festival = 'videoData' not in initial_state
if is_festival:
video_data = initial_state['videoInfo']
else:
play_info_obj = self._search_json(
r'window\.__playinfo__\s*=', webpage, 'play info', video_id, fatal=False)
if not play_info_obj:
if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
self.raise_login_required()
if traverse_obj(initial_state, ('error', 'trueCode')) == -404:
raise ExtractorError(
'This video may be deleted or geo-restricted. '
'You might want to try a VPN or a proxy server (with --proxy)', expected=True)
play_info = traverse_obj(play_info_obj, ('data', {dict}))
if not play_info:
if traverse_obj(play_info_obj, 'code') == 87007:
toast = get_element_by_class('tips-toast', webpage) or ''
msg = clean_html(
f'{get_element_by_class("belongs-to", toast) or ""},'
+ (get_element_by_class('level', toast) or ''))
raise ExtractorError(
f'This is a supporter-only video: {msg}. {self._login_hint()}', expected=True)
raise ExtractorError('Failed to extract play info')
video_data = initial_state['videoData']

video_id, title = video_data['bvid'], video_data.get('title')

# Bilibili anthologies are similar to playlists but all videos share the same video ID as the anthology itself.
page_list_json = not is_festival and traverse_obj(
page_list_json = (not is_festival and traverse_obj(
self._download_json(
'https://api.bilibili.com/x/player/pagelist', video_id,
fatal=False, query={'bvid': video_id, 'jsonp': 'jsonp'},
note='Extracting videos in anthology', headers=headers),
'data', expected_type=list) or []
'data', expected_type=list)) or []
is_anthology = len(page_list_json) > 1

part_id = int_or_none(parse_qs(url).get('p', [None])[-1])
@@ -691,8 +685,6 @@ class BiliBiliIE(BilibiliBaseIE):

festival_info = {}
if is_festival:
play_info = self._download_playinfo(video_id, cid, headers=headers)

festival_info = traverse_obj(initial_state, {
'uploader': ('videoInfo', 'upName'),
'uploader_id': ('videoInfo', 'upMid', {str_or_none}),
@@ -727,62 +719,79 @@ class BiliBiliIE(BilibiliBaseIE):
self._get_interactive_entries(video_id, cid, metainfo, headers=headers), **metainfo,
duration=traverse_obj(initial_state, ('videoData', 'duration', {int_or_none})),
__post_extractor=self.extract_comments(aid))
else:
formats = self.extract_formats(play_info)

if not traverse_obj(play_info, ('dash')):
# we only have legacy formats and need additional work
has_qn = lambda x: x in traverse_obj(formats, (..., 'quality'))
for qn in traverse_obj(play_info, ('accept_quality', lambda _, v: not has_qn(v), {int})):
formats.extend(traverse_obj(
self.extract_formats(self._download_playinfo(video_id, cid, headers=headers, qn=qn)),
lambda _, v: not has_qn(v['quality'])))
self._check_missing_formats(play_info, formats)
flv_formats = traverse_obj(formats, lambda _, v: v['fragments'])
if flv_formats and len(flv_formats) < len(formats):
# Flv and mp4 are incompatible due to `multi_video` workaround, so drop one
if not self._configuration_arg('prefer_multi_flv'):
dropped_fmts = ', '.join(
f'{f.get("format_note")} ({f.get("format_id")})' for f in flv_formats)
formats = traverse_obj(formats, lambda _, v: not v.get('fragments'))
if dropped_fmts:
self.to_screen(
f'Dropping incompatible flv format(s) {dropped_fmts} since mp4 is available. '
'To extract flv, pass --extractor-args "bilibili:prefer_multi_flv"')
else:
formats = traverse_obj(
# XXX: Filtering by extractor-arg is for testing purposes
formats, lambda _, v: v['quality'] == int(self._configuration_arg('prefer_multi_flv')[0]),
|
||||
) or [max(flv_formats, key=lambda x: x['quality'])]
|
||||
play_info = None
|
||||
if self.is_logged_in:
|
||||
play_info = traverse_obj(
|
||||
self._search_json(r'window\.__playinfo__\s*=', webpage, 'play info', video_id, default=None),
|
||||
('data', {dict}))
|
||||
if not play_info:
|
||||
play_info = self._download_playinfo(video_id, cid, headers=headers, query={'try_look': 1})
|
||||
formats = self.extract_formats(play_info)
|
||||
|
||||
if traverse_obj(formats, (0, 'fragments')):
|
||||
# We have flv formats, which are individual short videos with their own timestamps and metainfo
|
||||
# Binary concatenation corrupts their timestamps, so we need a `multi_video` workaround
|
||||
return {
|
||||
**metainfo,
|
||||
'_type': 'multi_video',
|
||||
'entries': [{
|
||||
'id': f'{metainfo["id"]}_{idx}',
|
||||
'title': metainfo['title'],
|
||||
'http_headers': metainfo['http_headers'],
|
||||
'formats': [{
|
||||
**fragment,
|
||||
'format_id': formats[0].get('format_id'),
|
||||
}],
|
||||
'subtitles': self.extract_subtitles(video_id, cid) if idx == 0 else None,
|
||||
'__post_extractor': self.extract_comments(aid) if idx == 0 else None,
|
||||
} for idx, fragment in enumerate(formats[0]['fragments'])],
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
}
|
||||
else:
|
||||
return {
|
||||
**metainfo,
|
||||
'formats': formats,
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
'chapters': self._get_chapters(aid, cid),
|
||||
'subtitles': self.extract_subtitles(video_id, cid),
|
||||
'__post_extractor': self.extract_comments(aid),
|
||||
}
|
||||
if video_data.get('is_upower_exclusive'):
|
||||
high_level = traverse_obj(initial_state, ('elecFullInfo', 'show_info', 'high_level', {dict})) or {}
|
||||
msg = f'{join_nonempty("title", "sub_title", from_dict=high_level, delim=",")}. {self._login_hint()}'
|
||||
if not formats:
|
||||
raise ExtractorError(f'This is a supporter-only video: {msg}', expected=True)
|
||||
if '试看' in traverse_obj(play_info, ('accept_description', ..., {str})):
|
||||
self.report_warning(
|
||||
f'This is a supporter-only video, only the preview will be extracted: {msg}',
|
||||
video_id=video_id)
|
||||
|
||||
if not traverse_obj(play_info, 'dash'):
|
||||
# we only have legacy formats and need additional work
|
||||
has_qn = lambda x: x in traverse_obj(formats, (..., 'quality'))
|
||||
for qn in traverse_obj(play_info, ('accept_quality', lambda _, v: not has_qn(v), {int})):
|
||||
formats.extend(traverse_obj(
|
||||
self.extract_formats(self._download_playinfo(video_id, cid, headers=headers, query={'qn': qn})),
|
||||
lambda _, v: not has_qn(v['quality'])))
|
||||
self._check_missing_formats(play_info, formats)
|
||||
flv_formats = traverse_obj(formats, lambda _, v: v['fragments'])
|
||||
if flv_formats and len(flv_formats) < len(formats):
|
||||
# Flv and mp4 are incompatible due to `multi_video` workaround, so drop one
|
||||
if not self._configuration_arg('prefer_multi_flv'):
|
||||
dropped_fmts = ', '.join(
|
||||
f'{f.get("format_note")} ({f.get("format_id")})' for f in flv_formats)
|
||||
formats = traverse_obj(formats, lambda _, v: not v.get('fragments'))
|
||||
if dropped_fmts:
|
||||
self.to_screen(
|
||||
f'Dropping incompatible flv format(s) {dropped_fmts} since mp4 is available. '
|
||||
'To extract flv, pass --extractor-args "bilibili:prefer_multi_flv"')
|
||||
else:
|
||||
formats = traverse_obj(
|
||||
# XXX: Filtering by extractor-arg is for testing purposes
|
||||
formats, lambda _, v: v['quality'] == int(self._configuration_arg('prefer_multi_flv')[0]),
|
||||
) or [max(flv_formats, key=lambda x: x['quality'])]
|
||||
|
||||
if traverse_obj(formats, (0, 'fragments')):
|
||||
# We have flv formats, which are individual short videos with their own timestamps and metainfo
|
||||
# Binary concatenation corrupts their timestamps, so we need a `multi_video` workaround
|
||||
return {
|
||||
**metainfo,
|
||||
'_type': 'multi_video',
|
||||
'entries': [{
|
||||
'id': f'{metainfo["id"]}_{idx}',
|
||||
'title': metainfo['title'],
|
||||
'http_headers': metainfo['http_headers'],
|
||||
'formats': [{
|
||||
**fragment,
|
||||
'format_id': formats[0].get('format_id'),
|
||||
}],
|
||||
'subtitles': self.extract_subtitles(video_id, cid) if idx == 0 else None,
|
||||
'__post_extractor': self.extract_comments(aid) if idx == 0 else None,
|
||||
} for idx, fragment in enumerate(formats[0]['fragments'])],
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
}
|
||||
|
||||
return {
|
||||
**metainfo,
|
||||
'formats': formats,
|
||||
'duration': float_or_none(play_info.get('timelength'), scale=1000),
|
||||
'chapters': self._get_chapters(aid, cid),
|
||||
'subtitles': self.extract_subtitles(video_id, cid),
|
||||
'__post_extractor': self.extract_comments(aid),
|
||||
}
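
The `multi_video` branch above exists because legacy FLV delivery is a series of short standalone videos: concatenating them binarily corrupts their timestamps, so each fragment is promoted to its own entry. Roughly the result shape, with hypothetical IDs and URLs:

    info = {
        '_type': 'multi_video',
        'id': 'BV1xx411c7md',  # hypothetical anthology ID
        'entries': [
            # one entry per FLV fragment, each carrying a single format
            {'id': 'BV1xx411c7md_0', 'formats': [{'url': 'https://example.com/part1.flv'}]},
            {'id': 'BV1xx411c7md_1', 'formats': [{'url': 'https://example.com/part2.flv'}]},
        ],
    }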


class BiliBiliBangumiIE(BilibiliBaseIE):
@@ -860,10 +869,16 @@ class BiliBiliBangumiIE(BilibiliBaseIE):
self.raise_login_required('This video is for premium members only')

headers['Referer'] = url
play_info = self._download_json(
'https://api.bilibili.com/pgc/player/web/v2/playurl', episode_id,
'Extracting episode', query={'fnval': '4048', 'ep_id': episode_id},
headers=headers)

play_info = (
self._search_json(
r'playurlSSRData\s*=', webpage, 'embedded page info', episode_id,
end_pattern='\n', default=None)
or self._download_json(
'https://api.bilibili.com/pgc/player/web/v2/playurl', episode_id,
'Extracting episode', query={'fnval': 12240, 'ep_id': episode_id},
headers=headers))

premium_only = play_info.get('code') == -10403
play_info = traverse_obj(play_info, ('result', 'video_info', {dict})) or {}

@@ -1164,28 +1179,26 @@ class BilibiliSpaceBaseIE(BilibiliBaseIE):


class BilibiliSpaceVideoIE(BilibiliSpaceBaseIE):
_VALID_URL = r'https?://space\.bilibili\.com/(?P<id>\d+)(?P<video>/video)?/?(?:[?#]|$)'
_VALID_URL = r'https?://space\.bilibili\.com/(?P<id>\d+)(?P<video>(?:/upload)?/video)?/?(?:[?#]|$)'
_TESTS = [{
'url': 'https://space.bilibili.com/3985676/video',
'info_dict': {
'id': '3985676',
},
'playlist_mincount': 178,
'skip': 'login required',
}, {
'url': 'https://space.bilibili.com/313580179/video',
'info_dict': {
'id': '313580179',
},
'playlist_mincount': 92,
'skip': 'login required',
}]

def _real_extract(self, url):
playlist_id, is_video_url = self._match_valid_url(url).group('id', 'video')
if not is_video_url:
self.to_screen('A channel URL was given. Only the channel\'s videos will be downloaded. '
'To download audios, add a "/audio" to the URL')
'To download audios, add a "/upload/audio" to the URL')

def fetch_page(page_idx):
query = {
@@ -1198,6 +1211,12 @@ class BilibiliSpaceVideoIE(BilibiliSpaceBaseIE):
'ps': 30,
'tid': 0,
'web_location': 1550101,
'dm_img_list': '[]',
'dm_img_str': base64.b64encode(
''.join(random.choices(string.printable, k=random.randint(16, 64))).encode())[:-2].decode(),
'dm_cover_img_str': base64.b64encode(
''.join(random.choices(string.printable, k=random.randint(32, 128))).encode())[:-2].decode(),
'dm_img_inter': '{"ds":[],"wh":[6093,6631,31],"of":[430,760,380]}',
}
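
The `dm_img_*` fields imitate the browser-fingerprint blob this endpoint expects: random printable noise, base64-encoded, with the last two characters (normally padding) sliced off. A sketch of that generation in isolation:

    import base64
    import random
    import string

    def fake_dm_field(min_len, max_len):
        # Random printable payload, base64-encoded, '[:-2]' trimmed as above
        payload = ''.join(random.choices(string.printable, k=random.randint(min_len, max_len)))
        return base64.b64encode(payload.encode())[:-2].decode()

    extras = {'dm_img_str': fake_dm_field(16, 64), 'dm_cover_img_str': fake_dm_field(32, 128)}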

try:
@@ -1208,14 +1227,14 @@ class BilibiliSpaceVideoIE(BilibiliSpaceBaseIE):
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 412:
raise ExtractorError(
'Request is blocked by server (412), please add cookies, wait and try later.', expected=True)
'Request is blocked by server (412), please wait and try later.', expected=True)
raise
status_code = response['code']
if status_code == -401:
raise ExtractorError(
'Request is blocked by server (401), please add cookies, wait and try later.', expected=True)
elif status_code == -352 and not self.is_logged_in:
self.raise_login_required('Request is rejected, you need to login to access playlist')
'Request is blocked by server (401), please wait and try later.', expected=True)
elif status_code == -352:
raise ExtractorError('Request is rejected by server (352)', expected=True)
elif status_code != 0:
raise ExtractorError(f'Request failed ({status_code}): {response.get("message") or "Unknown error"}')
return response['data']
@@ -1237,9 +1256,9 @@ class BilibiliSpaceVideoIE(BilibiliSpaceBaseIE):


class BilibiliSpaceAudioIE(BilibiliSpaceBaseIE):
_VALID_URL = r'https?://space\.bilibili\.com/(?P<id>\d+)/audio'
_VALID_URL = r'https?://space\.bilibili\.com/(?P<id>\d+)/(?:upload/)?audio'
_TESTS = [{
'url': 'https://space.bilibili.com/313580179/audio',
'url': 'https://space.bilibili.com/313580179/upload/audio',
'info_dict': {
'id': '313580179',
},
@@ -1262,7 +1281,8 @@ class BilibiliSpaceAudioIE(BilibiliSpaceBaseIE):
}

def get_entries(page_data):
for entry in page_data.get('data', []):
# data is None when the playlist is empty
for entry in page_data.get('data') or []:
yield self.url_result(f'https://www.bilibili.com/audio/au{entry["id"]}', BilibiliAudioIE, entry['id'])
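
The switch from `.get('data', [])` to `.get('data') or []` matters because a dict default only applies when the key is absent, not when it is present with value `None`:

    page_data = {'data': None}                  # empty playlist: key exists, value is None
    assert page_data.get('data', []) is None    # default unused; iterating would raise TypeError
    assert (page_data.get('data') or []) == []  # falls back to an empty iterable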

metadata, paged_list = self._extract_playlist(fetch_page, get_metadata, get_entries)
@@ -1286,30 +1306,43 @@ class BilibiliSpaceListBaseIE(BilibiliSpaceBaseIE):


class BilibiliCollectionListIE(BilibiliSpaceListBaseIE):
_VALID_URL = r'https?://space\.bilibili\.com/(?P<mid>\d+)/channel/collectiondetail/?\?sid=(?P<sid>\d+)'
_VALID_URL = [
r'https?://space\.bilibili\.com/(?P<mid>\d+)/channel/collectiondetail/?\?sid=(?P<sid>\d+)',
r'https?://space\.bilibili\.com/(?P<mid>\d+)/lists/(?P<sid>\d+)',
]
_TESTS = [{
'url': 'https://space.bilibili.com/2142762/channel/collectiondetail?sid=57445',
'url': 'https://space.bilibili.com/2142762/lists/3662502?type=season',
'info_dict': {
'id': '2142762_57445',
'title': '【完结】《底特律 变人》全结局流程解说',
'description': '',
'id': '2142762_3662502',
'title': '合集·《黑神话悟空》流程解说',
'description': '黑神话悟空 相关节目',
'uploader': '老戴在此',
'uploader_id': '2142762',
'timestamp': int,
'upload_date': str,
'thumbnail': 'https://archive.biliimg.com/bfs/archive/e0e543ae35ad3df863ea7dea526bc32e70f4c091.jpg',
'thumbnail': 'https://archive.biliimg.com/bfs/archive/22302e17dc849dd4533606d71bc89df162c3a9bf.jpg',
},
'playlist_mincount': 31,
'playlist_mincount': 62,
}, {
'url': 'https://space.bilibili.com/2142762/lists/3662502',
'only_matching': True,
}, {
'url': 'https://space.bilibili.com/2142762/channel/collectiondetail?sid=57445',
'only_matching': True,
}]

@classmethod
def suitable(cls, url):
return False if BilibiliSeriesListIE.suitable(url) else super().suitable(url)
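
Since both this IE and `BilibiliSeriesListIE` now match `/lists/<sid>` URLs, `suitable()` defers whenever the stricter series pattern (which requires `type=series`) matches too. A rough illustration of the routing, with the pattern flattened from the diff and URLs taken from the tests:

    import re

    SERIES_RE = r'https?://space\.bilibili\.com/\d+/lists/\d+/?\?(?:[^#]+&)?type=series(?:[&#]|$)'
    for url in ('https://space.bilibili.com/2142762/lists/3662502',
                'https://space.bilibili.com/2142762/lists/3662502?type=series'):
        print(url, '->', 'series' if re.match(SERIES_RE, url) else 'collection')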

def _real_extract(self, url):
mid, sid = self._match_valid_url(url).group('mid', 'sid')
playlist_id = f'{mid}_{sid}'

def fetch_page(page_idx):
return self._download_json(
'https://api.bilibili.com/x/polymer/space/seasons_archives_list',
playlist_id, note=f'Downloading page {page_idx}',
'https://api.bilibili.com/x/polymer/web-space/seasons_archives_list',
playlist_id, note=f'Downloading page {page_idx}', headers={'Referer': url},
query={'mid': mid, 'season_id': sid, 'page_num': page_idx + 1, 'page_size': 30})['data']

def get_metadata(page_data):
@@ -1336,9 +1369,12 @@ class BilibiliCollectionListIE(BilibiliSpaceListBaseIE):


class BilibiliSeriesListIE(BilibiliSpaceListBaseIE):
_VALID_URL = r'https?://space\.bilibili\.com/(?P<mid>\d+)/channel/seriesdetail/?\?\bsid=(?P<sid>\d+)'
_VALID_URL = [
r'https?://space\.bilibili\.com/(?P<mid>\d+)/channel/seriesdetail/?\?\bsid=(?P<sid>\d+)',
r'https?://space\.bilibili\.com/(?P<mid>\d+)/lists/(?P<sid>\d+)/?\?(?:[^#]+&)?type=series(?:[&#]|$)',
]
_TESTS = [{
'url': 'https://space.bilibili.com/1958703906/channel/seriesdetail?sid=547718&ctype=0',
'url': 'https://space.bilibili.com/1958703906/lists/547718?type=series',
'info_dict': {
'id': '1958703906_547718',
'title': '直播回放',
@@ -1351,6 +1387,9 @@ class BilibiliSeriesListIE(BilibiliSpaceListBaseIE):
'modified_date': str,
},
'playlist_mincount': 513,
}, {
'url': 'https://space.bilibili.com/1958703906/channel/seriesdetail?sid=547718&ctype=0',
'only_matching': True,
}]

def _real_extract(self, url):
@@ -1369,7 +1408,7 @@ class BilibiliSeriesListIE(BilibiliSpaceListBaseIE):
def fetch_page(page_idx):
return self._download_json(
'https://api.bilibili.com/x/series/archives',
playlist_id, note=f'Downloading page {page_idx}',
playlist_id, note=f'Downloading page {page_idx}', headers={'Referer': url},
query={'mid': mid, 'series_id': sid, 'pn': page_idx + 1, 'ps': 30})['data']

def get_metadata(page_data):
@@ -1848,6 +1887,47 @@ class BiliBiliPlayerIE(InfoExtractor):
ie=BiliBiliIE.ie_key(), video_id=video_id)


class BiliBiliDynamicIE(InfoExtractor):
_VALID_URL = r'https?://(?:t\.bilibili\.com|(?:www\.)?bilibili\.com/opus)/(?P<id>\d+)'
_TESTS = [{
'url': 'https://t.bilibili.com/998134289197432852',
'info_dict': {
'id': 'BV1TAmBYVEJr',
'ext': 'mp4',
'uploader_id': '1192648858',
'comment_count': int,
'_old_archive_ids': ['bilibili 113457567568273_part1'],
'thumbnail': 'http://i2.hdslb.com/bfs/archive/50091efd965d9f13ff6814f7ad374f90ab21e77d.jpg',
'duration': 929.238,
'upload_date': '20241110',
'uploader': '何同学工作室',
'like_count': int,
'view_count': int,
'title': '美国小朋友就玩这个?!何同学工作室11月开箱',
'description': '本期产品信息:\n机器狗\n气味模拟器\nCloudboom Strike LS\n无弦吉他\n蓝牙磁带音箱\n神奇画板',
'timestamp': 1731232800,
'tags': list,
'chapters': list,
},
}]

def _real_extract(self, url):
post_id = self._match_id(url)
# Without the newer chrome UA, the API will return an error (-352)
post_data = self._download_json(
'https://api.bilibili.com/x/polymer/web-dynamic/v1/detail', post_id,
query={'id': post_id}, headers={
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36',
})
video_url = traverse_obj(post_data, (
'data', 'item', (None, 'orig'), 'modules', 'module_dynamic',
(('major', ('archive', 'pgc')), ('additional', ('reserve', 'common'))),
'jump_url', {url_or_none}, any, {self._proto_relative_url}))
if not video_url or (self.suitable(video_url) and post_id == self._match_id(video_url)):
raise ExtractorError('No valid video URL found', expected=True)
return self.url_result(video_url)
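
The traversal walks both the post and (for reposts) its `orig` item, then probes the `major` (archive/pgc) and `additional` (reserve/common) embeds for a `jump_url`. A toy payload showing the happy path, values hypothetical:

    post_data = {'data': {'item': {'modules': {'module_dynamic': {
        'major': {'archive': {'jump_url': '//www.bilibili.com/video/BV1TAmBYVEJr/'}},
    }}}}}
    video_url = post_data['data']['item']['modules']['module_dynamic']['major']['archive']['jump_url']
    assert video_url.startswith('//')  # _proto_relative_url then upgrades this to https://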


class BiliIntlBaseIE(InfoExtractor):
_API_URL = 'https://api.bilibili.tv/intl/gateway'
_NETRC_MACHINE = 'biliintl'

@@ -88,7 +88,7 @@ class BlueskyIE(InfoExtractor):
},
}, {
'url': 'https://bsky.app/profile/de1.pds.tentacle.expert/post/3l3w4tnezek2e',
'md5': '1af9c7fda061cf7593bbffca89e43d1c',
'md5': 'cc0110ed1f6b0247caac8234cc1e861d',
'info_dict': {
'id': '3l3w4tnezek2e',
'ext': 'mp4',
@@ -133,6 +133,8 @@ class BlueskyIE(InfoExtractor):
'channel_follower_count': int,
'categories': ['Entertainment'],
'tags': [],
'chapters': list,
'heatmap': 'count:100',
},
'add_ie': ['Youtube'],
}, {
@@ -184,14 +186,14 @@ class BlueskyIE(InfoExtractor):
},
},
}, {
'url': 'https://bsky.app/profile/alt.bun.how/post/3l7rdfxhyds2f',
'url': 'https://bsky.app/profile/cinny.bun.how/post/3l7rdfxhyds2f',
'md5': '8775118b235cf9fa6b5ad30f95cda75c',
'info_dict': {
'id': '3l7rdfxhyds2f',
'ext': 'mp4',
'uploader': 'cinnamon',
'uploader_id': 'alt.bun.how',
'uploader_url': 'https://bsky.app/profile/alt.bun.how',
'uploader_id': 'cinny.bun.how',
'uploader_url': 'https://bsky.app/profile/cinny.bun.how',
'channel_id': 'did:plc:7x6rtuenkuvxq3zsvffp2ide',
'channel_url': 'https://bsky.app/profile/did:plc:7x6rtuenkuvxq3zsvffp2ide',
'thumbnail': r're:https://video.bsky.app/watch/.*\.jpg$',
@@ -284,17 +286,19 @@ class BlueskyIE(InfoExtractor):
services, ('service', lambda _, x: x['type'] == 'AtprotoPersonalDataServer',
'serviceEndpoint', {url_or_none}, any)) or 'https://bsky.social'

def _real_extract(self, url):
handle, video_id = self._match_valid_url(url).group('handle', 'id')

post = self._download_json(
def _extract_post(self, handle, post_id):
return self._download_json(
'https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread',
video_id, query={
'uri': f'at://{handle}/app.bsky.feed.post/{video_id}',
post_id, query={
'uri': f'at://{handle}/app.bsky.feed.post/{post_id}',
'depth': 0,
'parentHeight': 0,
})['thread']['post']
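
`_extract_post` resolves a post through the public XRPC endpoint using an `at://` URI built from the handle and the record key. The same lookup with plain urllib, for illustration (the extractor itself goes through `_download_json`):

    import json
    import urllib.parse
    import urllib.request

    def get_post_thread(handle, post_id):
        uri = f'at://{handle}/app.bsky.feed.post/{post_id}'
        url = ('https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread?'
               + urllib.parse.urlencode({'uri': uri, 'depth': 0, 'parentHeight': 0}))
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)['thread']['post']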

def _real_extract(self, url):
handle, video_id = self._match_valid_url(url).group('handle', 'id')
post = self._extract_post(handle, video_id)

entries = []
# app.bsky.embed.video.view/app.bsky.embed.external.view
entries.extend(self._extract_videos(post, video_id))
@@ -341,6 +345,7 @@ class BlueskyIE(InfoExtractor):

formats.append({
'format_id': 'blob',
'quality': 1,
'url': update_url_query(
self._BLOB_URL_TMPL.format(endpoint), {'did': did, 'cid': video_cid}),
**traverse_obj(root, (*embed_path, 'aspectRatio', {

@@ -31,6 +31,7 @@ from ..utils import (
update_url_query,
url_or_none,
)
from ..utils.traversal import traverse_obj


class BrightcoveLegacyIE(InfoExtractor):
@@ -935,8 +936,8 @@ class BrightcoveNewIE(BrightcoveNewBaseIE):

if content_type == 'playlist':
return self.playlist_result(
[self._parse_brightcove_metadata(vid, vid.get('id'), headers)
for vid in json_data.get('videos', []) if vid.get('id')],
(self._parse_brightcove_metadata(vid, vid['id'], headers)
for vid in traverse_obj(json_data, ('videos', lambda _, v: v['id']))),
json_data.get('id'), json_data.get('name'),
json_data.get('description'))
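
The rewritten playlist branch filters with a `traverse_obj` predicate instead of a comprehension guard; dicts whose `id` is missing or falsy are simply dropped. In miniature (sample data hypothetical):

    from yt_dlp.utils.traversal import traverse_obj

    data = {'videos': [{'id': 'v1'}, {'name': 'no id'}, {'id': 'v2'}]}
    assert traverse_obj(data, ('videos', lambda _, v: v['id'], 'id')) == ['v1', 'v2']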


@@ -14,16 +14,18 @@ from ..utils import (
js_to_json,
mimetype2ext,
orderedSet,
parse_age_limit,
parse_iso8601,
replace_extension,
smuggle_url,
strip_or_none,
traverse_obj,
try_get,
unified_timestamp,
update_url,
url_basename,
url_or_none,
)
from ..utils.traversal import require, traverse_obj, trim_str


class CBCIE(InfoExtractor):
@@ -516,9 +518,43 @@ class CBCPlayerPlaylistIE(InfoExtractor):
return self.playlist_result(entries(), playlist_id)


class CBCGemIE(InfoExtractor):
class CBCGemBaseIE(InfoExtractor):
_NETRC_MACHINE = 'cbcgem'
_GEO_COUNTRIES = ['CA']

def _call_show_api(self, item_id, display_id=None):
return self._download_json(
f'https://services.radio-canada.ca/ott/catalog/v2/gem/show/{item_id}',
display_id or item_id, query={'device': 'web'})

def _extract_item_info(self, item_info):
episode_number = None
title = traverse_obj(item_info, ('title', {str}))
if title and (mobj := re.match(r'(?P<episode>\d+)\. (?P<title>.+)', title)):
episode_number = int_or_none(mobj.group('episode'))
title = mobj.group('title')

return {
'episode_number': episode_number,
**traverse_obj(item_info, {
'id': ('url', {str}),
'episode_id': ('url', {str}),
'description': ('description', {str}),
'thumbnail': ('images', 'card', 'url', {url_or_none}, {update_url(query=None)}),
'episode_number': ('episodeNumber', {int_or_none}),
'duration': ('metadata', 'duration', {int_or_none}),
'release_timestamp': ('metadata', 'airDate', {unified_timestamp}),
'timestamp': ('metadata', 'availabilityDate', {unified_timestamp}),
'age_limit': ('metadata', 'rating', {trim_str(start='C')}, {parse_age_limit}),
}),
'episode': title,
'title': title,
}
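
Gem's catalog often prefixes episode titles with their number ('1. Pilot' style), so the regex above peels that off before the API's own `episodeNumber` is consulted. For example (title hypothetical):

    import re

    mobj = re.match(r'(?P<episode>\d+)\. (?P<title>.+)', '6. Smoke Signals')
    assert mobj.group('episode') == '6'
    assert mobj.group('title') == 'Smoke Signals'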


class CBCGemIE(CBCGemBaseIE):
IE_NAME = 'gem.cbc.ca'
_VALID_URL = r'https?://gem\.cbc\.ca/(?:media/)?(?P<id>[0-9a-z-]+/s[0-9]+[a-z][0-9]+)'
_VALID_URL = r'https?://gem\.cbc\.ca/(?:media/)?(?P<id>[0-9a-z-]+/s(?P<season>[0-9]+)[a-z][0-9]+)'
_TESTS = [{
# This is a normal, public, TV show video
'url': 'https://gem.cbc.ca/media/schitts-creek/s06e01',
@@ -529,7 +565,7 @@ class CBCGemIE(InfoExtractor):
'description': 'md5:929868d20021c924020641769eb3e7f1',
'thumbnail': r're:https://images\.radio-canada\.ca/[^#?]+/cbc_schitts_creek_season_06e01_thumbnail_v01\.jpg',
'duration': 1324,
'categories': ['comedy'],
'genres': ['Comédie et humour'],
'series': 'Schitt\'s Creek',
'season': 'Season 6',
'season_number': 6,
@@ -537,9 +573,10 @@ class CBCGemIE(InfoExtractor):
'episode_number': 1,
'episode_id': 'schitts-creek/s06e01',
'upload_date': '20210618',
'timestamp': 1623988800,
'timestamp': 1623974400,
'release_date': '20200107',
'release_timestamp': 1578427200,
'release_timestamp': 1578355200,
'age_limit': 14,
},
'params': {'format': 'bv'},
}, {
@@ -557,12 +594,13 @@ class CBCGemIE(InfoExtractor):
'episode_number': 1,
'episode': 'The Cup Runneth Over',
'episode_id': 'schitts-creek/s01e01',
'duration': 1309,
'categories': ['comedy'],
'duration': 1308,
'genres': ['Comédie et humour'],
'upload_date': '20210617',
'timestamp': 1623902400,
'release_date': '20151124',
'release_timestamp': 1448323200,
'timestamp': 1623888000,
'release_date': '20151123',
'release_timestamp': 1448236800,
'age_limit': 14,
},
'params': {'format': 'bv'},
}, {
@@ -570,9 +608,7 @@ class CBCGemIE(InfoExtractor):
'only_matching': True,
}]

_GEO_COUNTRIES = ['CA']
_TOKEN_API_KEY = '3f4beddd-2061-49b0-ae80-6f1f2ed65b37'
_NETRC_MACHINE = 'cbcgem'
_claims_token = None

def _new_claims_token(self, email, password):
@@ -634,10 +670,12 @@ class CBCGemIE(InfoExtractor):
self._claims_token = self.cache.load(self._NETRC_MACHINE, 'claims_token')

def _real_extract(self, url):
video_id = self._match_id(url)
video_info = self._download_json(
f'https://services.radio-canada.ca/ott/cbc-api/v2/assets/{video_id}',
video_id, expected_status=426)
video_id, season_number = self._match_valid_url(url).group('id', 'season')
video_info = self._call_show_api(video_id)
item_info = traverse_obj(video_info, (
'content', ..., 'lineups', ..., 'items',
lambda _, v: v['url'] == video_id, any, {require('item info')}))
media_id = item_info['idMedia']

email, password = self._get_login_info()
if email and password:
@@ -645,7 +683,20 @@ class CBCGemIE(InfoExtractor):
headers = {'x-claims-token': claims_token}
else:
headers = {}
m3u8_info = self._download_json(video_info['playSession']['url'], video_id, headers=headers)

m3u8_info = self._download_json(
'https://services.radio-canada.ca/media/validation/v2/',
video_id, headers=headers, query={
'appCode': 'gem',
'connectionType': 'hd',
'deviceType': 'ipad',
'multibitrate': 'true',
'output': 'json',
'tech': 'hls',
'manifestVersion': '2',
'manifestType': 'desktop',
'idMedia': media_id,
})

if m3u8_info.get('errorCode') == 1:
self.raise_geo_restricted(countries=['CA'])
@@ -671,26 +722,20 @@ class CBCGemIE(InfoExtractor):
fmt['preference'] = -2

return {
'season_number': int_or_none(season_number),
**traverse_obj(video_info, {
'series': ('title', {str}),
'season_number': ('structuredMetadata', 'partofSeason', 'seasonNumber', {int_or_none}),
'genres': ('structuredMetadata', 'genre', ..., {str}),
}),
**self._extract_item_info(item_info),
'id': video_id,
'episode_id': video_id,
'formats': formats,
**traverse_obj(video_info, {
'title': ('title', {str}),
'episode': ('title', {str}),
'description': ('description', {str}),
'thumbnail': ('image', {url_or_none}),
'series': ('series', {str}),
'season_number': ('season', {int_or_none}),
'episode_number': ('episode', {int_or_none}),
'duration': ('duration', {int_or_none}),
'categories': ('category', {str}, all),
'release_timestamp': ('airDate', {int_or_none(scale=1000)}),
'timestamp': ('availableDate', {int_or_none(scale=1000)}),
}),
}


class CBCGemPlaylistIE(InfoExtractor):
class CBCGemPlaylistIE(CBCGemBaseIE):
IE_NAME = 'gem.cbc.ca:playlist'
_VALID_URL = r'https?://gem\.cbc\.ca/(?:media/)?(?P<id>(?P<show>[0-9a-z-]+)/s(?P<season>[0-9]+))/?(?:[?#]|$)'
_TESTS = [{
@@ -700,70 +745,35 @@ class CBCGemPlaylistIE(InfoExtractor):
'info_dict': {
'id': 'schitts-creek/s06',
'title': 'Season 6',
'description': 'md5:6a92104a56cbeb5818cc47884d4326a2',
'series': 'Schitt\'s Creek',
'season_number': 6,
'season': 'Season 6',
'thumbnail': 'https://images.radio-canada.ca/v1/synps-cbc/season/perso/cbc_schitts_creek_season_06_carousel_v03.jpg?impolicy=ott&im=Resize=(_Size_)&quality=75',
},
}, {
'url': 'https://gem.cbc.ca/schitts-creek/s06',
'only_matching': True,
}]
_API_BASE = 'https://services.radio-canada.ca/ott/cbc-api/v2/shows/'

def _entries(self, season_info):
for episode in traverse_obj(season_info, ('items', lambda _, v: v['url'])):
yield self.url_result(
f'https://gem.cbc.ca/media/{episode["url"]}', CBCGemIE,
**self._extract_item_info(episode))

def _real_extract(self, url):
match = self._match_valid_url(url)
season_id = match.group('id')
show = match.group('show')
show_info = self._download_json(self._API_BASE + show, season_id, expected_status=426)
season = int(match.group('season'))
season_id, show, season = self._match_valid_url(url).group('id', 'show', 'season')
show_info = self._call_show_api(show, display_id=season_id)
season_info = traverse_obj(show_info, (
'content', ..., 'lineups',
lambda _, v: v['seasonNumber'] == int(season), any, {require('season info')}))

season_info = next((s for s in show_info['seasons'] if s.get('season') == season), None)

if season_info is None:
raise ExtractorError(f'Couldn\'t find season {season} of {show}')

episodes = []
for episode in season_info['assets']:
episodes.append({
'_type': 'url_transparent',
'ie_key': 'CBCGem',
'url': 'https://gem.cbc.ca/media/' + episode['id'],
'id': episode['id'],
'title': episode.get('title'),
'description': episode.get('description'),
'thumbnail': episode.get('image'),
'series': episode.get('series'),
'season_number': episode.get('season'),
'season': season_info['title'],
'season_id': season_info.get('id'),
'episode_number': episode.get('episode'),
'episode': episode.get('title'),
'episode_id': episode['id'],
'duration': episode.get('duration'),
'categories': [episode.get('category')],
})

thumbnail = None
tn_uri = season_info.get('image')
# the-national was observed to use a "data:image/png;base64"
# URI for their 'image' value. The image was 1x1, and is
# probably just a placeholder, so it is ignored.
if tn_uri is not None and not tn_uri.startswith('data:'):
thumbnail = tn_uri

return {
'_type': 'playlist',
'entries': episodes,
'id': season_id,
'title': season_info['title'],
'description': season_info.get('description'),
'thumbnail': thumbnail,
'series': show_info.get('title'),
'season_number': season_info.get('season'),
'season': season_info['title'],
}
return self.playlist_result(
self._entries(season_info), season_id,
**traverse_obj(season_info, {
'title': ('title', {str}),
'season': ('title', {str}),
'season_number': ('seasonNumber', {int_or_none}),
}), series=traverse_obj(show_info, ('title', {str})))
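
The season lookup above chains a predicate, `any`, and `require`: the predicate keeps matching lineups, `any` takes the first hit, and `require` turns an empty result into an ExtractorError instead of a silent `None`. A self-contained sketch with a toy payload:

    from yt_dlp.utils.traversal import require, traverse_obj

    show_info = {'content': [{'lineups': [{'seasonNumber': 6, 'title': 'Season 6'}]}]}
    season_info = traverse_obj(show_info, (
        'content', ..., 'lineups', lambda _, v: v['seasonNumber'] == 6,
        any, {require('season info')}))
    assert season_info['title'] == 'Season 6'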


class CBCGemLiveIE(InfoExtractor):

@@ -5,11 +5,12 @@ from ..utils import (
ExtractorError,
lowercase_escape,
url_or_none,
urlencode_postdata,
)


class ChaturbateIE(InfoExtractor):
_VALID_URL = r'https?://(?:[^/]+\.)?chaturbate\.com/(?:fullvideo/?\?.*?\bb=)?(?P<id>[^/?&#]+)'
_VALID_URL = r'https?://(?:[^/]+\.)?chaturbate\.(?P<tld>com|eu|global)/(?:fullvideo/?\?.*?\bb=)?(?P<id>[^/?&#]+)'
_TESTS = [{
'url': 'https://www.chaturbate.com/siswet19/',
'info_dict': {
@@ -29,16 +30,58 @@ class ChaturbateIE(InfoExtractor):
}, {
'url': 'https://en.chaturbate.com/siswet19/',
'only_matching': True,
}, {
'url': 'https://chaturbate.eu/siswet19/',
'only_matching': True,
}, {
'url': 'https://chaturbate.eu/fullvideo/?b=caylin',
'only_matching': True,
}, {
'url': 'https://chaturbate.global/siswet19/',
'only_matching': True,
}]

_ROOM_OFFLINE = 'Room is currently offline'
_ERROR_MAP = {
'offline': 'Room is currently offline',
'private': 'Room is currently in a private show',
'away': 'Performer is currently away',
'password protected': 'Room is password protected',
'hidden': 'Hidden session in progress',
}

def _real_extract(self, url):
video_id = self._match_id(url)
def _extract_from_api(self, video_id, tld):
response = self._download_json(
f'https://chaturbate.{tld}/get_edge_hls_url_ajax/', video_id,
data=urlencode_postdata({'room_slug': video_id}),
headers={
**self.geo_verification_headers(),
'X-Requested-With': 'XMLHttpRequest',
'Accept': 'application/json',
}, fatal=False, impersonate=True) or {}

m3u8_url = response.get('url')
if not m3u8_url:
status = response.get('room_status')
if error := self._ERROR_MAP.get(status):
raise ExtractorError(error, expected=True)
if status == 'public':
self.raise_geo_restricted()
self.report_warning(f'Got status "{status}" from API; falling back to webpage extraction')
return None

return {
'id': video_id,
'title': video_id,
'thumbnail': f'https://roomimg.stream.highwebmedia.com/ri/{video_id}.jpg',
'is_live': True,
'age_limit': 18,
'formats': self._extract_m3u8_formats(m3u8_url, video_id, ext='mp4', live=True),
}

def _extract_from_html(self, video_id, tld):
webpage = self._download_webpage(
f'https://chaturbate.com/{video_id}/', video_id,
headers=self.geo_verification_headers())
f'https://chaturbate.{tld}/{video_id}/', video_id,
headers=self.geo_verification_headers(), impersonate=True)

found_m3u8_urls = []

@@ -76,8 +119,8 @@ class ChaturbateIE(InfoExtractor):
webpage, 'error', group='error', default=None)
if not error:
if any(p in webpage for p in (
self._ROOM_OFFLINE, 'offline_tipping', 'tip_offline')):
error = self._ROOM_OFFLINE
self._ERROR_MAP['offline'], 'offline_tipping', 'tip_offline')):
error = self._ERROR_MAP['offline']
if error:
raise ExtractorError(error, expected=True)
raise ExtractorError('Unable to find stream URL')
@@ -104,3 +147,7 @@ class ChaturbateIE(InfoExtractor):
'is_live': True,
'formats': formats,
}

def _real_extract(self, url):
video_id, tld = self._match_valid_url(url).group('id', 'tld')
return self._extract_from_api(video_id, tld) or self._extract_from_html(video_id, tld)
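
`_extract_from_api` returns `None` rather than raising when the room status is unrecognized, so the `or` above transparently falls back to webpage scraping; only statuses in `_ERROR_MAP` abort early. The dispatch in miniature:

    ERROR_MAP = {'offline': 'Room is currently offline'}

    def from_api(status):
        if error := ERROR_MAP.get(status):
            raise RuntimeError(error)   # known fatal status
        return None                     # unknown status: soft failure

    def from_html():
        return {'id': 'fallback'}

    assert (from_api('some-new-status') or from_html())['id'] == 'fallback'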

@@ -8,7 +8,7 @@ class CloudflareStreamIE(InfoExtractor):
_DOMAIN_RE = r'(?:cloudflarestream\.com|(?:videodelivery|bytehighway)\.net)'
_EMBED_RE = rf'(?:embed\.|{_SUBDOMAIN_RE}){_DOMAIN_RE}/embed/[^/?#]+\.js\?(?:[^#]+&)?video='
_ID_RE = r'[\da-f]{32}|eyJ[\w-]+\.[\w-]+\.[\w-]+'
_VALID_URL = rf'https?://(?:{_SUBDOMAIN_RE}{_DOMAIN_RE}/|{_EMBED_RE})(?P<id>{_ID_RE})'
_VALID_URL = rf'https?://(?:{_SUBDOMAIN_RE}(?P<domain>{_DOMAIN_RE})/|{_EMBED_RE})(?P<id>{_ID_RE})'
_EMBED_REGEX = [
rf'<script[^>]+\bsrc=(["\'])(?P<url>(?:https?:)?//{_EMBED_RE}(?:{_ID_RE})(?:(?!\1).)*)\1',
rf'<iframe[^>]+\bsrc=["\'](?P<url>https?://{_SUBDOMAIN_RE}{_DOMAIN_RE}/[\da-f]{{32}})',
@@ -19,7 +19,7 @@ class CloudflareStreamIE(InfoExtractor):
'id': '31c9291ab41fac05471db4e73aa11717',
'ext': 'mp4',
'title': '31c9291ab41fac05471db4e73aa11717',
'thumbnail': 'https://videodelivery.net/31c9291ab41fac05471db4e73aa11717/thumbnails/thumbnail.jpg',
'thumbnail': 'https://cloudflarestream.com/31c9291ab41fac05471db4e73aa11717/thumbnails/thumbnail.jpg',
},
'params': {
'skip_download': 'm3u8',
@@ -30,7 +30,7 @@ class CloudflareStreamIE(InfoExtractor):
'id': '0e8e040aec776862e1d632a699edf59e',
'ext': 'mp4',
'title': '0e8e040aec776862e1d632a699edf59e',
'thumbnail': 'https://videodelivery.net/0e8e040aec776862e1d632a699edf59e/thumbnails/thumbnail.jpg',
'thumbnail': 'https://cloudflarestream.com/0e8e040aec776862e1d632a699edf59e/thumbnails/thumbnail.jpg',
},
}, {
'url': 'https://watch.cloudflarestream.com/9df17203414fd1db3e3ed74abbe936c1',
@@ -54,7 +54,7 @@ class CloudflareStreamIE(InfoExtractor):
'id': 'eaef9dea5159cf968be84241b5cedfe7',
'ext': 'mp4',
'title': 'eaef9dea5159cf968be84241b5cedfe7',
'thumbnail': 'https://videodelivery.net/eaef9dea5159cf968be84241b5cedfe7/thumbnails/thumbnail.jpg',
'thumbnail': 'https://cloudflarestream.com/eaef9dea5159cf968be84241b5cedfe7/thumbnails/thumbnail.jpg',
},
'params': {
'skip_download': 'm3u8',
@@ -62,8 +62,9 @@ class CloudflareStreamIE(InfoExtractor):
}]

def _real_extract(self, url):
video_id = self._match_id(url)
domain = 'bytehighway.net' if 'bytehighway.net/' in url else 'videodelivery.net'
video_id, domain = self._match_valid_url(url).group('id', 'domain')
if domain != 'bytehighway.net':
domain = 'cloudflarestream.com'
base_url = f'https://{domain}/{video_id}/'
if '.' in video_id:
video_id = self._parse_json(base64.urlsafe_b64decode(
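
The new `(?P<domain>...)` capture replaces substring sniffing of the URL; whatever matched is then normalized so that only `bytehighway.net` survives, with `videodelivery.net` treated as an alias of `cloudflarestream.com`. The same logic in isolation:

    def normalize_domain(domain):
        return domain if domain == 'bytehighway.net' else 'cloudflarestream.com'

    assert normalize_domain('videodelivery.net') == 'cloudflarestream.com'
    assert normalize_domain('bytehighway.net') == 'bytehighway.net'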

@@ -2,7 +2,6 @@ import base64
import collections
import functools
import getpass
import hashlib
import http.client
import http.cookiejar
import http.cookies
@@ -25,7 +24,6 @@ import xml.etree.ElementTree
from ..compat import (
compat_etree_fromstring,
compat_expanduser,
compat_os_name,
urllib_req_to_req,
)
from ..cookies import LenientSimpleCookie
@@ -79,7 +77,6 @@ from ..utils import (
parse_iso8601,
parse_m3u8_attributes,
parse_resolution,
sanitize_filename,
sanitize_url,
smuggle_url,
str_or_none,
@@ -101,6 +98,7 @@ from ..utils import (
xpath_text,
xpath_with_ns,
)
from ..utils._utils import _request_dump_filename


class InfoExtractor:
@@ -202,6 +200,11 @@ class InfoExtractor:
fragment_base_url
* "duration" (optional, int or float)
* "filesize" (optional, int)
* hls_media_playlist_data
The M3U8 media playlist data as a string.
Only use if the data must be modified during extraction and
the native HLS downloader should bypass requesting the URL.
Does not apply if ffmpeg is used as external downloader
* is_from_start Is a live format that can be downloaded
from the start. Boolean
* preference Order number of this format. If this field is
@@ -279,6 +282,7 @@ class InfoExtractor:
thumbnails: A list of dictionaries, with the following entries:
* "id" (optional, string) - Thumbnail format ID
* "url"
* "ext" (optional, string) - actual image extension if not given in URL
* "preference" (optional, int) - quality of the image
* "width" (optional, int)
* "height" (optional, int)
@@ -1017,23 +1021,6 @@ class InfoExtractor:
'Visit http://blocklist.rkn.gov.ru/ for a block reason.',
expected=True)

def _request_dump_filename(self, url, video_id, data=None):
if data is not None:
data = hashlib.md5(data).hexdigest()
basen = join_nonempty(video_id, data, url, delim='_')
trim_length = self.get_param('trim_file_name') or 240
if len(basen) > trim_length:
h = '___' + hashlib.md5(basen.encode()).hexdigest()
basen = basen[:trim_length - len(h)] + h
filename = sanitize_filename(f'{basen}.dump', restricted=True)
# Working around MAX_PATH limitation on Windows (see
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx)
if compat_os_name == 'nt':
absfilepath = os.path.abspath(filename)
if len(absfilepath) > 259:
filename = fR'\\?\{absfilepath}'
return filename

def __decode_webpage(self, webpage_bytes, encoding, headers):
if not encoding:
encoding = self._guess_encoding_from_content(headers.get('Content-Type', ''), webpage_bytes)
@@ -1062,7 +1049,9 @@ class InfoExtractor:
if self.get_param('write_pages'):
if isinstance(url_or_request, Request):
data = self._create_request(url_or_request, data).data
filename = self._request_dump_filename(urlh.url, video_id, data)
filename = _request_dump_filename(
urlh.url, video_id, data,
trim_length=self.get_param('trim_file_name'))
self.to_screen(f'Saving request to {filename}')
with open(filename, 'wb') as outf:
outf.write(webpage_bytes)
@@ -1123,7 +1112,9 @@ class InfoExtractor:
impersonate=None, require_impersonation=False):
if self.get_param('load_pages'):
url_or_request = self._create_request(url_or_request, data, headers, query)
filename = self._request_dump_filename(url_or_request.url, video_id, url_or_request.data)
filename = _request_dump_filename(
url_or_request.url, video_id, url_or_request.data,
trim_length=self.get_param('trim_file_name'))
self.to_screen(f'Loading request from {filename}')
try:
with open(filename, 'rb') as dumpf:
@@ -1854,12 +1845,26 @@ class InfoExtractor:

@staticmethod
def _remove_duplicate_formats(formats):
format_urls = set()
seen_urls = set()
seen_fragment_urls = set()
unique_formats = []
for f in formats:
if f['url'] not in format_urls:
format_urls.add(f['url'])
fragments = f.get('fragments')
if callable(fragments):
unique_formats.append(f)

elif fragments:
fragment_urls = frozenset(
fragment.get('url') or urljoin(f['fragment_base_url'], fragment['path'])
for fragment in fragments)
if fragment_urls not in seen_fragment_urls:
seen_fragment_urls.add(fragment_urls)
unique_formats.append(f)

elif f['url'] not in seen_urls:
seen_urls.add(f['url'])
unique_formats.append(f)

formats[:] = unique_formats
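
The rewrite keys fragmented formats on a `frozenset` of their fragment URLs: hashable, so it fits in a `seen` set, and order-insensitive, so reshuffled fragment lists still register as duplicates. In miniature (URLs hypothetical):

    a = frozenset(['https://cdn.example/p1.ts', 'https://cdn.example/p2.ts'])
    b = frozenset(['https://cdn.example/p2.ts', 'https://cdn.example/p1.ts'])
    assert a == b and hash(a) == hash(b)
    seen = {a}
    assert b in seen  # the reordered duplicate is detected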

def _is_valid_url(self, url, video_id, item='video', headers={}):
@@ -3767,7 +3772,7 @@ class InfoExtractor:
""" Merge subtitle dictionaries, language by language. """
if target is None:
target = {}
for d in dicts:
for d in filter(None, dicts):
for lang, subs in d.items():
target[lang] = cls._merge_subtitle_items(target.get(lang, []), subs)
return target
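
`filter(None, dicts)` skips falsy arguments, so callers can now pass optional subtitle dicts (possibly `None`) without guarding. Simplified:

    dicts = [{'en': ['a.vtt']}, None, {'fr': ['b.vtt']}]
    merged = {}
    for d in filter(None, dicts):  # the None entry is skipped, not iterated
        merged.update(d)           # simplified; the real method merges per language
    assert merged == {'en': ['a.vtt'], 'fr': ['b.vtt']}
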
@@ -3789,7 +3794,7 @@ class InfoExtractor:
def mark_watched(self, *args, **kwargs):
if not self.get_param('mark_watched', False):
return
if self.supports_login() and self._get_login_info()[0] is not None or self._cookies_passed:
if (self.supports_login() and self._get_login_info()[0] is not None) or self._cookies_passed:
self._mark_watched(*args, **kwargs)
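
The added parentheses are purely for readability: `and` already binds tighter than `or` in Python, so both conditions evaluate identically. Quick check:

    supports_login, username, cookies_passed = True, None, True
    old = supports_login and username is not None or cookies_passed
    new = (supports_login and username is not None) or cookies_passed
    assert old == new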

def _mark_watched(self, *args, **kwargs):

@@ -1,692 +0,0 @@
import base64
import uuid

from .common import InfoExtractor
from ..networking import Request
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
float_or_none,
format_field,
int_or_none,
jwt_decode_hs256,
parse_age_limit,
parse_count,
parse_iso8601,
qualities,
time_seconds,
traverse_obj,
url_or_none,
urlencode_postdata,
)


class CrunchyrollBaseIE(InfoExtractor):
_BASE_URL = 'https://www.crunchyroll.com'
_API_BASE = 'https://api.crunchyroll.com'
_NETRC_MACHINE = 'crunchyroll'
_SWITCH_USER_AGENT = 'Crunchyroll/1.8.0 Nintendo Switch/12.3.12.0 UE4/4.27'
_REFRESH_TOKEN = None
_AUTH_HEADERS = None
_AUTH_EXPIRY = None
_API_ENDPOINT = None
_BASIC_AUTH = 'Basic ' + base64.b64encode(':'.join((
't-kdgp2h8c3jub8fn0fq',
'yfLDfMfrYvKXh4JXS1LEI2cCqu1v5Wan',
)).encode()).decode()
_IS_PREMIUM = None
_LOCALE_LOOKUP = {
'ar': 'ar-SA',
'de': 'de-DE',
'': 'en-US',
'es': 'es-419',
'es-es': 'es-ES',
'fr': 'fr-FR',
'it': 'it-IT',
'pt-br': 'pt-BR',
'pt-pt': 'pt-PT',
'ru': 'ru-RU',
'hi': 'hi-IN',
}

def _set_auth_info(self, response):
CrunchyrollBaseIE._IS_PREMIUM = 'cr_premium' in traverse_obj(response, ('access_token', {jwt_decode_hs256}, 'benefits', ...))
CrunchyrollBaseIE._AUTH_HEADERS = {'Authorization': response['token_type'] + ' ' + response['access_token']}
CrunchyrollBaseIE._AUTH_EXPIRY = time_seconds(seconds=traverse_obj(response, ('expires_in', {float_or_none}), default=300) - 10)
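
Note the expiry is stored as an absolute deadline ten seconds short of the token's true lifetime (`time_seconds(seconds=x)` is `time.time() + x`), so `_update_auth` below refreshes before any request can race an expiring token. The arithmetic in isolation:

    import time

    def auth_deadline(expires_in, margin=10, default=300):
        # Absolute epoch time at which the cached token should be refreshed
        return time.time() + (expires_in if expires_in is not None else default) - margin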
|
||||
|
||||
def _request_token(self, headers, data, note='Requesting token', errnote='Failed to request token'):
|
||||
try:
|
||||
return self._download_json(
|
||||
f'{self._BASE_URL}/auth/v1/token', None, note=note, errnote=errnote,
|
||||
headers=headers, data=urlencode_postdata(data), impersonate=True)
|
||||
except ExtractorError as error:
|
||||
if not isinstance(error.cause, HTTPError) or error.cause.status != 403:
|
||||
raise
|
||||
if target := error.cause.response.extensions.get('impersonate'):
|
||||
raise ExtractorError(f'Got HTTP Error 403 when using impersonate target "{target}"')
|
||||
raise ExtractorError(
|
||||
'Request blocked by Cloudflare. '
|
||||
'Install the required impersonation dependency if possible, '
|
||||
'or else navigate to Crunchyroll in your browser, '
|
||||
'then pass the fresh cookies (with --cookies-from-browser or --cookies) '
|
||||
'and your browser\'s User-Agent (with --user-agent)', expected=True)
|
||||
|
||||
def _perform_login(self, username, password):
|
||||
if not CrunchyrollBaseIE._REFRESH_TOKEN:
|
||||
CrunchyrollBaseIE._REFRESH_TOKEN = self.cache.load(self._NETRC_MACHINE, username)
|
||||
if CrunchyrollBaseIE._REFRESH_TOKEN:
|
||||
return
|
||||
|
||||
try:
|
||||
login_response = self._request_token(
|
||||
headers={'Authorization': self._BASIC_AUTH}, data={
|
||||
'username': username,
|
||||
'password': password,
|
||||
'grant_type': 'password',
|
||||
'scope': 'offline_access',
|
||||
}, note='Logging in', errnote='Failed to log in')
|
||||
except ExtractorError as error:
|
||||
if isinstance(error.cause, HTTPError) and error.cause.status == 401:
|
||||
raise ExtractorError('Invalid username and/or password', expected=True)
|
||||
raise
|
||||
|
||||
CrunchyrollBaseIE._REFRESH_TOKEN = login_response['refresh_token']
|
||||
self.cache.store(self._NETRC_MACHINE, username, CrunchyrollBaseIE._REFRESH_TOKEN)
|
||||
self._set_auth_info(login_response)
|
||||
|
||||
def _update_auth(self):
|
||||
if CrunchyrollBaseIE._AUTH_HEADERS and CrunchyrollBaseIE._AUTH_EXPIRY > time_seconds():
|
||||
return
|
||||
|
||||
auth_headers = {'Authorization': self._BASIC_AUTH}
|
||||
if CrunchyrollBaseIE._REFRESH_TOKEN:
|
||||
data = {
|
||||
'refresh_token': CrunchyrollBaseIE._REFRESH_TOKEN,
|
||||
'grant_type': 'refresh_token',
|
||||
'scope': 'offline_access',
|
||||
}
|
||||
else:
|
||||
data = {'grant_type': 'client_id'}
|
||||
auth_headers['ETP-Anonymous-ID'] = uuid.uuid4()
|
||||
try:
|
||||
auth_response = self._request_token(auth_headers, data)
|
||||
except ExtractorError as error:
|
||||
username, password = self._get_login_info()
|
||||
if not username or not isinstance(error.cause, HTTPError) or error.cause.status != 400:
|
||||
raise
|
||||
self.to_screen('Refresh token has expired. Re-logging in')
|
||||
CrunchyrollBaseIE._REFRESH_TOKEN = None
|
||||
self.cache.store(self._NETRC_MACHINE, username, None)
|
||||
self._perform_login(username, password)
|
||||
return
|
||||
|
||||
self._set_auth_info(auth_response)
|
||||
|
||||
def _locale_from_language(self, language):
|
||||
config_locale = self._configuration_arg('metadata', ie_key=CrunchyrollBetaIE, casesense=True)
|
||||
return config_locale[0] if config_locale else self._LOCALE_LOOKUP.get(language)
|
||||
|
||||
def _call_base_api(self, endpoint, internal_id, lang, note=None, query={}):
|
||||
self._update_auth()
|
||||
|
||||
if not endpoint.startswith('/'):
|
||||
endpoint = f'/{endpoint}'
|
||||
|
||||
query = query.copy()
|
||||
locale = self._locale_from_language(lang)
|
||||
if locale:
|
||||
query['locale'] = locale
|
||||
|
||||
return self._download_json(
|
||||
f'{self._BASE_URL}{endpoint}', internal_id, note or f'Calling API: {endpoint}',
|
||||
headers=CrunchyrollBaseIE._AUTH_HEADERS, query=query)
|
||||
|
||||
def _call_api(self, path, internal_id, lang, note='api', query={}):
|
||||
if not path.startswith(f'/content/v2/{self._API_ENDPOINT}/'):
|
||||
path = f'/content/v2/{self._API_ENDPOINT}/{path}'
|
||||
|
||||
try:
|
||||
result = self._call_base_api(
|
||||
path, internal_id, lang, f'Downloading {note} JSON ({self._API_ENDPOINT})', query=query)
|
||||
except ExtractorError as error:
|
||||
if isinstance(error.cause, HTTPError) and error.cause.status == 404:
|
||||
return None
|
||||
raise
|
||||
|
||||
if not result:
|
||||
raise ExtractorError(f'Unexpected response when downloading {note} JSON')
|
||||
return result
|
||||
|
||||
def _extract_chapters(self, internal_id):
|
||||
# if no skip events are available, a 403 xml error is returned
|
||||
skip_events = self._download_json(
|
||||
f'https://static.crunchyroll.com/skip-events/production/{internal_id}.json',
|
||||
internal_id, note='Downloading chapter info', fatal=False, errnote=False)
|
||||
if not skip_events:
|
||||
return None
|
||||
|
||||
chapters = []
|
||||
for event in ('recap', 'intro', 'credits', 'preview'):
|
||||
start = traverse_obj(skip_events, (event, 'start', {float_or_none}))
|
||||
end = traverse_obj(skip_events, (event, 'end', {float_or_none}))
|
||||
# some chapters have no start and/or ending time, they will just be ignored
|
||||
if start is None or end is None:
|
||||
continue
|
||||
chapters.append({'title': event.capitalize(), 'start_time': start, 'end_time': end})
|
||||
|
||||
return chapters
|
||||
|
||||
def _extract_stream(self, identifier, display_id=None):
|
||||
if not display_id:
|
||||
display_id = identifier
|
||||
|
||||
self._update_auth()
|
||||
headers = {**CrunchyrollBaseIE._AUTH_HEADERS, 'User-Agent': self._SWITCH_USER_AGENT}
|
||||
try:
|
||||
stream_response = self._download_json(
|
||||
f'https://cr-play-service.prd.crunchyrollsvc.com/v1/{identifier}/console/switch/play',
|
||||
display_id, note='Downloading stream info', errnote='Failed to download stream info', headers=headers)
|
||||
except ExtractorError as error:
|
||||
if self.get_param('ignore_no_formats_error'):
|
||||
self.report_warning(error.orig_msg)
|
||||
return [], {}
|
||||
elif isinstance(error.cause, HTTPError) and error.cause.status == 420:
|
||||
raise ExtractorError(
|
||||
'You have reached the rate-limit for active streams; try again later', expected=True)
|
||||
raise
|
||||
|
||||
available_formats = {'': ('', '', stream_response['url'])}
|
||||
for hardsub_lang, stream in traverse_obj(stream_response, ('hardSubs', {dict.items}, lambda _, v: v[1]['url'])):
|
||||
available_formats[hardsub_lang] = (f'hardsub-{hardsub_lang}', hardsub_lang, stream['url'])
|
||||
|
||||
requested_hardsubs = [('' if val == 'none' else val) for val in (self._configuration_arg('hardsub') or ['none'])]
|
||||
hardsub_langs = [lang for lang in available_formats if lang]
|
||||
if hardsub_langs and 'all' not in requested_hardsubs:
|
||||
full_format_langs = set(requested_hardsubs)
|
||||
self.to_screen(f'Available hardsub languages: {", ".join(hardsub_langs)}')
|
||||
self.to_screen(
|
||||
'To extract formats of a hardsub language, use '
|
||||
'"--extractor-args crunchyrollbeta:hardsub=<language_code or all>". '
|
||||
'See https://github.com/yt-dlp/yt-dlp#crunchyrollbeta-crunchyroll for more info',
|
||||
only_once=True)
|
||||
else:
|
||||
full_format_langs = set(map(str.lower, available_formats))
|
||||
|
||||
audio_locale = traverse_obj(stream_response, ('audioLocale', {str}))
|
||||
hardsub_preference = qualities(requested_hardsubs[::-1])
|
||||
formats, subtitles = [], {}
|
||||
for format_id, hardsub_lang, stream_url in available_formats.values():
|
||||
if hardsub_lang.lower() in full_format_langs:
|
||||
adaptive_formats, dash_subs = self._extract_mpd_formats_and_subtitles(
|
||||
stream_url, display_id, mpd_id=format_id, headers=CrunchyrollBaseIE._AUTH_HEADERS,
|
||||
fatal=False, note=f'Downloading {f"{format_id} " if hardsub_lang else ""}MPD manifest')
|
||||
self._merge_subtitles(dash_subs, target=subtitles)
|
||||
else:
|
||||
continue # XXX: Update this if meta mpd formats work; will be tricky with token invalidation
|
||||
for f in adaptive_formats:
|
||||
if f.get('acodec') != 'none':
|
||||
f['language'] = audio_locale
|
||||
f['quality'] = hardsub_preference(hardsub_lang.lower())
|
||||
formats.extend(adaptive_formats)
|
||||
|
||||
for locale, subtitle in traverse_obj(stream_response, (('subtitles', 'captions'), {dict.items}, ...)):
|
||||
subtitles.setdefault(locale, []).append(traverse_obj(subtitle, {'url': 'url', 'ext': 'format'}))
|
||||
|
||||
# Invalidate stream token to avoid rate-limit
|
||||
error_msg = 'Unable to invalidate stream token; you may experience rate-limiting'
|
||||
if stream_token := stream_response.get('token'):
|
||||
self._request_webpage(Request(
|
||||
f'https://cr-play-service.prd.crunchyrollsvc.com/v1/token/{identifier}/{stream_token}/inactive',
|
||||
headers=headers, method='PATCH'), display_id, 'Invalidating stream token', error_msg, fatal=False)
|
||||
else:
|
||||
self.report_warning(error_msg)
|
||||
|
||||
return formats, subtitles


class CrunchyrollCmsBaseIE(CrunchyrollBaseIE):
    _API_ENDPOINT = 'cms'
    _CMS_EXPIRY = None

    def _call_cms_api_signed(self, path, internal_id, lang, note='api'):
        if not CrunchyrollCmsBaseIE._CMS_EXPIRY or CrunchyrollCmsBaseIE._CMS_EXPIRY <= time_seconds():
            response = self._call_base_api('index/v2', None, lang, 'Retrieving signed policy')['cms_web']
            CrunchyrollCmsBaseIE._CMS_QUERY = {
                'Policy': response['policy'],
                'Signature': response['signature'],
                'Key-Pair-Id': response['key_pair_id'],
            }
            CrunchyrollCmsBaseIE._CMS_BUCKET = response['bucket']
            CrunchyrollCmsBaseIE._CMS_EXPIRY = parse_iso8601(response['expires']) - 10

        if not path.startswith('/cms/v2'):
            path = f'/cms/v2{CrunchyrollCmsBaseIE._CMS_BUCKET}/{path}'

        return self._call_base_api(
            path, internal_id, lang, f'Downloading {note} JSON (signed cms)', query=CrunchyrollCmsBaseIE._CMS_QUERY)
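The Policy/Signature/Key-Pair-Id triplet follows the usual CloudFront signed-URL convention; `_call_cms_api_signed` caches it class-wide and refreshes it 10 seconds before the reported expiry. A standalone sketch of that caching pattern (all names here are illustrative, not part of yt-dlp):

import time

class SignedPolicyCache:
    # Illustrative stand-in for the class-level _CMS_* caching above
    _expiry = 0
    _query = None

    def signed_query(self, fetch_policy):
        # fetch_policy() -> (query_dict, expires_epoch)
        if not self._query or self._expiry <= time.time():
            self._query, expires = fetch_policy()
            self._expiry = expires - 10  # refresh early, as the extractor does
        return self._query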


class CrunchyrollBetaIE(CrunchyrollCmsBaseIE):
    IE_NAME = 'crunchyroll'
    _VALID_URL = r'''(?x)
        https?://(?:beta\.|www\.)?crunchyroll\.com/
        (?:(?P<lang>\w{2}(?:-\w{2})?)/)?
        watch/(?!concert|musicvideo)(?P<id>\w+)'''
    _TESTS = [{
        # Premium only
        'url': 'https://www.crunchyroll.com/watch/GY2P1Q98Y/to-the-future',
        'info_dict': {
            'id': 'GY2P1Q98Y',
            'ext': 'mp4',
            'duration': 1380.241,
            'timestamp': 1459632600,
            'description': 'md5:a022fbec4fbb023d43631032c91ed64b',
            'title': 'World Trigger Episode 73 – To the Future',
            'upload_date': '20160402',
            'series': 'World Trigger',
            'series_id': 'GR757DMKY',
            'season': 'World Trigger',
            'season_id': 'GR9P39NJ6',
            'season_number': 1,
            'episode': 'To the Future',
            'episode_number': 73,
            'thumbnail': r're:^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
            'chapters': 'count:2',
            'age_limit': 14,
            'like_count': int,
            'dislike_count': int,
        },
        'params': {
            'skip_download': 'm3u8',
            'extractor_args': {'crunchyrollbeta': {'hardsub': ['de-DE']}},
            'format': 'bv[format_id~=hardsub]',
        },
    }, {
        # Premium only
        'url': 'https://www.crunchyroll.com/watch/GYE5WKQGR',
        'info_dict': {
            'id': 'GYE5WKQGR',
            'ext': 'mp4',
            'duration': 366.459,
            'timestamp': 1476788400,
            'description': 'md5:74b67283ffddd75f6e224ca7dc031e76',
            'title': 'SHELTER – Porter Robinson presents Shelter the Animation',
            'upload_date': '20161018',
            'series': 'SHELTER',
            'series_id': 'GYGG09WWY',
            'season': 'SHELTER',
            'season_id': 'GR09MGK4R',
            'season_number': 1,
            'episode': 'Porter Robinson presents Shelter the Animation',
            'episode_number': 0,
            'thumbnail': r're:^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
            'age_limit': 14,
            'like_count': int,
            'dislike_count': int,
        },
        'params': {'skip_download': True},
    }, {
        'url': 'https://www.crunchyroll.com/watch/GJWU2VKK3/cherry-blossom-meeting-and-a-coming-blizzard',
        'info_dict': {
            'id': 'GJWU2VKK3',
            'ext': 'mp4',
            'duration': 1420.054,
            'description': 'md5:2d1c67c0ec6ae514d9c30b0b99a625cd',
            'title': 'The Ice Guy and His Cool Female Colleague Episode 1 – Cherry Blossom Meeting and a Coming Blizzard',
            'series': 'The Ice Guy and His Cool Female Colleague',
            'series_id': 'GW4HM75NP',
            'season': 'The Ice Guy and His Cool Female Colleague',
            'season_id': 'GY9PC21VE',
            'season_number': 1,
            'episode': 'Cherry Blossom Meeting and a Coming Blizzard',
            'episode_number': 1,
            'chapters': 'count:2',
            'thumbnail': r're:^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
            'timestamp': 1672839000,
            'upload_date': '20230104',
            'age_limit': 14,
            'like_count': int,
            'dislike_count': int,
        },
        'params': {'skip_download': 'm3u8'},
    }, {
        'url': 'https://www.crunchyroll.com/watch/GM8F313NQ',
        'info_dict': {
            'id': 'GM8F313NQ',
            'ext': 'mp4',
            'title': 'Garakowa -Restore the World-',
            'description': 'md5:8d2f8b6b9dd77d87810882e7d2ee5608',
            'duration': 3996.104,
            'age_limit': 13,
            'thumbnail': r're:^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
        },
        'params': {'skip_download': 'm3u8'},
        'skip': 'no longer exists',
    }, {
        'url': 'https://www.crunchyroll.com/watch/G62PEZ2E6',
        'info_dict': {
            'id': 'G62PEZ2E6',
            'description': 'md5:8d2f8b6b9dd77d87810882e7d2ee5608',
            'age_limit': 13,
            'duration': 65.138,
            'title': 'Garakowa -Restore the World-',
        },
        'playlist_mincount': 5,
    }, {
        'url': 'https://www.crunchyroll.com/de/watch/GY2P1Q98Y',
        'only_matching': True,
    }, {
        'url': 'https://beta.crunchyroll.com/pt-br/watch/G8WUN8VKP/the-ruler-of-conspiracy',
        'only_matching': True,
    }]
    # We want to support lazy playlist filtering and movie listings cannot be inside a playlist
    _RETURN_TYPE = 'video'

    def _real_extract(self, url):
        lang, internal_id = self._match_valid_url(url).group('lang', 'id')

        # We need to use unsigned API call to allow ratings query string
        response = traverse_obj(self._call_api(
            f'objects/{internal_id}', internal_id, lang, 'object info', {'ratings': 'true'}), ('data', 0, {dict}))
        if not response:
            raise ExtractorError(f'No video with id {internal_id} could be found (possibly region locked?)', expected=True)

        object_type = response.get('type')
        if object_type == 'episode':
            result = self._transform_episode_response(response)

        elif object_type == 'movie':
            result = self._transform_movie_response(response)

        elif object_type == 'movie_listing':
            first_movie_id = traverse_obj(response, ('movie_listing_metadata', 'first_movie_id'))
            if not self._yes_playlist(internal_id, first_movie_id):
                return self.url_result(f'{self._BASE_URL}/{lang}watch/{first_movie_id}', CrunchyrollBetaIE, first_movie_id)

            def entries():
                movies = self._call_api(f'movie_listings/{internal_id}/movies', internal_id, lang, 'movie list')
                for movie_response in traverse_obj(movies, ('data', ...)):
                    yield self.url_result(
                        f'{self._BASE_URL}/{lang}watch/{movie_response["id"]}',
                        CrunchyrollBetaIE, **self._transform_movie_response(movie_response))

            return self.playlist_result(entries(), **self._transform_movie_response(response))

        else:
            raise ExtractorError(f'Unknown object type {object_type}')

        if not self._IS_PREMIUM and traverse_obj(response, (f'{object_type}_metadata', 'is_premium_only')):
            message = f'This {object_type} is for premium members only'
            if CrunchyrollBaseIE._REFRESH_TOKEN:
                self.raise_no_formats(message, expected=True, video_id=internal_id)
            else:
                self.raise_login_required(message, method='password', metadata_available=True)
        else:
            result['formats'], result['subtitles'] = self._extract_stream(internal_id)

        result['chapters'] = self._extract_chapters(internal_id)

        def calculate_count(item):
            return parse_count(''.join((item['displayed'], item.get('unit') or '')))

        result.update(traverse_obj(response, ('rating', {
            'like_count': ('up', {calculate_count}),
            'dislike_count': ('down', {calculate_count}),
        })))

        return result
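For context, `calculate_count` joins the API's `displayed` value with its optional `unit` suffix before handing the string to `parse_count`. A small example with a made-up rating payload:

from yt_dlp.utils import parse_count

item = {'displayed': '11.2', 'unit': 'K'}  # hypothetical API rating entry
assert parse_count(''.join((item['displayed'], item.get('unit') or ''))) == 11200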

    @staticmethod
    def _transform_episode_response(data):
        metadata = traverse_obj(data, (('episode_metadata', None), {dict}), get_all=False) or {}
        return {
            'id': data['id'],
            'title': ' \u2013 '.join((
                ('{}{}'.format(
                    format_field(metadata, 'season_title'),
                    format_field(metadata, 'episode', ' Episode %s'))),
                format_field(data, 'title'))),
            **traverse_obj(data, {
                'episode': ('title', {str}),
                'description': ('description', {str}, {lambda x: x.replace(r'\r\n', '\n')}),
                'thumbnails': ('images', 'thumbnail', ..., ..., {
                    'url': ('source', {url_or_none}),
                    'width': ('width', {int_or_none}),
                    'height': ('height', {int_or_none}),
                }),
            }),
            **traverse_obj(metadata, {
                'duration': ('duration_ms', {float_or_none(scale=1000)}),
                'timestamp': ('upload_date', {parse_iso8601}),
                'series': ('series_title', {str}),
                'series_id': ('series_id', {str}),
                'season': ('season_title', {str}),
                'season_id': ('season_id', {str}),
                'season_number': ('season_number', ({int}, {float_or_none})),
                'episode_number': ('sequence_number', ({int}, {float_or_none})),
                'age_limit': ('maturity_ratings', -1, {parse_age_limit}),
                'language': ('audio_locale', {str}),
            }, get_all=False),
        }

    @staticmethod
    def _transform_movie_response(data):
        metadata = traverse_obj(data, (('movie_metadata', 'movie_listing_metadata', None), {dict}), get_all=False) or {}
        return {
            'id': data['id'],
            **traverse_obj(data, {
                'title': ('title', {str}),
                'description': ('description', {str}, {lambda x: x.replace(r'\r\n', '\n')}),
                'thumbnails': ('images', 'thumbnail', ..., ..., {
                    'url': ('source', {url_or_none}),
                    'width': ('width', {int_or_none}),
                    'height': ('height', {int_or_none}),
                }),
            }),
            **traverse_obj(metadata, {
                'duration': ('duration_ms', {float_or_none(scale=1000)}),
                'age_limit': ('maturity_ratings', -1, {parse_age_limit}),
            }),
        }


class CrunchyrollBetaShowIE(CrunchyrollCmsBaseIE):
    IE_NAME = 'crunchyroll:playlist'
    _VALID_URL = r'''(?x)
        https?://(?:beta\.|www\.)?crunchyroll\.com/
        (?P<lang>(?:\w{2}(?:-\w{2})?/)?)
        series/(?P<id>\w+)'''
    _TESTS = [{
        'url': 'https://www.crunchyroll.com/series/GY19NQ2QR/Girl-Friend-BETA',
        'info_dict': {
            'id': 'GY19NQ2QR',
            'title': 'Girl Friend BETA',
            'description': 'md5:99c1b22ee30a74b536a8277ced8eb750',
            # XXX: `thumbnail` does not get set from `thumbnails` in playlist
            # 'thumbnail': r're:^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
            'age_limit': 14,
        },
        'playlist_mincount': 10,
    }, {
        'url': 'https://beta.crunchyroll.com/it/series/GY19NQ2QR',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        lang, internal_id = self._match_valid_url(url).group('lang', 'id')

        def entries():
            seasons_response = self._call_cms_api_signed(f'seasons?series_id={internal_id}', internal_id, lang, 'seasons')
            for season in traverse_obj(seasons_response, ('items', ..., {dict})):
                episodes_response = self._call_cms_api_signed(
                    f'episodes?season_id={season["id"]}', season['id'], lang, 'episode list')
                for episode_response in traverse_obj(episodes_response, ('items', ..., {dict})):
                    yield self.url_result(
                        f'{self._BASE_URL}/{lang}watch/{episode_response["id"]}',
                        CrunchyrollBetaIE, **CrunchyrollBetaIE._transform_episode_response(episode_response))

        return self.playlist_result(
            entries(), internal_id,
            **traverse_obj(self._call_api(f'series/{internal_id}', internal_id, lang, 'series'), ('data', 0, {
                'title': ('title', {str}),
                'description': ('description', {lambda x: x.replace(r'\r\n', '\n')}),
                'age_limit': ('maturity_ratings', -1, {parse_age_limit}),
                'thumbnails': ('images', ..., ..., ..., {
                    'url': ('source', {url_or_none}),
                    'width': ('width', {int_or_none}),
                    'height': ('height', {int_or_none}),
                }),
            })))


class CrunchyrollMusicIE(CrunchyrollBaseIE):
    IE_NAME = 'crunchyroll:music'
    _VALID_URL = r'''(?x)
        https?://(?:www\.)?crunchyroll\.com/
        (?P<lang>(?:\w{2}(?:-\w{2})?/)?)
        watch/(?P<type>concert|musicvideo)/(?P<id>\w+)'''
    _TESTS = [{
        'url': 'https://www.crunchyroll.com/de/watch/musicvideo/MV5B02C79',
        'info_dict': {
            'ext': 'mp4',
            'id': 'MV5B02C79',
            'display_id': 'egaono-hana',
            'title': 'Egaono Hana',
            'track': 'Egaono Hana',
            'artists': ['Goose house'],
            'thumbnail': r're:(?i)^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
            'genres': ['J-Pop'],
        },
        'params': {'skip_download': 'm3u8'},
    }, {
        'url': 'https://www.crunchyroll.com/watch/musicvideo/MV88BB7F2C',
        'info_dict': {
            'ext': 'mp4',
            'id': 'MV88BB7F2C',
            'display_id': 'crossing-field',
            'title': 'Crossing Field',
            'track': 'Crossing Field',
            'artists': ['LiSA'],
            'thumbnail': r're:(?i)^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
            'genres': ['Anime'],
        },
        'params': {'skip_download': 'm3u8'},
        'skip': 'no longer exists',
    }, {
        'url': 'https://www.crunchyroll.com/watch/concert/MC2E2AC135',
        'info_dict': {
            'ext': 'mp4',
            'id': 'MC2E2AC135',
            'display_id': 'live-is-smile-always-364joker-at-yokohama-arena',
            'title': 'LiVE is Smile Always-364+JOKER- at YOKOHAMA ARENA',
            'track': 'LiVE is Smile Always-364+JOKER- at YOKOHAMA ARENA',
            'artists': ['LiSA'],
            'thumbnail': r're:(?i)^https://www.crunchyroll.com/imgsrv/.*\.jpeg?$',
            'description': 'md5:747444e7e6300907b7a43f0a0503072e',
            'genres': ['J-Pop'],
        },
        'params': {'skip_download': 'm3u8'},
    }, {
        'url': 'https://www.crunchyroll.com/de/watch/musicvideo/MV5B02C79/egaono-hana',
        'only_matching': True,
    }, {
        'url': 'https://www.crunchyroll.com/watch/concert/MC2E2AC135/live-is-smile-always-364joker-at-yokohama-arena',
        'only_matching': True,
    }, {
        'url': 'https://www.crunchyroll.com/watch/musicvideo/MV88BB7F2C/crossing-field',
        'only_matching': True,
    }]
    _API_ENDPOINT = 'music'

    def _real_extract(self, url):
        lang, internal_id, object_type = self._match_valid_url(url).group('lang', 'id', 'type')
        path, name = {
            'concert': ('concerts', 'concert info'),
            'musicvideo': ('music_videos', 'music video info'),
        }[object_type]
        response = traverse_obj(self._call_api(f'{path}/{internal_id}', internal_id, lang, name), ('data', 0, {dict}))
        if not response:
            raise ExtractorError(f'No video with id {internal_id} could be found (possibly region locked?)', expected=True)

        result = self._transform_music_response(response)

        if not self._IS_PREMIUM and response.get('isPremiumOnly'):
            message = f'This {response.get("type") or "media"} is for premium members only'
            if CrunchyrollBaseIE._REFRESH_TOKEN:
                self.raise_no_formats(message, expected=True, video_id=internal_id)
            else:
                self.raise_login_required(message, method='password', metadata_available=True)
        else:
            result['formats'], _ = self._extract_stream(f'music/{internal_id}', internal_id)

        return result

    @staticmethod
    def _transform_music_response(data):
        return {
            'id': data['id'],
            **traverse_obj(data, {
                'display_id': 'slug',
                'title': 'title',
                'track': 'title',
                'artists': ('artist', 'name', all),
                'description': ('description', {str}, {lambda x: x.replace(r'\r\n', '\n') or None}),
                'thumbnails': ('images', ..., ..., {
                    'url': ('source', {url_or_none}),
                    'width': ('width', {int_or_none}),
                    'height': ('height', {int_or_none}),
                }),
                'genres': ('genres', ..., 'displayValue'),
                'age_limit': ('maturity_ratings', -1, {parse_age_limit}),
            }),
        }


class CrunchyrollArtistIE(CrunchyrollBaseIE):
    IE_NAME = 'crunchyroll:artist'
    _VALID_URL = r'''(?x)
        https?://(?:www\.)?crunchyroll\.com/
        (?P<lang>(?:\w{2}(?:-\w{2})?/)?)
        artist/(?P<id>\w{10})'''
    _TESTS = [{
        'url': 'https://www.crunchyroll.com/artist/MA179CB50D',
        'info_dict': {
            'id': 'MA179CB50D',
            'title': 'LiSA',
            'genres': ['Anime', 'J-Pop', 'Rock'],
            'description': 'md5:16d87de61a55c3f7d6c454b73285938e',
        },
        'playlist_mincount': 83,
    }, {
        'url': 'https://www.crunchyroll.com/artist/MA179CB50D/lisa',
        'only_matching': True,
    }]
    _API_ENDPOINT = 'music'

    def _real_extract(self, url):
        lang, internal_id = self._match_valid_url(url).group('lang', 'id')
        response = traverse_obj(self._call_api(
            f'artists/{internal_id}', internal_id, lang, 'artist info'), ('data', 0))

        def entries():
            for attribute, path in [('concerts', 'concert'), ('videos', 'musicvideo')]:
                for internal_id in traverse_obj(response, (attribute, ...)):
                    yield self.url_result(f'{self._BASE_URL}/watch/{path}/{internal_id}', CrunchyrollMusicIE, internal_id)

        return self.playlist_result(entries(), **self._transform_artist_response(response))

    @staticmethod
    def _transform_artist_response(data):
        return {
            'id': data['id'],
            **traverse_obj(data, {
                'title': 'name',
                'description': ('description', {str}, {lambda x: x.replace(r'\r\n', '\n')}),
                'thumbnails': ('images', ..., ..., {
                    'url': ('source', {url_or_none}),
                    'width': ('width', {int_or_none}),
                    'height': ('height', {int_or_none}),
                }),
                'genres': ('genres', ..., 'displayValue'),
            }),
        }

yt_dlp/extractor/ctvnews.py

@@ -1,14 +1,27 @@
import json
import re
import urllib.parse

from .common import InfoExtractor
from ..utils import orderedSet
from .ninecninemedia import NineCNineMediaIE
from ..utils import extract_attributes, orderedSet
from ..utils.traversal import find_element, traverse_obj


class CTVNewsIE(InfoExtractor):
    _VALID_URL = r'https?://(?:.+?\.)?ctvnews\.ca/(?:video\?(?:clip|playlist|bin)Id=|.*?)(?P<id>[0-9.]+)'
    _BASE_REGEX = r'https?://(?:[^.]+\.)?ctvnews\.ca/'
    _VIDEO_ID_RE = r'(?P<id>\d{5,})'
    _PLAYLIST_ID_RE = r'(?P<id>\d\.\d{5,})'
    _VALID_URL = [
        rf'{_BASE_REGEX}video/c{_VIDEO_ID_RE}',
        rf'{_BASE_REGEX}video(?:-gallery)?/?\?clipId={_VIDEO_ID_RE}',
        rf'{_BASE_REGEX}video/?\?(?:playlist|bin)Id={_PLAYLIST_ID_RE}',
        rf'{_BASE_REGEX}(?!video/)[^?#]*?{_PLAYLIST_ID_RE}/?(?:$|[?#])',
        rf'{_BASE_REGEX}(?!video/)[^?#]+\?binId={_PLAYLIST_ID_RE}',
    ]
    _TESTS = [{
        'url': 'http://www.ctvnews.ca/video?clipId=901995',
        'md5': '9b8624ba66351a23e0b6e1391971f9af',
        'md5': 'b608f466c7fa24b9666c6439d766ab7e',
        'info_dict': {
            'id': '901995',
            'ext': 'flv',
@@ -16,6 +29,33 @@ class CTVNewsIE(InfoExtractor):
            'description': 'md5:958dd3b4f5bbbf0ed4d045c790d89285',
            'timestamp': 1467286284,
            'upload_date': '20160630',
            'categories': [],
            'season_number': 0,
            'season': 'Season 0',
            'tags': [],
            'series': 'CTV News National | Archive | Stories 2',
            'season_id': '57981',
            'thumbnail': r're:https?://.*\.jpg$',
            'duration': 764.631,
        },
    }, {
        'url': 'https://barrie.ctvnews.ca/video/c3030933-here_s-what_s-making-news-for-nov--15?binId=1272429',
        'md5': '8b8c2b33c5c1803e3c26bc74ff8694d5',
        'info_dict': {
            'id': '3030933',
            'ext': 'flv',
            'title': 'Here’s what’s making news for Nov. 15',
            'description': 'Here are the top stories we’re working on for CTV News at 11 for Nov. 15',
            'thumbnail': 'http://images2.9c9media.com/image_asset/2021_2_22_a602e68e-1514-410e-a67a-e1f7cccbacab_png_2000x1125.jpg',
            'season_id': '58104',
            'season_number': 0,
            'tags': [],
            'season': 'Season 0',
            'categories': [],
            'series': 'CTV News Barrie',
            'upload_date': '20241116',
            'duration': 42.943,
            'timestamp': 1731722452,
        },
    }, {
        'url': 'http://www.ctvnews.ca/video?playlistId=1.2966224',
@@ -31,6 +71,72 @@ class CTVNewsIE(InfoExtractor):
            'id': '1.2876780',
        },
        'playlist_mincount': 100,
    }, {
        'url': 'https://www.ctvnews.ca/it-s-been-23-years-since-toronto-called-in-the-army-after-a-major-snowstorm-1.5736957',
        'info_dict':
        {
            'id': '1.5736957',
        },
        'playlist_mincount': 6,
    }, {
        'url': 'https://www.ctvnews.ca/business/respondents-to-bank-of-canada-questionnaire-largely-oppose-creating-a-digital-loonie-1.6665797',
        'md5': '24bc4b88cdc17d8c3fc01dfc228ab72c',
        'info_dict': {
            'id': '2695026',
            'ext': 'flv',
            'season_id': '89852',
            'series': 'From CTV News Channel',
            'description': 'md5:796a985a23cacc7e1e2fafefd94afd0a',
            'season': '2023',
            'title': 'Bank of Canada asks public about digital currency',
            'categories': [],
            'tags': [],
            'upload_date': '20230526',
            'season_number': 2023,
            'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
            'timestamp': 1685105157,
            'duration': 253.553,
        },
    }, {
        'url': 'https://stox.ctvnews.ca/video-gallery?clipId=582589',
        'md5': '135cc592df607d29dddc931f1b756ae2',
        'info_dict': {
            'id': '582589',
            'ext': 'flv',
            'categories': [],
            'timestamp': 1427906183,
            'season_number': 0,
            'duration': 125.559,
            'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
            'series': 'CTV News Stox',
            'description': 'CTV original footage of the rise and fall of the Berlin Wall.',
            'title': 'Berlin Wall',
            'season_id': '63817',
            'season': 'Season 0',
            'tags': [],
            'upload_date': '20150401',
        },
    }, {
        'url': 'https://ottawa.ctvnews.ca/features/regional-contact/regional-contact-archive?binId=1.1164587#3023759',
        'md5': 'a14c0603557decc6531260791c23cc5e',
        'info_dict': {
            'id': '3023759',
            'ext': 'flv',
            'season_number': 2024,
            'timestamp': 1731798000,
            'season': '2024',
            'episode': 'Episode 125',
            'description': 'CTV News Ottawa at Six',
            'duration': 2712.076,
            'episode_number': 125,
            'upload_date': '20241116',
            'title': 'CTV News Ottawa at Six for Saturday, November 16, 2024',
            'thumbnail': 'http://images2.9c9media.com/image_asset/2019_3_28_35f5afc3-10f6-4d92-b194-8b9a86f55c6a_png_1920x1080.jpg',
            'categories': [],
            'tags': [],
            'series': 'CTV News Ottawa at Six',
            'season_id': '92667',
        },
    }, {
        'url': 'http://www.ctvnews.ca/1.810401',
        'only_matching': True,
@@ -42,29 +148,35 @@ class CTVNewsIE(InfoExtractor):
        'only_matching': True,
    }]

    def _ninecninemedia_url_result(self, clip_id):
        return self.url_result(f'9c9media:ctvnews_web:{clip_id}', NineCNineMediaIE, clip_id)

    def _real_extract(self, url):
        page_id = self._match_id(url)

        def ninecninemedia_url_result(clip_id):
            return {
                '_type': 'url_transparent',
                'id': clip_id,
                'url': f'9c9media:ctvnews_web:{clip_id}',
                'ie_key': 'NineCNineMedia',
            }
        if mobj := re.fullmatch(self._VIDEO_ID_RE, urllib.parse.urlparse(url).fragment):
            page_id = mobj.group('id')

        if page_id.isdigit():
            return ninecninemedia_url_result(page_id)
        else:
            webpage = self._download_webpage(f'http://www.ctvnews.ca/{page_id}', page_id, query={
                'ot': 'example.AjaxPageLayout.ot',
                'maxItemsPerPage': 1000000,
            })
            entries = [ninecninemedia_url_result(clip_id) for clip_id in orderedSet(
                re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
            if not entries:
                webpage = self._download_webpage(url, page_id)
                if 'getAuthStates("' in webpage:
                    entries = [ninecninemedia_url_result(clip_id) for clip_id in
                               self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
            return self.playlist_result(entries, page_id)
        if re.fullmatch(self._VIDEO_ID_RE, page_id):
            return self._ninecninemedia_url_result(page_id)

        webpage = self._download_webpage(f'https://www.ctvnews.ca/{page_id}', page_id, query={
            'ot': 'example.AjaxPageLayout.ot',
            'maxItemsPerPage': 1000000,
        })
        entries = [self._ninecninemedia_url_result(clip_id)
                   for clip_id in orderedSet(re.findall(r'clip\.id\s*=\s*(\d+);', webpage))]
        if not entries:
            webpage = self._download_webpage(url, page_id)
            if 'getAuthStates("' in webpage:
                entries = [self._ninecninemedia_url_result(clip_id) for clip_id in
                           self._search_regex(r'getAuthStates\("([\d+,]+)"', webpage, 'clip ids').split(',')]
            else:
                entries = [
                    self._ninecninemedia_url_result(clip_id) for clip_id in
                    traverse_obj(webpage, (
                        {find_element(tag='jasper-player-container', html=True)},
                        {extract_attributes}, 'axis-ids', {json.loads}, ..., 'axisId', {str}))
                ]

        return self.playlist_result(entries, page_id)
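The final fallback above reads clip IDs out of the `axis-ids` attribute of the page's `<jasper-player-container>` element. A hedged illustration with a made-up attribute value of the shape the traversal expects:

import json

attrs = {'axis-ids': '[{"axisId": "3030933"}, {"axisId": "3030935"}]'}  # hypothetical
clip_ids = [entry['axisId'] for entry in json.loads(attrs['axis-ids'])]
print(clip_ids)  # ['3030933', '3030935']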

yt_dlp/extractor/cultureunplugged.py

@@ -1,7 +1,4 @@
import time

from .common import InfoExtractor
from ..networking import HEADRequest
from ..utils import int_or_none


@@ -31,9 +28,6 @@ class CultureUnpluggedIE(InfoExtractor):
        video_id = mobj.group('id')
        display_id = mobj.group('display_id') or video_id

        # request setClientTimezone.php to get PHPSESSID cookie which is need to get valid json data in the next request
        self._request_webpage(HEADRequest(
            'http://www.cultureunplugged.com/setClientTimezone.php?timeOffset=%d' % -(time.timezone / 3600)), display_id)
        movie_data = self._download_json(
            f'http://www.cultureunplugged.com/movie-data/cu-{video_id}.json', display_id)
yt_dlp/extractor/cwtv.py

@@ -1,35 +1,40 @@
import re

from .common import InfoExtractor
from ..utils import (
    ExtractorError,
    int_or_none,
    parse_age_limit,
    parse_iso8601,
    parse_qs,
    smuggle_url,
    str_or_none,
    update_url_query,
)
from ..utils.traversal import traverse_obj


class CWTVIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?cw(?:tv(?:pr)?|seed)\.com/(?:shows/)?(?:[^/]+/)+[^?]*\?.*\b(?:play|watch)=(?P<id>[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12})'
    IE_NAME = 'cwtv'
    _VALID_URL = r'https?://(?:www\.)?cw(?:tv(?:pr)?|seed)\.com/(?:shows/)?(?:[^/]+/)+[^?]*\?.*\b(?:play|watch|guid)=(?P<id>[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12})'
    _TESTS = [{
        'url': 'https://www.cwtv.com/shows/all-american-homecoming/ready-or-not/?play=d848488f-f62a-40fd-af1f-6440b1821aab',
        'url': 'https://www.cwtv.com/shows/continuum/a-stitch-in-time/?play=9149a1e1-4cb2-46d7-81b2-47d35bbd332b',
        'info_dict': {
            'id': 'd848488f-f62a-40fd-af1f-6440b1821aab',
            'id': '9149a1e1-4cb2-46d7-81b2-47d35bbd332b',
            'ext': 'mp4',
            'title': 'Ready Or Not',
            'description': 'Simone is concerned about changes taking place at Bringston; JR makes a decision about his future.',
            'thumbnail': r're:^https?://.*\.jpe?g$',
            'duration': 2547,
            'timestamp': 1720519200,
            'title': 'A Stitch in Time',
            'description': r're:(?s)City Protective Services officer Kiera Cameron is transported from 2077.+',
            'thumbnail': r're:https?://.+\.jpe?g',
            'duration': 2632,
            'timestamp': 1736928000,
            'uploader': 'CWTV',
            'chapters': 'count:6',
            'series': 'All American: Homecoming',
            'season_number': 3,
            'chapters': 'count:5',
            'series': 'Continuum',
            'season_number': 1,
            'episode_number': 1,
            'age_limit': 0,
            'upload_date': '20240709',
            'season': 'Season 3',
            'age_limit': 14,
            'upload_date': '20250115',
            'season': 'Season 1',
            'episode': 'Episode 1',
        },
        'params': {
@@ -42,7 +47,7 @@ class CWTVIE(InfoExtractor):
            'id': '6b15e985-9345-4f60-baf8-56e96be57c63',
            'ext': 'mp4',
            'title': 'Legends of Yesterday',
            'description': 'Oliver and Barry Allen take Kendra Saunders and Carter Hall to a remote location to keep them hidden from Vandal Savage while they figure out how to defeat him.',
            'description': r're:(?s)Oliver and Barry Allen take Kendra Saunders and Carter Hall to a remote.+',
            'duration': 2665,
            'series': 'Arrow',
            'season_number': 4,
@@ -71,7 +76,7 @@ class CWTVIE(InfoExtractor):
            'timestamp': 1444107300,
            'age_limit': 14,
            'uploader': 'CWTV',
            'thumbnail': r're:^https?://.*\.jpe?g$',
            'thumbnail': r're:https?://.+\.jpe?g',
            'chapters': 'count:4',
            'episode': 'Episode 20',
            'season': 'Season 11',
@@ -89,14 +94,17 @@ class CWTVIE(InfoExtractor):
    }, {
        'url': 'http://cwtv.com/shows/arrow/legends-of-yesterday/?watch=6b15e985-9345-4f60-baf8-56e96be57c63',
        'only_matching': True,
    }, {
        'url': 'http://www.cwtv.com/movies/play/?guid=0a8e8b5b-1356-41d5-9a6a-4eda1a6feb6c',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        data = self._download_json(
            f'https://images.cwtv.com/feed/mobileapp/video-meta/apiversion_12/guid_{video_id}', video_id)
        if data.get('result') != 'ok':
            raise ExtractorError(data['msg'], expected=True)
            f'https://images.cwtv.com/feed/app-2/video-meta/apiversion_22/device_android/guid_{video_id}', video_id)
        if traverse_obj(data, 'result') != 'ok':
            raise ExtractorError(traverse_obj(data, (('error_msg', 'msg'), {str}, any)), expected=True)
        video_data = data['video']
        title = video_data['title']
        mpx_url = update_url_query(
@@ -123,3 +131,50 @@ class CWTVIE(InfoExtractor):
            'ie_key': 'ThePlatform',
            'thumbnail': video_data.get('large_thumbnail'),
        }


class CWTVMovieIE(InfoExtractor):
    IE_NAME = 'cwtv:movie'
    _VALID_URL = r'https?://(?:www\.)?cwtv\.com/shows/(?P<id>[\w-]+)/?\?(?:[^#]+&)?viewContext=Movies'
    _TESTS = [{
        'url': 'https://www.cwtv.com/shows/the-crush/?viewContext=Movies+Swimlane',
        'info_dict': {
            'id': '0a8e8b5b-1356-41d5-9a6a-4eda1a6feb6c',
            'ext': 'mp4',
            'title': 'The Crush',
            'upload_date': '20241112',
            'description': 'md5:1549acd90dff4a8273acd7284458363e',
            'chapters': 'count:9',
            'timestamp': 1731398400,
            'age_limit': 16,
            'duration': 5337,
            'series': 'The Crush',
            'season': 'Season 1',
            'uploader': 'CWTV',
            'season_number': 1,
            'episode': 'Episode 1',
            'episode_number': 1,
            'thumbnail': r're:https?://.+\.jpe?g',
        },
        'params': {
            # m3u8 download
            'skip_download': True,
        },
    }]
    _UUID_RE = r'[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12}'

    def _real_extract(self, url):
        display_id = self._match_id(url)
        webpage = self._download_webpage(url, display_id)
        app_url = (
            self._html_search_meta('al:ios:url', webpage, default=None)
            or self._html_search_meta('al:android:url', webpage, default=None))
        video_id = (
            traverse_obj(parse_qs(app_url), ('video_id', 0, {lambda x: re.fullmatch(self._UUID_RE, x)}, 0))
            or self._search_regex([
                rf'CWTV\.Site\.curPlayingGUID\s*=\s*["\']({self._UUID_RE})',
                rf'CWTV\.Site\.viewInAppURL\s*=\s*["\']/shows/[\w-]+/watch-in-app/\?play=({self._UUID_RE})',
            ], webpage, 'video ID'))

        return self.url_result(
            f'https://www.cwtv.com/shows/{display_id}/{display_id}/?play={video_id}', CWTVIE, video_id)
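For reference, the app-URL fallback above pulls the GUID out of the `video_id` query parameter of the `al:ios:url`/`al:android:url` meta tag. A small sketch with a made-up app URL (the real scheme and layout may differ):

import re
import urllib.parse

_UUID_RE = r'[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12}'
app_url = 'cwtv://video?video_id=0a8e8b5b-1356-41d5-9a6a-4eda1a6feb6c'  # hypothetical
candidate = urllib.parse.parse_qs(urllib.parse.urlparse(app_url).query)['video_id'][0]
video_id = re.fullmatch(_UUID_RE, candidate)[0]  # None-check omitted for brevity
print(video_id)  # 0a8e8b5b-1356-41d5-9a6a-4eda1a6feb6c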

yt_dlp/extractor/dacast.py

@@ -1,3 +1,4 @@
import functools
import hashlib
import re
import time
@@ -51,6 +52,15 @@ class DacastVODIE(DacastBaseIE):
            'thumbnail': 'https://universe-files.dacast.com/26137208-5858-65c1-5e9a-9d6b6bd2b6c2',
        },
        'params': {'skip_download': 'm3u8'},
    }, {  # /uspaes/ in hls_url
        'url': 'https://iframe.dacast.com/vod/f9823fc6-faba-b98f-0d00-4a7b50a58c5b/348c5c84-b6af-4859-bb9d-1d01009c795b',
        'info_dict': {
            'id': '348c5c84-b6af-4859-bb9d-1d01009c795b',
            'ext': 'mp4',
            'title': 'pl1-edyta-rubas-211124.mp4',
            'uploader_id': 'f9823fc6-faba-b98f-0d00-4a7b50a58c5b',
            'thumbnail': 'https://universe-files.dacast.com/4d0bd042-a536-752d-fc34-ad2fa44bbcbb.png',
        },
    }]
    _WEBPAGE_TESTS = [{
        'url': 'https://www.dacast.com/support/knowledgebase/how-can-i-embed-a-video-on-my-website/',
@@ -74,6 +84,15 @@ class DacastVODIE(DacastBaseIE):
        'params': {'skip_download': 'm3u8'},
    }]

    @functools.cached_property
    def _usp_signing_secret(self):
        player_js = self._download_webpage(
            'https://player.dacast.com/js/player.js', None, 'Downloading player JS')
        # Rotates every so often, but hardcode a fallback in case of JS change/breakage before rotation
        return self._search_regex(
            r'\bUSP_SIGNING_SECRET\s*=\s*(["\'])(?P<secret>(?:(?!\1).)+)', player_js,
            'usp signing secret', group='secret', fatal=False) or 'odnInCGqhvtyRTtIiddxtuRtawYYICZP'

    def _real_extract(self, url):
        user_id, video_id = self._match_valid_url(url).group('user_id', 'id')
        query = {'contentId': f'{user_id}-vod-{video_id}', 'provider': 'universe'}
@@ -94,10 +113,10 @@ class DacastVODIE(DacastBaseIE):
        if 'DRM_EXT' in hls_url:
            self.report_drm(video_id)
        elif '/uspaes/' in hls_url:
            # From https://player.dacast.com/js/player.js
            # Ref: https://player.dacast.com/js/player.js
            ts = int(time.time())
            signature = hashlib.sha1(
                f'{10413792000 - ts}{ts}YfaKtquEEpDeusCKbvYszIEZnWmBcSvw').digest().hex()
                f'{10413792000 - ts}{ts}{self._usp_signing_secret}'.encode()).digest().hex()
            hls_aes['uri'] = f'https://keys.dacast.com/uspaes/{video_id}.key?s={signature}&ts={ts}'

        for retry in self.RetryManager():
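The AES key URL above is signed with a SHA-1 over a fixed far-future timestamp delta, the current time, and the scraped secret. A standalone sketch of the same computation (the function name is illustrative):

import hashlib
import time

def dacast_key_url(video_id, secret):
    # Mirrors the signing scheme in the extractor; the secret rotates and is
    # scraped from player.js, with a hardcoded fallback as seen above
    ts = int(time.time())
    signature = hashlib.sha1(f'{10413792000 - ts}{ts}{secret}'.encode()).digest().hex()
    return f'https://keys.dacast.com/uspaes/{video_id}.key?s={signature}&ts={ts}'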

yt_dlp/extractor/dailymotion.py

@@ -261,6 +261,7 @@ class DailymotionIE(DailymotionBaseInfoExtractor):
            'tags': [],
            'view_count': int,
            'like_count': int,
            'thumbnail': r're:https://\w+.dmcdn.net/v/WnEY61cmvMxt2Fi6d/x1080',
        },
    }, {
        # https://geo.dailymotion.com/player/xf7zn.html?playlist=x7wdsj
@@ -288,6 +289,25 @@ class DailymotionIE(DailymotionBaseInfoExtractor):
            'description': 'À bord du « véloto », l’alternative à la voiture pour la campagne',
            'tags': ['biclou', 'vélo', 'véloto', 'campagne', 'voiture', 'environnement', 'véhicules intermédiaires'],
        },
    }, {
        # https://geo.dailymotion.com/player/xry80.html?video=x8vu47w
        'url': 'https://www.metatube.com/en/videos/546765/This-frogs-decorates-Christmas-tree/',
        'info_dict': {
            'id': 'x8vu47w',
            'ext': 'mp4',
            'like_count': int,
            'uploader': 'Metatube',
            'thumbnail': r're:https://\w+.dmcdn.net/v/W1G_S1coGSFTfkTeR/x1080',
            'upload_date': '20240326',
            'view_count': int,
            'timestamp': 1711496732,
            'age_limit': 0,
            'uploader_id': 'x2xpy74',
            'title': 'Está lindas ranitas ponen su arbolito',
            'duration': 28,
            'description': 'Que lindura',
            'tags': [],
        },
    }]
    _GEO_BYPASS = False
    _COMMON_MEDIA_FIELDS = '''description
@@ -302,7 +322,7 @@ class DailymotionIE(DailymotionBaseInfoExtractor):
        yield from super()._extract_embed_urls(url, webpage)
        for mobj in re.finditer(
                r'(?s)DM\.player\([^,]+,\s*{.*?video[\'"]?\s*:\s*["\']?(?P<id>[0-9a-zA-Z]+).+?}\s*\);', webpage):
            yield from 'https://www.dailymotion.com/embed/video/' + mobj.group('id')
            yield 'https://www.dailymotion.com/embed/video/' + mobj.group('id')
        for mobj in re.finditer(
                r'(?s)<script [^>]*\bsrc=(["\'])(?:https?:)?//[\w-]+\.dailymotion\.com/player/(?:(?!\1).)+\1[^>]*>', webpage):
            attrs = extract_attributes(mobj.group(0))
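The one-character change from `yield from` to `yield` above is a real fix: `yield from` applied to a string iterates it character by character, so the embed-URL generator was emitting single-character "URLs" instead of one URL. A minimal demonstration:

def bad():
    yield from 'https://example.com/x'  # yields one character at a time

def good():
    yield 'https://example.com/x'  # yields the URL itself

assert list(good()) == ['https://example.com/x']
assert len(list(bad())) == len('https://example.com/x')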

yt_dlp/extractor/digitalconcerthall.py

@@ -1,7 +1,10 @@
import time

from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
    ExtractorError,
    jwt_decode_hs256,
    parse_codecs,
    try_get,
    url_or_none,
@@ -13,9 +16,6 @@ from ..utils.traversal import traverse_obj
class DigitalConcertHallIE(InfoExtractor):
    IE_DESC = 'DigitalConcertHall extractor'
    _VALID_URL = r'https?://(?:www\.)?digitalconcerthall\.com/(?P<language>[a-z]+)/(?P<type>film|concert|work)/(?P<id>[0-9]+)-?(?P<part>[0-9]+)?'
    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
    _ACCESS_TOKEN = None
    _NETRC_MACHINE = 'digitalconcerthall'
    _TESTS = [{
        'note': 'Playlist with only one video',
@@ -69,59 +69,157 @@ class DigitalConcertHallIE(InfoExtractor):
        'params': {'skip_download': 'm3u8'},
        'playlist_count': 1,
    }]
    _LOGIN_HINT = ('Use --username token --password ACCESS_TOKEN where ACCESS_TOKEN '
                   'is the "access_token_production" from your browser local storage')
    _REFRESH_HINT = 'or else use a "refresh_token" with --username refresh --password REFRESH_TOKEN'
    _OAUTH_URL = 'https://api.digitalconcerthall.com/v2/oauth2/token'
    _CLIENT_ID = 'dch.webapp'
    _CLIENT_SECRET = '2ySLN+2Fwb'
    _USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15'
    _OAUTH_HEADERS = {
        'Accept': 'application/json',
        'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
        'Origin': 'https://www.digitalconcerthall.com',
        'Referer': 'https://www.digitalconcerthall.com/',
        'User-Agent': _USER_AGENT,
    }
    _access_token = None
    _access_token_expiry = 0
    _refresh_token = None

    def _perform_login(self, username, password):
        login_token = self._download_json(
            self._OAUTH_URL,
            None, 'Obtaining token', errnote='Unable to obtain token', data=urlencode_postdata({
    @property
    def _access_token_is_expired(self):
        return self._access_token_expiry - 30 <= int(time.time())

    def _set_access_token(self, value):
        self._access_token = value
        self._access_token_expiry = traverse_obj(value, ({jwt_decode_hs256}, 'exp', {int})) or 0

    def _cache_tokens(self, /):
        self.cache.store(self._NETRC_MACHINE, 'tokens', {
            'access_token': self._access_token,
            'refresh_token': self._refresh_token,
        })

    def _fetch_new_tokens(self, invalidate=False):
        if invalidate:
            self.report_warning('Access token has been invalidated')
            self._set_access_token(None)

        if not self._access_token_is_expired:
            return

        if not self._refresh_token:
            self._set_access_token(None)
            self._cache_tokens()
            raise ExtractorError(
                'Access token has expired or been invalidated. '
                'Get a new "access_token_production" value from your browser '
                f'and try again, {self._REFRESH_HINT}', expected=True)

        # If we only have a refresh token, we need a temporary "initial token" for the refresh flow
        bearer_token = self._access_token or self._download_json(
            self._OAUTH_URL, None, 'Obtaining initial token', 'Unable to obtain initial token',
            data=urlencode_postdata({
                'affiliate': 'none',
                'grant_type': 'device',
                'device_vendor': 'unknown',
                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio
                'device_model': 'unknown' if self._configuration_arg('prefer_combined_hls') else 'Safari',
                'app_id': 'dch.webapp',
                # device_model 'Safari' gets split streams of 4K/HEVC video and lossless/FLAC audio,
                # but this is no longer effective since actual login is not possible anymore
                'device_model': 'unknown',
                'app_id': self._CLIENT_ID,
                'app_distributor': 'berlinphil',
                'app_version': '1.84.0',
                'client_secret': '2ySLN+2Fwb',
            }), headers={
                'Accept': 'application/json',
                'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
                'User-Agent': self._USER_AGENT,
            })['access_token']
                'app_version': '1.95.0',
                'client_secret': self._CLIENT_SECRET,
            }), headers=self._OAUTH_HEADERS)['access_token']

        try:
            login_response = self._download_json(
                self._OAUTH_URL,
                None, note='Logging in', errnote='Unable to login', data=urlencode_postdata({
                    'grant_type': 'password',
                    'username': username,
                    'password': password,
            response = self._download_json(
                self._OAUTH_URL, None, 'Refreshing token', 'Unable to refresh token',
                data=urlencode_postdata({
                    'grant_type': 'refresh_token',
                    'refresh_token': self._refresh_token,
                    'client_id': self._CLIENT_ID,
                    'client_secret': self._CLIENT_SECRET,
                }), headers={
                    'Accept': 'application/json',
                    'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
                    'Referer': 'https://www.digitalconcerthall.com',
                    'Authorization': f'Bearer {login_token}',
                    'User-Agent': self._USER_AGENT,
                    **self._OAUTH_HEADERS,
                    'Authorization': f'Bearer {bearer_token}',
                })
        except ExtractorError as error:
            if isinstance(error.cause, HTTPError) and error.cause.status == 401:
                raise ExtractorError('Invalid username or password', expected=True)
        except ExtractorError as e:
            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
                self._set_access_token(None)
                self._refresh_token = None
                self._cache_tokens()
                raise ExtractorError('Your tokens have been invalidated', expected=True)
            raise
        self._ACCESS_TOKEN = login_response['access_token']

        self._set_access_token(response['access_token'])
        if refresh_token := traverse_obj(response, ('refresh_token', {str})):
            self.write_debug('New refresh token granted')
            self._refresh_token = refresh_token
        self._cache_tokens()

    def _perform_login(self, username, password):
        self.report_login()

        if username == 'refresh':
            self._refresh_token = password
            self._fetch_new_tokens()

        if username == 'token':
            if not traverse_obj(password, {jwt_decode_hs256}):
                raise ExtractorError(
                    f'The access token passed to yt-dlp is not valid. {self._LOGIN_HINT}', expected=True)
            self._set_access_token(password)
            self._cache_tokens()

        if username in ('refresh', 'token'):
            if self.get_param('cachedir') is not False:
                token_type = 'access' if username == 'token' else 'refresh'
                self.to_screen(f'Your {token_type} token has been cached to disk. To use the cached '
                               'token next time, pass --username cache along with any password')
            return

        if username != 'cache':
            raise ExtractorError(
                'Login with username and password is no longer supported '
                f'for this site. {self._LOGIN_HINT}, {self._REFRESH_HINT}', expected=True)

        # Try cached access_token
        cached_tokens = self.cache.load(self._NETRC_MACHINE, 'tokens', default={})
        self._set_access_token(cached_tokens.get('access_token'))
        self._refresh_token = cached_tokens.get('refresh_token')
        if not self._access_token_is_expired:
            return

        # Try cached refresh_token
        self._fetch_new_tokens(invalidate=True)
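Per the `_LOGIN_HINT` and `_REFRESH_HINT` above, password login is gone and the extractor expects a token instead. A hedged example of supplying the browser's `access_token_production` value through the embedding API (token and URL are placeholders):

import yt_dlp

opts = {'username': 'token', 'password': '<access_token_production value>'}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(['https://www.digitalconcerthall.com/en/concert/53201'])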

    def _real_initialize(self):
        if not self._ACCESS_TOKEN:
            self.raise_login_required(method='password')
        if not self._access_token:
            self.raise_login_required(
                'All content on this site is only available for registered users. '
                f'{self._LOGIN_HINT}, {self._REFRESH_HINT}', method=None)

    def _entries(self, items, language, type_, **kwargs):
        for item in items:
            video_id = item['id']
            stream_info = self._download_json(
                self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
                    'Accept': 'application/json',
                    'Authorization': f'Bearer {self._ACCESS_TOKEN}',
                    'Accept-Language': language,
                    'User-Agent': self._USER_AGENT,
                })

            for should_retry in (True, False):
                self._fetch_new_tokens(invalidate=not should_retry)
                try:
                    stream_info = self._download_json(
                        self._proto_relative_url(item['_links']['streams']['href']), video_id, headers={
                            'Accept': 'application/json',
                            'Authorization': f'Bearer {self._access_token}',
                            'Accept-Language': language,
                            'User-Agent': self._USER_AGENT,
                        })
                    break
                except ExtractorError as error:
                    if should_retry and isinstance(error.cause, HTTPError) and error.cause.status == 401:
                        continue
                    raise

            formats = []
            for m3u8_url in traverse_obj(stream_info, ('channel', ..., 'stream', ..., 'url', {url_or_none})):
@@ -157,7 +255,6 @@ class DigitalConcertHallIE(InfoExtractor):
                'Accept': 'application/json',
                'Accept-Language': language,
                'User-Agent': self._USER_AGENT,
                'Authorization': f'Bearer {self._ACCESS_TOKEN}',
            })
        videos = [vid_info] if type_ == 'film' else traverse_obj(vid_info, ('_embedded', ..., ...))


130
yt_dlp/extractor/digiview.py
Normal file
@@ -0,0 +1,130 @@
from .common import InfoExtractor
from .youtube import YoutubeIE
from ..utils import clean_html, int_or_none, traverse_obj, url_or_none, urlencode_postdata


class DigiviewIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?ladigitale\.dev/digiview/#/v/(?P<id>[0-9a-f]+)'
    _TESTS = [{
        # normal video
        'url': 'https://ladigitale.dev/digiview/#/v/67a8e50aee2ec',
        'info_dict': {
            'id': '67a8e50aee2ec',
            'ext': 'mp4',
            'title': 'Big Buck Bunny 60fps 4K - Official Blender Foundation Short Film',
            'thumbnail': 'https://i.ytimg.com/vi/aqz-KE-bpKQ/hqdefault.jpg',
            'upload_date': '20141110',
            'playable_in_embed': True,
            'duration': 635,
            'view_count': int,
            'comment_count': int,
            'channel': 'Blender',
            'license': 'Creative Commons Attribution license (reuse allowed)',
            'like_count': int,
            'tags': 'count:8',
            'live_status': 'not_live',
            'channel_id': 'UCSMOQeBJ2RAnuFungnQOxLg',
            'channel_follower_count': int,
            'channel_url': 'https://www.youtube.com/channel/UCSMOQeBJ2RAnuFungnQOxLg',
            'uploader_id': '@BlenderOfficial',
            'description': 'md5:8f3ed18a53a1bb36cbb3b70a15782fd0',
            'categories': ['Film & Animation'],
            'channel_is_verified': True,
            'heatmap': 'count:100',
            'section_end': 635,
            'uploader': 'Blender',
            'timestamp': 1415628355,
            'uploader_url': 'https://www.youtube.com/@BlenderOfficial',
            'age_limit': 0,
            'section_start': 0,
            'availability': 'public',
        },
    }, {
        # cut video
        'url': 'https://ladigitale.dev/digiview/#/v/67a8e51d0dd58',
        'info_dict': {
            'id': '67a8e51d0dd58',
            'ext': 'mp4',
            'title': 'Big Buck Bunny 60fps 4K - Official Blender Foundation Short Film',
            'thumbnail': 'https://i.ytimg.com/vi/aqz-KE-bpKQ/hqdefault.jpg',
            'upload_date': '20141110',
            'playable_in_embed': True,
            'duration': 5,
            'view_count': int,
            'comment_count': int,
            'channel': 'Blender',
            'license': 'Creative Commons Attribution license (reuse allowed)',
            'like_count': int,
            'tags': 'count:8',
            'live_status': 'not_live',
            'channel_id': 'UCSMOQeBJ2RAnuFungnQOxLg',
            'channel_follower_count': int,
            'channel_url': 'https://www.youtube.com/channel/UCSMOQeBJ2RAnuFungnQOxLg',
            'uploader_id': '@BlenderOfficial',
            'description': 'md5:8f3ed18a53a1bb36cbb3b70a15782fd0',
            'categories': ['Film & Animation'],
            'channel_is_verified': True,
            'heatmap': 'count:100',
            'section_end': 10,
            'uploader': 'Blender',
            'timestamp': 1415628355,
            'uploader_url': 'https://www.youtube.com/@BlenderOfficial',
            'age_limit': 0,
            'section_start': 5,
            'availability': 'public',
        },
    }, {
        # changed title
        'url': 'https://ladigitale.dev/digiview/#/v/67a8ea5644d7a',
        'info_dict': {
            'id': '67a8ea5644d7a',
            'ext': 'mp4',
            'title': 'Big Buck Bunny (with title changed)',
            'thumbnail': 'https://i.ytimg.com/vi/aqz-KE-bpKQ/hqdefault.jpg',
            'upload_date': '20141110',
            'playable_in_embed': True,
            'duration': 5,
            'view_count': int,
            'comment_count': int,
            'channel': 'Blender',
            'license': 'Creative Commons Attribution license (reuse allowed)',
            'like_count': int,
            'tags': 'count:8',
            'live_status': 'not_live',
            'channel_id': 'UCSMOQeBJ2RAnuFungnQOxLg',
            'channel_follower_count': int,
            'channel_url': 'https://www.youtube.com/channel/UCSMOQeBJ2RAnuFungnQOxLg',
            'uploader_id': '@BlenderOfficial',
            'description': 'md5:8f3ed18a53a1bb36cbb3b70a15782fd0',
            'categories': ['Film & Animation'],
            'channel_is_verified': True,
            'heatmap': 'count:100',
            'section_end': 15,
            'uploader': 'Blender',
            'timestamp': 1415628355,
            'uploader_url': 'https://www.youtube.com/@BlenderOfficial',
            'age_limit': 0,
            'section_start': 10,
            'availability': 'public',
        },
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        video_data = self._download_json(
            'https://ladigitale.dev/digiview/inc/recuperer_video.php', video_id,
            data=urlencode_postdata({'id': video_id}))

        clip_id = video_data['videoId']
        return self.url_result(
            f'https://www.youtube.com/watch?v={clip_id}',
            YoutubeIE, video_id, url_transparent=True,
            **traverse_obj(video_data, {
                'section_start': ('debut', {int_or_none}),
                'section_end': ('fin', {int_or_none}),
                'description': ('description', {clean_html}, filter),
                'title': ('titre', {str}),
                'thumbnail': ('vignette', {url_or_none}),
                'view_count': ('vues', {int_or_none}),
            }),
        )
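The whole extraction above is a single form-encoded POST of the Digiview ID, after which the YouTube ID plus the clip's French-named fields (`debut`, `fin`, `titre`, `vignette`, `vues`) are overlaid onto the YouTube result via `url_transparent`. A rough sketch of the same request outside yt-dlp (the response shape is inferred from the traversal above):

import json
import urllib.parse
import urllib.request

data = urllib.parse.urlencode({'id': '67a8e50aee2ec'}).encode()
req = urllib.request.Request('https://ladigitale.dev/digiview/inc/recuperer_video.php', data=data)
video_data = json.load(urllib.request.urlopen(req))
# Expected shape (values illustrative):
# {'videoId': 'aqz-KE-bpKQ', 'debut': 0, 'fin': 635, 'titre': ..., 'vignette': ..., 'vues': ...}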

yt_dlp/extractor/dreisat.py

@@ -1,10 +1,24 @@
from .zdf import ZDFIE
from .zdf import ZDFBaseIE


class DreiSatIE(ZDFIE):  # XXX: Do not subclass from concrete IE
class DreiSatIE(ZDFBaseIE):
    IE_NAME = '3sat'
    _VALID_URL = r'https?://(?:www\.)?3sat\.de/(?:[^/]+/)*(?P<id>[^/?#&]+)\.html'
    _TESTS = [{
        'url': 'https://www.3sat.de/dokumentation/reise/traumziele-suedostasiens-die-philippinen-und-vietnam-102.html',
        'info_dict': {
            'id': '231124_traumziele_philippinen_und_vietnam_dokreise',
            'ext': 'mp4',
            'title': 'Traumziele Südostasiens (1/2): Die Philippinen und Vietnam',
            'description': 'md5:26329ce5197775b596773b939354079d',
            'duration': 2625.0,
            'thumbnail': 'https://www.3sat.de/assets/traumziele-suedostasiens-die-philippinen-und-vietnam-100~2400x1350?cb=1699870351148',
            'episode': 'Traumziele Südostasiens (1/2): Die Philippinen und Vietnam',
            'episode_id': 'POS_cc7ff51c-98cf-4d12-b99d-f7a551de1c95',
            'timestamp': 1738593000,
            'upload_date': '20250203',
        },
    }, {
        # Same as https://www.zdf.de/dokumentation/ab-18/10-wochen-sommer-102.html
        'url': 'https://www.3sat.de/film/ab-18/10-wochen-sommer-108.html',
        'md5': '0aff3e7bc72c8813f5e0fae333316a1d',
@@ -17,6 +31,7 @@ class DreiSatIE(ZDFIE):  # XXX: Do not subclass from concrete IE
            'timestamp': 1608604200,
            'upload_date': '20201222',
        },
        'skip': '410 Gone',
    }, {
        'url': 'https://www.3sat.de/gesellschaft/schweizweit/waidmannsheil-100.html',
        'info_dict': {
@@ -30,6 +45,7 @@ class DreiSatIE(ZDFIE):  # XXX: Do not subclass from concrete IE
        'params': {
            'skip_download': True,
        },
        'skip': '404 Not Found',
    }, {
        # Same as https://www.zdf.de/filme/filme-sonstige/der-hauptmann-112.html
        'url': 'https://www.3sat.de/film/spielfilm/der-hauptmann-100.html',
@@ -39,3 +55,14 @@ class DreiSatIE(ZDFIE):  # XXX: Do not subclass from concrete IE
        'url': 'https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)

        webpage = self._download_webpage(url, video_id, fatal=False)
        if webpage:
            player = self._extract_player(webpage, url, fatal=False)
            if player:
                return self._extract_regular(url, player, video_id)

        return self._extract_mobile(video_id)
@@ -48,32 +48,30 @@ class DropboxIE(InfoExtractor):
|
||||
webpage = self._download_webpage(url, video_id)
|
||||
fn = urllib.parse.unquote(url_basename(url))
|
||||
title = os.path.splitext(fn)[0]
|
||||
password = self.get_param('videopassword')
|
||||
content_id = None
|
||||
|
||||
for part in self._yield_decoded_parts(webpage):
|
||||
if '/sm/password' in part:
|
||||
webpage = self._download_webpage(
|
||||
update_url('https://www.dropbox.com/sm/password', query=part.partition('?')[2]), video_id)
|
||||
content_id = self._search_regex(r'content_id=([\w.+=/-]+)', part, 'content ID')
|
||||
break
|
||||
|
||||
if (self._og_search_title(webpage, default=None) == 'Dropbox - Password Required'
|
||||
or 'Enter the password for this link' in webpage):
|
||||
if password:
|
||||
response = self._download_json(
|
||||
'https://www.dropbox.com/sm/auth', video_id, 'POSTing video password',
|
||||
headers={'content-type': 'application/x-www-form-urlencoded; charset=UTF-8'},
|
||||
data=urlencode_postdata({
|
||||
'is_xhr': 'true',
|
||||
't': self._get_cookies('https://www.dropbox.com')['t'].value,
|
||||
'content_id': self._search_regex(r'content_id=([\w.+=/-]+)["\']', webpage, 'content id'),
|
||||
'password': password,
|
||||
'url': url,
|
||||
}))
|
||||
|
||||
if response.get('status') != 'authed':
|
||||
raise ExtractorError('Invalid password', expected=True)
|
||||
elif not self._get_cookies('https://dropbox.com').get('sm_auth'):
|
||||
if content_id:
|
||||
password = self.get_param('videopassword')
|
||||
if not password:
|
||||
raise ExtractorError('Password protected video, use --video-password <password>', expected=True)
|
||||
|
||||
response = self._download_json(
|
||||
'https://www.dropbox.com/sm/auth', video_id, 'POSTing video password',
|
||||
data=urlencode_postdata({
|
||||
'is_xhr': 'true',
|
||||
't': self._get_cookies('https://www.dropbox.com')['t'].value,
|
||||
'content_id': content_id,
|
||||
'password': password,
|
||||
'url': update_url(url, scheme='', netloc=''),
|
||||
}))
|
||||
if response.get('status') != 'authed':
|
||||
raise ExtractorError('Invalid password', expected=True)
|
||||
|
||||
webpage = self._download_webpage(url, video_id)
|
||||
|
||||
formats, subtitles = [], {}
|
||||
@@ -84,7 +82,7 @@ class DropboxIE(InfoExtractor):
|
||||
has_anonymous_download = self._search_regex(
|
||||
r'(anonymous:\tanonymous)', part, 'anonymous', default=False)
|
||||
transcode_url = self._search_regex(
|
||||
r'\n.(https://[^\x03\x08\x12\n]+\.m3u8)', part, 'transcode url', default=None)
|
||||
r'\n.?(https://[^\x03\x08\x12\n]+\.m3u8)', part, 'transcode url', default=None)
|
||||
if not transcode_url:
|
||||
continue
|
||||
formats, subtitles = self._extract_m3u8_formats_and_subtitles(transcode_url, video_id, 'mp4')
|
||||
|
||||
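The functional change in the last Dropbox hunk above is a single character: the byte between the newline and the transcode URL becomes optional (`.` to `.?`). A quick standalone check of that regex tweak (the sample strings below are invented):

import re

OLD = r'\n.(https://[^\x03\x08\x12\n]+\.m3u8)'
NEW = r'\n.?(https://[^\x03\x08\x12\n]+\.m3u8)'

# Invented decoded parts: one with a marker byte before the URL, one without
samples = ['\n\x05https://example.com/transcode/master.m3u8',
           '\nhttps://example.com/transcode/master.m3u8']
print([bool(re.search(OLD, s)) for s in samples])  # [True, False]
print([bool(re.search(NEW, s)) for s in samples])  # [True, True]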
@@ -135,7 +135,7 @@ class DropoutIE(InfoExtractor):
            self.raise_login_required(method='any')
            raise ExtractorError(login_err, expected=True)

        embed_url = self._search_regex(r'embed_url:\s*["\'](.+?)["\']', webpage, 'embed url')
        embed_url = self._html_search_regex(r'embed_url:\s*["\'](.+?)["\']', webpage, 'embed url')
        thumbnail = self._og_search_thumbnail(webpage)
        watch_info = get_element_by_id('watch-info', webpage) or ''
yt_dlp/extractor/drtalks.py (new file, 51 lines)
@@ -0,0 +1,51 @@
from .brightcove import BrightcoveNewIE
from .common import InfoExtractor
from ..utils import url_or_none
from ..utils.traversal import traverse_obj


class DrTalksIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?drtalks\.com/videos/(?P<id>[\w-]+)'
    _TESTS = [{
        'url': 'https://drtalks.com/videos/six-pillars-of-resilience-tools-for-managing-stress-and-flourishing/',
        'info_dict': {
            'id': '6366193757112',
            'ext': 'mp4',
            'uploader_id': '6314452011001',
            'tags': ['resilience'],
            'description': 'md5:9c6805aee237ee6de8052461855b9dda',
            'timestamp': 1734546659,
            'thumbnail': 'https://drtalks.com/wp-content/uploads/2024/12/Episode-82-Eva-Selhub-DrTalks-Thumbs.jpg',
            'title': 'Six Pillars of Resilience: Tools for Managing Stress and Flourishing',
            'duration': 2800.682,
            'upload_date': '20241218',
        },
    }, {
        'url': 'https://drtalks.com/videos/the-pcos-puzzle-mastering-metabolic-health-with-marcelle-pick/',
        'info_dict': {
            'id': '6364699891112',
            'ext': 'mp4',
            'title': 'The PCOS Puzzle: Mastering Metabolic Health with Marcelle Pick',
            'description': 'md5:e87cbe00ca50135d5702787fc4043aaa',
            'thumbnail': 'https://drtalks.com/wp-content/uploads/2024/11/Episode-34-Marcelle-Pick-OBGYN-NP-DrTalks.jpg',
            'duration': 3515.2,
            'tags': ['pcos'],
            'upload_date': '20241114',
            'timestamp': 1731592119,
            'uploader_id': '6314452011001',
        },
    }]

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        next_data = self._search_nextjs_data(webpage, video_id)['props']['pageProps']['data']['video']

        return self.url_result(
            next_data['videos']['brightcoveVideoLink'], BrightcoveNewIE, video_id,
            url_transparent=True,
            **traverse_obj(next_data, {
                'title': ('title', {str}),
                'description': ('videos', 'summury', {str}),
                'thumbnail': ('featuredImage', 'node', 'sourceUrl', {url_or_none}),
            }))
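DrTalksIE delegates the heavy lifting to `_search_nextjs_data`, which pulls the `__NEXT_DATA__` JSON blob out of the page. A rough standalone equivalent, with an invented page snippet standing in for the real site:

import json
import re

html = ('<script id="__NEXT_DATA__" type="application/json">'
        '{"props": {"pageProps": {"data": {"video": {"title": "demo"}}}}}</script>')

next_data = json.loads(re.search(
    r'<script[^>]+id=["\']__NEXT_DATA__["\'][^>]*>({.+?})</script>', html).group(1))
print(next_data['props']['pageProps']['data']['video']['title'])  # demo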
@@ -5,15 +5,16 @@ from ..utils import (
    get_element_text_and_html_by_tag,
    int_or_none,
    join_nonempty,
    parse_qs,
    str_or_none,
    try_call,
    unified_timestamp,
)
from ..utils.traversal import traverse_obj
from ..utils.traversal import traverse_obj, value


class DuoplayIE(InfoExtractor):
    _VALID_URL = r'https?://duoplay\.ee/(?P<id>\d+)/[\w-]+/?(?:\?(?:[^#]+&)?ep=(?P<ep>\d+))?'
    _VALID_URL = r'https?://duoplay\.ee/(?P<id>\d+)(?:[/?#]|$)'
    _TESTS = [{
        'note': 'Siberi võmm S02E12',
        'url': 'https://duoplay.ee/4312/siberi-vomm?ep=24',
@@ -34,15 +35,16 @@ class DuoplayIE(InfoExtractor):
            'episode_number': 12,
            'episode_id': '24',
        },
        'skip': 'No video found',
    }, {
        'note': 'Empty title',
        'url': 'https://duoplay.ee/17/uhikarotid?ep=14',
        'md5': '6aca68be71112314738dd17cced7f8bf',
        'md5': 'cba9f5dabf2582b224d80ac44fb80e47',
        'info_dict': {
            'id': '17_14',
            'ext': 'mp4',
            'title': 'Ühikarotid',
            'thumbnail': r're:https://.+\.jpg(?:\?c=\d+)?$',
            'title': 'Episode 14',
            'thumbnail': r're:https?://.+\.jpg',
            'description': 'md5:4719b418e058c209def41d48b601276e',
            'upload_date': '20100916',
            'timestamp': 1284661800,
@@ -52,6 +54,8 @@ class DuoplayIE(InfoExtractor):
            'season_number': 2,
            'episode_id': '14',
            'release_year': 2010,
            'episode': 'Episode 14',
            'episode_number': 14,
        },
    }, {
        'note': 'Movie without expiry',
@@ -68,10 +72,32 @@ class DuoplayIE(InfoExtractor):
            'timestamp': 1671054000,
            'release_year': 2018,
        },
        'skip': 'No video found',
    }, {
        'note': 'Episode url without show name',
        'url': 'https://duoplay.ee/9644?ep=185',
        'md5': '63f324b4fe2dbd8194dca16a6d52184a',
        'info_dict': {
            'id': '9644_185',
            'ext': 'mp4',
            'title': 'Episode 185',
            'thumbnail': r're:https?://.+\.jpg',
            'description': 'md5:ed25ba4e9e5d54bc291a4a0cdd241467',
            'upload_date': '20241120',
            'timestamp': 1732077000,
            'episode': 'Episode 63',
            'episode_id': '185',
            'episode_number': 63,
            'season': 'Season 2',
            'season_number': 2,
            'series': 'Telehommik',
            'series_id': '9644',
        },
    }]

    def _real_extract(self, url):
        telecast_id, episode = self._match_valid_url(url).group('id', 'ep')
        telecast_id = self._match_id(url)
        episode = traverse_obj(parse_qs(url), ('ep', 0, {int_or_none}, {str_or_none}))
        video_id = join_nonempty(telecast_id, episode, delim='_')
        webpage = self._download_webpage(url, video_id)
        video_player = try_call(lambda: extract_attributes(
@@ -79,25 +105,33 @@ class DuoplayIE(InfoExtractor):
        if not video_player or not video_player.get('manifest-url'):
            raise ExtractorError('No video found', expected=True)

        manifest_url = video_player['manifest-url']
        session_token = self._download_json(
            'https://sts.postimees.ee/session/register', video_id, 'Registering session',
            'Unable to register session', headers={
                'Accept': 'application/json',
                'X-Original-URI': manifest_url,
            })['session']

        episode_attr = self._parse_json(video_player.get(':episode') or '', video_id, fatal=False) or {}

        return {
            'id': video_id,
            'formats': self._extract_m3u8_formats(video_player['manifest-url'], video_id, 'mp4'),
            'formats': self._extract_m3u8_formats(manifest_url, video_id, 'mp4', query={'s': session_token}),
            **traverse_obj(episode_attr, {
                'title': 'title',
                'description': 'synopsis',
                'title': ('title', {str}),
                'description': ('synopsis', {str}),
                'thumbnail': ('images', 'original'),
                'timestamp': ('airtime', {lambda x: unified_timestamp(x + ' +0200')}),
                'cast': ('cast', {lambda x: x.split(', ')}),
                'cast': ('cast', filter, {lambda x: x.split(', ')}),
                'release_year': ('year', {int_or_none}),
            }),
            **(traverse_obj(episode_attr, {
                'title': (None, ('subtitle', ('episode_nr', {lambda x: f'Episode {x}' if x else None}))),
                'series': 'title',
                'title': (None, (('subtitle', {str}, filter), {value(f'Episode {episode}' if episode else None)})),
                'series': ('title', {str}),
                'series_id': ('telecast_id', {str_or_none}),
                'season_number': ('season_id', {int_or_none}),
                'episode': 'subtitle',
                'episode': ('subtitle', {str}, filter),
                'episode_number': ('episode_nr', {int_or_none}),
                'episode_id': ('episode_id', {str_or_none}),
            }, get_all=False) if episode_attr.get('category') != 'movies' else {}),
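The rewritten Duoplay `_real_extract` takes the episode number from the query string rather than from the path regex, round-tripping it through `int_or_none` and `str_or_none` so that non-numeric values like `ep=abc` are discarded. The same idea with only the standard library (yt-dlp's own `parse_qs` accepts a full URL, so the extractor skips the `urlparse` step):

from urllib.parse import parse_qs, urlparse

def episode_from_url(url):
    ep = parse_qs(urlparse(url).query).get('ep', [None])[0]
    try:
        return str(int(ep))  # keep only integer values, as a string
    except (TypeError, ValueError):
        return None

print(episode_from_url('https://duoplay.ee/9644?ep=185'))  # '185'
print(episode_from_url('https://duoplay.ee/9644?ep=abc'))  # None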
@@ -162,7 +162,7 @@ class DVTVIE(InfoExtractor):
        items = re.findall(r'(?s)playlist\.push\(({.+?})\);', webpage)
        if items:
            return self.playlist_result(
                [self._parse_video_metadata(i, video_id, timestamp) for i in items],
                (self._parse_video_metadata(i, video_id, timestamp) for i in items),
                video_id, self._html_search_meta('twitter:title', webpage))

        item = self._search_regex(
yt_dlp/extractor/eggs.py (new file, 155 lines)
@@ -0,0 +1,155 @@
import secrets

from .common import InfoExtractor
from .youtube import YoutubeIE
from ..utils import (
    int_or_none,
    parse_iso8601,
    str_or_none,
    url_or_none,
)
from ..utils.traversal import traverse_obj


class EggsBaseIE(InfoExtractor):
    _API_HEADERS = {
        'Accept': '*/*',
        'apVersion': '8.2.00',
        'deviceName': 'Android',
    }

    def _real_initialize(self):
        self._API_HEADERS['deviceId'] = secrets.token_hex(8)

    def _call_api(self, endpoint, video_id):
        return self._download_json(
            f'https://app-front-api.eggs.mu/v1/{endpoint}', video_id,
            headers=self._API_HEADERS)

    def _extract_music_info(self, data):
        if yt_url := traverse_obj(data, ('youtubeUrl', {url_or_none})):
            return self.url_result(yt_url, ie=YoutubeIE)

        artist_name = traverse_obj(data, ('artist', 'artistName', {str_or_none}))
        music_id = traverse_obj(data, ('musicId', {str_or_none}))
        webpage_url = None
        if artist_name and music_id:
            webpage_url = f'https://eggs.mu/artist/{artist_name}/song/{music_id}'

        return {
            'id': music_id,
            'vcodec': 'none',
            'webpage_url': webpage_url,
            'extractor_key': EggsIE.ie_key(),
            'extractor': EggsIE.IE_NAME,
            **traverse_obj(data, {
                'title': ('musicTitle', {str}),
                'url': ('musicDataPath', {url_or_none}),
                'uploader': ('artist', 'displayName', {str}),
                'uploader_id': ('artist', 'artistId', {str_or_none}),
                'thumbnail': ('imageDataPath', {url_or_none}),
                'view_count': ('numberOfMusicPlays', {int_or_none}),
                'like_count': ('numberOfLikes', {int_or_none}),
                'comment_count': ('numberOfComments', {int_or_none}),
                'composers': ('composer', {str}, all),
                'tags': ('tags', ..., {str}),
                'timestamp': ('releaseDate', {parse_iso8601}),
                'artist': ('artist', 'displayName', {str}),
            })}


class EggsIE(EggsBaseIE):
    IE_NAME = 'eggs:single'
    _VALID_URL = r'https?://eggs\.mu/artist/[^/?#]+/song/(?P<id>[\da-f-]+)'

    _TESTS = [{
        'url': 'https://eggs.mu/artist/32_sunny_girl/song/0e95fd1d-4d61-4d5b-8b18-6092c551da90',
        'info_dict': {
            'id': '0e95fd1d-4d61-4d5b-8b18-6092c551da90',
            'ext': 'm4a',
            'title': 'シネマと信号',
            'uploader': 'Sunny Girl',
            'thumbnail': r're:https?://.*\.jpg(?:\?.*)?$',
            'uploader_id': '1607',
            'like_count': int,
            'timestamp': 1731327327,
            'composers': ['橘高連太郎'],
            'view_count': int,
            'comment_count': int,
            'artists': ['Sunny Girl'],
            'upload_date': '20241111',
            'tags': ['SunnyGirl', 'シネマと信号'],
        },
    }, {
        'url': 'https://eggs.mu/artist/KAMO_3pband/song/1d4bc45f-1af6-47a9-8b30-a70cae350b4f',
        'info_dict': {
            'id': '80cLKA2wnoA',
            'ext': 'mp4',
            'title': 'KAMO「いい女だから」Audio',
            'uploader': 'KAMO',
            'live_status': 'not_live',
            'channel_id': 'UCsHLBw2__5Q9y55skXPotOg',
            'channel_follower_count': int,
            'description': 'md5:d260da711ecbec3e720293dc11401b87',
            'availability': 'public',
            'uploader_id': '@KAMO_band',
            'upload_date': '20240925',
            'thumbnail': 'https://i.ytimg.com/vi/80cLKA2wnoA/maxresdefault.jpg',
            'comment_count': int,
            'channel_url': 'https://www.youtube.com/channel/UCsHLBw2__5Q9y55skXPotOg',
            'view_count': int,
            'duration': 151,
            'like_count': int,
            'channel': 'KAMO',
            'playable_in_embed': True,
            'uploader_url': 'https://www.youtube.com/@KAMO_band',
            'tags': [],
            'timestamp': 1727271121,
            'age_limit': 0,
            'categories': ['People & Blogs'],
        },
        'add_ie': ['Youtube'],
        'params': {'skip_download': 'Youtube'},
    }]

    def _real_extract(self, url):
        song_id = self._match_id(url)
        json_data = self._call_api(f'musics/{song_id}', song_id)
        return self._extract_music_info(json_data)


class EggsArtistIE(EggsBaseIE):
    IE_NAME = 'eggs:artist'
    _VALID_URL = r'https?://eggs\.mu/artist/(?P<id>\w+)/?(?:[?#&]|$)'

    _TESTS = [{
        'url': 'https://eggs.mu/artist/32_sunny_girl',
        'info_dict': {
            'id': '32_sunny_girl',
            'thumbnail': 'https://image-pro.eggs.mu/profile/1607.jpeg?updated_at=2024-04-03T20%3A06%3A00%2B09%3A00',
            'description': 'Muddy Mine / 東京高田馬場CLUB PHASE / Gt.Vo 橘高 連太郎 / Ba.Cho 小野 ゆうき / Dr 大森 りゅうひこ',
            'title': 'Sunny Girl',
        },
        'playlist_mincount': 18,
    }, {
        'url': 'https://eggs.mu/artist/KAMO_3pband',
        'info_dict': {
            'id': 'KAMO_3pband',
            'description': '川崎発3ピースバンド',
            'thumbnail': 'https://image-pro.eggs.mu/profile/35217.jpeg?updated_at=2024-11-27T16%3A31%3A50%2B09%3A00',
            'title': 'KAMO',
        },
        'playlist_mincount': 2,
    }]

    def _real_extract(self, url):
        artist_id = self._match_id(url)
        artist_data = self._call_api(f'artists/{artist_id}', artist_id)
        song_data = self._call_api(f'artists/{artist_id}/musics', artist_id)
        return self.playlist_result(
            traverse_obj(song_data, ('data', ..., {dict}, {self._extract_music_info})),
            playlist_id=artist_id, **traverse_obj(artist_data, {
                'title': ('displayName', {str}),
                'description': ('profile', {str}),
                'thumbnail': ('imageDataPath', {url_or_none}),
            }))
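EggsBaseIE stamps every API call with a random `deviceId`, generated once per run in `_real_initialize`. The same pattern in isolation:

import secrets

API_HEADERS = {
    'Accept': '*/*',
    'apVersion': '8.2.00',
    'deviceName': 'Android',
}
# token_hex(8) turns 8 random bytes into 16 hex characters, e.g. '9f86d081884c7d65'
API_HEADERS['deviceId'] = secrets.token_hex(8)
print(len(API_HEADERS['deviceId']))  # 16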
@@ -50,7 +50,7 @@ class FacebookIE(InfoExtractor):
                        [^/]+/videos/(?:[^/]+/)?|
                        [^/]+/posts/|
                        events/(?:[^/]+/)?|
                        groups/[^/]+/(?:permalink|posts)/|
                        groups/[^/]+/(?:permalink|posts)/(?:[\da-f]+/)?|
                        watchparty/
                    )|
                facebook:
@@ -410,6 +410,9 @@ class FacebookIE(InfoExtractor):
            'uploader': 'Comitato Liberi Pensatori',
            'uploader_id': '100065709540881',
        },
    }, {
        'url': 'https://www.facebook.com/groups/1513990329015294/posts/d41d8cd9/2013209885760000/?app=fbl',
        'only_matching': True,
    }]
    _SUPPORTED_PAGLETS_REGEX = r'(?:pagelet_group_mall|permalink_video_pagelet|hyperfeed_story_id_[0-9a-f]+)'
    _api_config = {
@@ -563,13 +566,13 @@ class FacebookIE(InfoExtractor):
            return extract_video_data(try_get(
                js_data, lambda x: x['jsmods']['instances'], list) or [])

        def extract_dash_manifest(video, formats):
        def extract_dash_manifest(vid_data, formats, mpd_url=None):
            dash_manifest = traverse_obj(
                video, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', expected_type=str)
                vid_data, 'dash_manifest', 'playlist', 'dash_manifest_xml_string', 'manifest_xml', expected_type=str)
            if dash_manifest:
                formats.extend(self._parse_mpd_formats(
                    compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest)),
                    mpd_url=url_or_none(video.get('dash_manifest_url'))))
                    mpd_url=url_or_none(vid_data.get('dash_manifest_url')) or mpd_url))

        def process_formats(info):
            # Downloads with browser's User-Agent are rate limited. Working around
@@ -619,9 +622,12 @@ class FacebookIE(InfoExtractor):
                video = video['creation_story']
                video['owner'] = traverse_obj(video, ('short_form_video_context', 'video_owner'))
                video.update(reel_info)
            fmt_data = traverse_obj(video, ('videoDeliveryLegacyFields', {dict})) or video

            formats = []
            q = qualities(['sd', 'hd'])

            # Legacy formats extraction
            fmt_data = traverse_obj(video, ('videoDeliveryLegacyFields', {dict})) or video
            for key, format_id in (('playable_url', 'sd'), ('playable_url_quality_hd', 'hd'),
                                   ('playable_url_dash', ''), ('browser_native_hd_url', 'hd'),
                                   ('browser_native_sd_url', 'sd')):
@@ -629,7 +635,7 @@ class FacebookIE(InfoExtractor):
                if not playable_url:
                    continue
                if determine_ext(playable_url) == 'mpd':
                    formats.extend(self._extract_mpd_formats(playable_url, video_id))
                    formats.extend(self._extract_mpd_formats(playable_url, video_id, fatal=False))
                else:
                    formats.append({
                        'format_id': format_id,
@@ -638,6 +644,28 @@ class FacebookIE(InfoExtractor):
                        'url': playable_url,
                    })
            extract_dash_manifest(fmt_data, formats)

            # New videoDeliveryResponse formats extraction
            fmt_data = traverse_obj(video, ('videoDeliveryResponseFragment', 'videoDeliveryResponseResult'))
            mpd_urls = traverse_obj(fmt_data, ('dash_manifest_urls', ..., 'manifest_url', {url_or_none}))
            dash_manifests = traverse_obj(fmt_data, ('dash_manifests', lambda _, v: v['manifest_xml']))
            for idx, dash_manifest in enumerate(dash_manifests):
                extract_dash_manifest(dash_manifest, formats, mpd_url=traverse_obj(mpd_urls, idx))
            if not dash_manifests:
                # Only extract from MPD URLs if the manifests are not already provided
                for mpd_url in mpd_urls:
                    formats.extend(self._extract_mpd_formats(mpd_url, video_id, fatal=False))
            for prog_fmt in traverse_obj(fmt_data, ('progressive_urls', lambda _, v: v['progressive_url'])):
                format_id = traverse_obj(prog_fmt, ('metadata', 'quality', {str.lower}))
                formats.append({
                    'format_id': format_id,
                    # sd, hd formats w/o resolution info should be deprioritized below DASH
                    'quality': q(format_id) - 3,
                    'url': prog_fmt['progressive_url'],
                })
            for m3u8_url in traverse_obj(fmt_data, ('hls_playlist_urls', ..., 'hls_playlist_url', {url_or_none})):
                formats.extend(self._extract_m3u8_formats(m3u8_url, video_id, 'mp4', fatal=False, m3u8_id='hls'))

            if not formats:
                # Do not append false positive entry w/o any formats
                return
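The reworked `extract_dash_manifest` still has to handle manifests that arrive as URL-encoded XML strings embedded in the page JSON rather than as separately fetched documents. A stripped-down illustration of that decode step (the manifest content is invented):

import urllib.parse
import xml.etree.ElementTree as ET

encoded = '%3CMPD+type%3D%22static%22%3E%3CPeriod/%3E%3C/MPD%3E'
root = ET.fromstring(urllib.parse.unquote_plus(encoded))
print(root.tag, root.get('type'))  # MPD static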
@@ -12,7 +12,7 @@ from ..utils import (
class FirstTVIE(InfoExtractor):
    IE_NAME = '1tv'
    IE_DESC = 'Первый канал'
    _VALID_URL = r'https?://(?:www\.)?1tv\.ru/(?:[^/]+/)+(?P<id>[^/?#]+)'
    _VALID_URL = r'https?://(?:www\.)?(?:sport)?1tv\.ru/(?:[^/?#]+/)+(?P<id>[^/?#]+)'

    _TESTS = [{
        # single format
@@ -52,6 +52,9 @@ class FirstTVIE(InfoExtractor):
    }, {
        'url': 'http://www.1tv.ru/shows/tochvtoch-supersezon/vystupleniya/evgeniy-dyatlov-vladimir-vysockiy-koni-priveredlivye-toch-v-toch-supersezon-fragment-vypuska-ot-06-11-2016',
        'only_matching': True,
    }, {
        'url': 'https://www.sport1tv.ru/sport/chempionat-rossii-po-figurnomu-kataniyu-2025',
        'only_matching': True,
    }]

    def _real_extract(self, url):
@@ -1,3 +1,4 @@
import json
import re
import urllib.parse

@@ -5,8 +6,10 @@ from .common import InfoExtractor
from .dailymotion import DailymotionIE
from ..networking import HEADRequest
from ..utils import (
    ExtractorError,
    clean_html,
    determine_ext,
    extract_attributes,
    filter_dict,
    format_field,
    int_or_none,
@@ -16,7 +19,7 @@ from ..utils import (
    unsmuggle_url,
    url_or_none,
)
from ..utils.traversal import traverse_obj
from ..utils.traversal import find_element, traverse_obj


class FranceTVBaseInfoExtractor(InfoExtractor):
@@ -29,6 +32,7 @@ class FranceTVBaseInfoExtractor(InfoExtractor):


class FranceTVIE(InfoExtractor):
    IE_NAME = 'francetv'
    _VALID_URL = r'francetv:(?P<id>[^@#]+)'
    _GEO_COUNTRIES = ['FR']
    _GEO_BYPASS = False
@@ -248,18 +252,19 @@ class FranceTVIE(InfoExtractor):


class FranceTVSiteIE(FranceTVBaseInfoExtractor):
    IE_NAME = 'francetv:site'
    _VALID_URL = r'https?://(?:(?:www\.)?france\.tv|mobile\.france\.tv)/(?:[^/]+/)*(?P<id>[^/]+)\.html'

    _TESTS = [{
        'url': 'https://www.france.tv/france-2/13h15-le-dimanche/140921-les-mysteres-de-jesus.html',
        'info_dict': {
            'id': 'c5bda21d-2c6f-4470-8849-3d8327adb2ba',
            'id': 'ec217ecc-0733-48cf-ac06-af1347b849d1',  # old: c5bda21d-2c6f-4470-8849-3d8327adb2ba'
            'ext': 'mp4',
            'title': '13h15, le dimanche... - Les mystères de Jésus',
            'timestamp': 1514118300,
            'duration': 2880,
            'timestamp': 1502623500,
            'duration': 2580,
            'thumbnail': r're:^https?://.*\.jpg$',
            'upload_date': '20171224',
            'upload_date': '20170813',
        },
        'params': {
            'skip_download': True,
@@ -282,6 +287,7 @@ class FranceTVSiteIE(FranceTVBaseInfoExtractor):
            'thumbnail': r're:^https?://.*\.jpg$',
            'duration': 1441,
        },
        'skip': 'No longer available',
    }, {
        # geo-restricted livestream (workflow == 'token-akamai')
        'url': 'https://www.france.tv/france-4/direct.html',
@@ -336,19 +342,33 @@ class FranceTVSiteIE(FranceTVBaseInfoExtractor):
        'only_matching': True,
    }]

    # XXX: For parsing next.js v15+ data; see also yt_dlp.extractor.goplay
    def _find_json(self, s):
        return self._search_json(
            r'\w+\s*:\s*', s, 'next js data', None, contains_pattern=r'\[(?s:.+)\]', default=None)

    def _real_extract(self, url):
        display_id = self._match_id(url)

        webpage = self._download_webpage(url, display_id)

        video_id = self._search_regex(
            r'(?:data-main-video\s*=|videoId["\']?\s*[:=])\s*(["\'])(?P<id>(?:(?!\1).)+)\1',
            webpage, 'video id', default=None, group='id')
        nextjs_data = traverse_obj(
            re.findall(r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>', webpage),
            (..., {json.loads}, ..., {self._find_json}, ..., 'children', ..., ..., 'children', ..., ..., 'children'))

        if traverse_obj(nextjs_data, (..., ..., 'children', ..., 'isLive', {bool}, any)):
            # For livestreams we need the id of the stream instead of the currently airing episode id
            video_id = traverse_obj(nextjs_data, (
                ..., ..., 'children', ..., 'children', ..., 'children', ..., 'children', ..., ...,
                'children', ..., ..., 'children', ..., ..., 'children', (..., (..., ...)),
                'options', 'id', {str}, any))
        else:
            video_id = traverse_obj(nextjs_data, (
                ..., ..., ..., 'children',
                lambda _, v: v['video']['url'] == urllib.parse.urlparse(url).path,
                'video', ('playerReplayId', 'siId'), {str}, any))

        if not video_id:
            video_id = self._html_search_regex(
                r'(?:href=|player\.setVideo\(\s*)"http://videos?\.francetv\.fr/video/([^@"]+@[^"]+)"',
                webpage, 'video ID')
            raise ExtractorError('Unable to extract video ID')

        return self._make_url_result(video_id, url=url)

@@ -441,11 +461,16 @@ class FranceTVInfoIE(FranceTVBaseInfoExtractor):
                self.url_result(dailymotion_url, DailymotionIE.ie_key())
                for dailymotion_url in dailymotion_urls])

        video_id = self._search_regex(
            (r'player\.load[^;]+src:\s*["\']([^"\']+)',
             r'id-video=([^@]+@[^"]+)',
             r'<a[^>]+href="(?:https?:)?//videos\.francetv\.fr/video/([^@]+@[^"]+)"',
             r'(?:data-id|<figure[^<]+\bid)=["\']([\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'),
            webpage, 'video id')
        video_id = (
            traverse_obj(webpage, (
                {find_element(tag='button', attr='data-cy', value='francetv-player-wrapper', html=True)},
                {extract_attributes}, 'id'))
            or self._search_regex(
                (r'player\.load[^;]+src:\s*["\']([^"\']+)',
                 r'id-video=([^@]+@[^"]+)',
                 r'<a[^>]+href="(?:https?:)?//videos\.francetv\.fr/video/([^@]+@[^"]+)"',
                 r'(?:data-id|<figure[^<]+\bid)=["\']([\da-f]{8}-[\da-f]{4}-[\da-f]{4}-[\da-f]{4}-[\da-f]{12})'),
                webpage, 'video id')
        )

        return self._make_url_result(video_id, url=url)
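The rewritten `FranceTVSiteIE._real_extract` reassembles the Next.js v15 flight payload that pages stream through `self.__next_f.push([...])` calls, then walks it for the player options. A minimal sketch of the collection step (the page snippet is invented; the real payload needs the further `_find_json` pass shown above):

import json
import re

html = '<script>self.__next_f.push([1, "a:[\\"payload\\"]"])</script>'
chunks = re.findall(
    r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>', html)
for chunk in map(json.loads, chunks):
    print(chunk)  # [1, 'a:["payload"]'] -- the string element still embeds JSON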
@@ -1,349 +0,0 @@
import random
import re
import string

from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
    ExtractorError,
    determine_ext,
    int_or_none,
    join_nonempty,
    js_to_json,
    make_archive_id,
    orderedSet,
    qualities,
    str_or_none,
    traverse_obj,
    try_get,
    urlencode_postdata,
)


class FunimationBaseIE(InfoExtractor):
    _NETRC_MACHINE = 'funimation'
    _REGION = None
    _TOKEN = None

    def _get_region(self):
        region_cookie = self._get_cookies('https://www.funimation.com').get('region')
        region = region_cookie.value if region_cookie else self.get_param('geo_bypass_country')
        return region or traverse_obj(
            self._download_json(
                'https://geo-service.prd.funimationsvc.com/geo/v1/region/check', None, fatal=False,
                note='Checking geo-location', errnote='Unable to fetch geo-location information'),
            'region') or 'US'

    def _perform_login(self, username, password):
        if self._TOKEN:
            return
        try:
            data = self._download_json(
                'https://prod-api-funimationnow.dadcdigital.com/api/auth/login/',
                None, 'Logging in', data=urlencode_postdata({
                    'username': username,
                    'password': password,
                }))
            FunimationBaseIE._TOKEN = data['token']
        except ExtractorError as e:
            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
                error = self._parse_json(e.cause.response.read().decode(), None)['error']
                raise ExtractorError(error, expected=True)
            raise


class FunimationPageIE(FunimationBaseIE):
    IE_NAME = 'funimation:page'
    _VALID_URL = r'https?://(?:www\.)?funimation(?:\.com|now\.uk)/(?:(?P<lang>[^/]+)/)?(?:shows|v)/(?P<show>[^/]+)/(?P<episode>[^/?#&]+)'

    _TESTS = [{
        'url': 'https://www.funimation.com/shows/attack-on-titan-junior-high/broadcast-dub-preview/',
        'info_dict': {
            'id': '210050',
            'ext': 'mp4',
            'title': 'Broadcast Dub Preview',
            # Other metadata is tested in FunimationIE
        },
        'params': {
            'skip_download': 'm3u8',
        },
        'add_ie': ['Funimation'],
    }, {
        # Not available in US
        'url': 'https://www.funimation.com/shows/hacksign/role-play/',
        'only_matching': True,
    }, {
        # with lang code
        'url': 'https://www.funimation.com/en/shows/hacksign/role-play/',
        'only_matching': True,
    }, {
        'url': 'https://www.funimationnow.uk/shows/puzzle-dragons-x/drop-impact/simulcast/',
        'only_matching': True,
    }, {
        'url': 'https://www.funimation.com/v/a-certain-scientific-railgun/super-powered-level-5',
        'only_matching': True,
    }]

    def _real_initialize(self):
        if not self._REGION:
            FunimationBaseIE._REGION = self._get_region()

    def _real_extract(self, url):
        locale, show, episode = self._match_valid_url(url).group('lang', 'show', 'episode')

        video_id = traverse_obj(self._download_json(
            f'https://title-api.prd.funimationsvc.com/v1/shows/{show}/episodes/{episode}',
            f'{show}_{episode}', query={
                'deviceType': 'web',
                'region': self._REGION,
                'locale': locale or 'en',
            }), ('videoList', ..., 'id'), get_all=False)

        return self.url_result(f'https://www.funimation.com/player/{video_id}', FunimationIE.ie_key(), video_id)


class FunimationIE(FunimationBaseIE):
    _VALID_URL = r'https?://(?:www\.)?funimation\.com/player/(?P<id>\d+)'

    _TESTS = [{
        'url': 'https://www.funimation.com/player/210051',
        'info_dict': {
            'id': '210050',
            'display_id': 'broadcast-dub-preview',
            'ext': 'mp4',
            'title': 'Broadcast Dub Preview',
            'thumbnail': r're:https?://.*\.(?:jpg|png)',
            'episode': 'Broadcast Dub Preview',
            'episode_id': '210050',
            'season': 'Extras',
            'season_id': '166038',
            'season_number': 99,
            'series': 'Attack on Titan: Junior High',
            'description': '',
            'duration': 155,
        },
        'params': {
            'skip_download': 'm3u8',
        },
    }, {
        'note': 'player_id should be extracted with the relevent compat-opt',
        'url': 'https://www.funimation.com/player/210051',
        'info_dict': {
            'id': '210051',
            'display_id': 'broadcast-dub-preview',
            'ext': 'mp4',
            'title': 'Broadcast Dub Preview',
            'thumbnail': r're:https?://.*\.(?:jpg|png)',
            'episode': 'Broadcast Dub Preview',
            'episode_id': '210050',
            'season': 'Extras',
            'season_id': '166038',
            'season_number': 99,
            'series': 'Attack on Titan: Junior High',
            'description': '',
            'duration': 155,
        },
        'params': {
            'skip_download': 'm3u8',
            'compat_opts': ['seperate-video-versions'],
        },
    }]

    @staticmethod
    def _get_experiences(episode):
        for lang, lang_data in episode.get('languages', {}).items():
            for video_data in lang_data.values():
                for version, f in video_data.items():
                    yield lang, version.title(), f

    def _get_episode(self, webpage, experience_id=None, episode_id=None, fatal=True):
        """ Extract the episode, season and show objects given either episode/experience id """
        show = self._parse_json(
            self._search_regex(
                r'show\s*=\s*({.+?})\s*;', webpage, 'show data', fatal=fatal),
            experience_id, transform_source=js_to_json, fatal=fatal) or []
        for season in show.get('seasons', []):
            for episode in season.get('episodes', []):
                if episode_id is not None:
                    if str(episode.get('episodePk')) == episode_id:
                        return episode, season, show
                    continue
                for _, _, f in self._get_experiences(episode):
                    if f.get('experienceId') == experience_id:
                        return episode, season, show
        if fatal:
            raise ExtractorError('Unable to find episode information')
        else:
            self.report_warning('Unable to find episode information')
        return {}, {}, {}

    def _real_extract(self, url):
        initial_experience_id = self._match_id(url)
        webpage = self._download_webpage(
            url, initial_experience_id, note=f'Downloading player webpage for {initial_experience_id}')
        episode, season, show = self._get_episode(webpage, experience_id=int(initial_experience_id))
        episode_id = str(episode['episodePk'])
        display_id = episode.get('slug') or episode_id

        formats, subtitles, thumbnails, duration = [], {}, [], 0
        requested_languages, requested_versions = self._configuration_arg('language'), self._configuration_arg('version')
        language_preference = qualities((requested_languages or [''])[::-1])
        source_preference = qualities((requested_versions or ['uncut', 'simulcast'])[::-1])
        only_initial_experience = 'seperate-video-versions' in self.get_param('compat_opts', [])

        for lang, version, fmt in self._get_experiences(episode):
            experience_id = str(fmt['experienceId'])
            if (only_initial_experience and experience_id != initial_experience_id
                    or requested_languages and lang.lower() not in requested_languages
                    or requested_versions and version.lower() not in requested_versions):
                continue
            thumbnails.append({'url': fmt.get('poster')})
            duration = max(duration, fmt.get('duration', 0))
            format_name = f'{version} {lang} ({experience_id})'
            self.extract_subtitles(
                subtitles, experience_id, display_id=display_id, format_name=format_name,
                episode=episode if experience_id == initial_experience_id else episode_id)

            headers = {}
            if self._TOKEN:
                headers['Authorization'] = f'Token {self._TOKEN}'
            page = self._download_json(
                f'https://www.funimation.com/api/showexperience/{experience_id}/',
                display_id, headers=headers, expected_status=403, query={
                    'pinst_id': ''.join(random.choices(string.digits + string.ascii_letters, k=8)),
                }, note=f'Downloading {format_name} JSON')
            sources = page.get('items') or []
            if not sources:
                error = try_get(page, lambda x: x['errors'][0], dict)
                if error:
                    self.report_warning('{} said: Error {} - {}'.format(
                        self.IE_NAME, error.get('code'), error.get('detail') or error.get('title')))
                else:
                    self.report_warning('No sources found for format')

            current_formats = []
            for source in sources:
                source_url = source.get('src')
                source_type = source.get('videoType') or determine_ext(source_url)
                if source_type == 'm3u8':
                    current_formats.extend(self._extract_m3u8_formats(
                        source_url, display_id, 'mp4', m3u8_id='{}-{}'.format(experience_id, 'hls'), fatal=False,
                        note=f'Downloading {format_name} m3u8 information'))
                else:
                    current_formats.append({
                        'format_id': f'{experience_id}-{source_type}',
                        'url': source_url,
                    })
            for f in current_formats:
                # TODO: Convert language to code
                f.update({
                    'language': lang,
                    'format_note': version,
                    'source_preference': source_preference(version.lower()),
                    'language_preference': language_preference(lang.lower()),
                })
            formats.extend(current_formats)
        if not formats and (requested_languages or requested_versions):
            self.raise_no_formats(
                'There are no video formats matching the requested languages/versions', expected=True, video_id=display_id)
        self._remove_duplicate_formats(formats)

        return {
            'id': episode_id,
            '_old_archive_ids': [make_archive_id(self, initial_experience_id)],
            'display_id': display_id,
            'duration': duration,
            'title': episode['episodeTitle'],
            'description': episode.get('episodeSummary'),
            'episode': episode.get('episodeTitle'),
            'episode_number': int_or_none(episode.get('episodeId')),
            'episode_id': episode_id,
            'season': season.get('seasonTitle'),
            'season_number': int_or_none(season.get('seasonId')),
            'season_id': str_or_none(season.get('seasonPk')),
            'series': show.get('showTitle'),
            'formats': formats,
            'thumbnails': thumbnails,
            'subtitles': subtitles,
            '_format_sort_fields': ('lang', 'source'),
        }

    def _get_subtitles(self, subtitles, experience_id, episode, display_id, format_name):
        if isinstance(episode, str):
            webpage = self._download_webpage(
                f'https://www.funimation.com/player/{experience_id}/', display_id,
                fatal=False, note=f'Downloading player webpage for {format_name}')
            episode, _, _ = self._get_episode(webpage, episode_id=episode, fatal=False)

        for _, version, f in self._get_experiences(episode):
            for source in f.get('sources'):
                for text_track in source.get('textTracks'):
                    if not text_track.get('src'):
                        continue
                    sub_type = text_track.get('type').upper()
                    sub_type = sub_type if sub_type != 'FULL' else None
                    current_sub = {
                        'url': text_track['src'],
                        'name': join_nonempty(version, text_track.get('label'), sub_type, delim=' '),
                    }
                    lang = join_nonempty(text_track.get('language', 'und'),
                                         version if version != 'Simulcast' else None,
                                         sub_type, delim='_')
                    if current_sub not in subtitles.get(lang, []):
                        subtitles.setdefault(lang, []).append(current_sub)
        return subtitles


class FunimationShowIE(FunimationBaseIE):
    IE_NAME = 'funimation:show'
    _VALID_URL = r'(?P<url>https?://(?:www\.)?funimation(?:\.com|now\.uk)/(?P<locale>[^/]+)?/?shows/(?P<id>[^/?#&]+))/?(?:[?#]|$)'

    _TESTS = [{
        'url': 'https://www.funimation.com/en/shows/sk8-the-infinity',
        'info_dict': {
            'id': '1315000',
            'title': 'SK8 the Infinity',
        },
        'playlist_count': 13,
        'params': {
            'skip_download': True,
        },
    }, {
        # without lang code
        'url': 'https://www.funimation.com/shows/ouran-high-school-host-club/',
        'info_dict': {
            'id': '39643',
            'title': 'Ouran High School Host Club',
        },
        'playlist_count': 26,
        'params': {
            'skip_download': True,
        },
    }]

    def _real_initialize(self):
        if not self._REGION:
            FunimationBaseIE._REGION = self._get_region()

    def _real_extract(self, url):
        base_url, locale, display_id = self._match_valid_url(url).groups()

        show_info = self._download_json(
            'https://title-api.prd.funimationsvc.com/v2/shows/{}?region={}&deviceType=web&locale={}'.format(
                display_id, self._REGION, locale or 'en'), display_id)
        items_info = self._download_json(
            'https://prod-api-funimationnow.dadcdigital.com/api/funimation/episodes/?limit=99999&title_id={}'.format(
                show_info.get('id')), display_id)

        vod_items = traverse_obj(items_info, ('items', ..., lambda k, _: re.match(r'(?i)mostRecent[AS]vod', k), 'item'))

        return {
            '_type': 'playlist',
            'id': str_or_none(show_info['id']),
            'title': show_info['name'],
            'entries': orderedSet(
                self.url_result(
                    '{}/{}'.format(base_url, vod_item.get('episodeSlug')), FunimationPageIE.ie_key(),
                    vod_item.get('episodeId'), vod_item.get('episodeName'))
                for vod_item in sorted(vod_items, key=lambda x: x.get('episodeOrder', -1))),
        }
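The deleted FunimationIE used yt-dlp's `qualities()` helper to turn the user's requested language/version lists into sort preferences: later entries in the list passed to `qualities()` score higher, hence the `[::-1]` reversal so the first requested item wins. With the helper directly:

from yt_dlp.utils import qualities

requested_versions = ['uncut', 'simulcast']
source_preference = qualities(requested_versions[::-1])
# 'uncut' was requested first, so it must rank highest; unknown values get -1
print(source_preference('uncut'), source_preference('simulcast'), source_preference('dub'))
# 1 0 -1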
yt_dlp/extractor/gamedevtv.py (new file, 141 lines)
@@ -0,0 +1,141 @@
import json

from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
    ExtractorError,
    clean_html,
    int_or_none,
    join_nonempty,
    parse_iso8601,
    str_or_none,
    url_or_none,
)
from ..utils.traversal import traverse_obj


class GameDevTVDashboardIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?gamedev\.tv/dashboard/courses/(?P<course_id>\d+)(?:/(?P<lecture_id>\d+))?'
    _NETRC_MACHINE = 'gamedevtv'
    _TESTS = [{
        'url': 'https://www.gamedev.tv/dashboard/courses/25',
        'info_dict': {
            'id': '25',
            'title': 'Complete Blender Creator 3: Learn 3D Modelling for Beginners',
            'tags': ['blender', 'course', 'all', 'box modelling', 'sculpting'],
            'categories': ['Blender', '3D Art'],
            'thumbnail': 'https://gamedev-files.b-cdn.net/courses/qisc9pmu1jdc.jpg',
            'upload_date': '20220516',
            'timestamp': 1652694420,
            'modified_date': '20241027',
            'modified_timestamp': 1730049658,
        },
        'playlist_count': 100,
    }, {
        'url': 'https://www.gamedev.tv/dashboard/courses/63/2279',
        'info_dict': {
            'id': 'df04f4d8-68a4-4756-a71b-9ca9446c3a01',
            'ext': 'mp4',
            'modified_timestamp': 1701695752,
            'upload_date': '20230504',
            'episode': 'MagicaVoxel Community Course Introduction',
            'series_id': '63',
            'title': 'MagicaVoxel Community Course Introduction',
            'timestamp': 1683195397,
            'modified_date': '20231204',
            'categories': ['3D Art', 'MagicaVoxel'],
            'season': 'MagicaVoxel Community Course',
            'tags': ['MagicaVoxel', 'all', 'course'],
            'series': 'MagicaVoxel 3D Art Mini Course',
            'duration': 1405,
            'episode_number': 1,
            'season_number': 1,
            'season_id': '219',
            'description': 'md5:a378738c5bbec1c785d76c067652d650',
            'display_id': '63-219-2279',
            'alt_title': '1_CC_MVX MagicaVoxel Community Course Introduction.mp4',
            'thumbnail': 'https://vz-23691c65-6fa.b-cdn.net/df04f4d8-68a4-4756-a71b-9ca9446c3a01/thumbnail.jpg',
        },
    }]
    _API_HEADERS = {}

    def _perform_login(self, username, password):
        try:
            response = self._download_json(
                'https://api.gamedev.tv/api/students/login', None, 'Logging in',
                headers={'Content-Type': 'application/json'},
                data=json.dumps({
                    'email': username,
                    'password': password,
                    'cart_items': [],
                }).encode())
        except ExtractorError as e:
            if isinstance(e.cause, HTTPError) and e.cause.status == 401:
                raise ExtractorError('Invalid username/password', expected=True)
            raise

        self._API_HEADERS['Authorization'] = f'{response["token_type"]} {response["access_token"]}'

    def _real_initialize(self):
        if not self._API_HEADERS.get('Authorization'):
            self.raise_login_required(
                'This content is only available with purchase', method='password')

    def _entries(self, data, course_id, course_info, selected_lecture):
        for section in traverse_obj(data, ('sections', ..., {dict})):
            section_info = traverse_obj(section, {
                'season_id': ('id', {str_or_none}),
                'season': ('title', {str}),
                'season_number': ('order', {int_or_none}),
            })
            for lecture in traverse_obj(section, ('lectures', lambda _, v: url_or_none(v['video']['playListUrl']))):
                if selected_lecture and str(lecture.get('id')) != selected_lecture:
                    continue
                display_id = join_nonempty(course_id, section_info.get('season_id'), lecture.get('id'))
                formats, subtitles = self._extract_m3u8_formats_and_subtitles(
                    lecture['video']['playListUrl'], display_id, 'mp4', m3u8_id='hls')
                yield {
                    **course_info,
                    **section_info,
                    'id': display_id,  # fallback
                    'display_id': display_id,
                    'formats': formats,
                    'subtitles': subtitles,
                    'series': course_info.get('title'),
                    'series_id': course_id,
                    **traverse_obj(lecture, {
                        'id': ('video', 'guid', {str}),
                        'title': ('title', {str}),
                        'alt_title': ('video', 'title', {str}),
                        'description': ('description', {clean_html}),
                        'episode': ('title', {str}),
                        'episode_number': ('order', {int_or_none}),
                        'duration': ('video', 'duration_in_sec', {int_or_none}),
                        'timestamp': ('video', 'created_at', {parse_iso8601}),
                        'modified_timestamp': ('video', 'updated_at', {parse_iso8601}),
                        'thumbnail': ('video', 'thumbnailUrl', {url_or_none}),
                    }),
                }

    def _real_extract(self, url):
        course_id, lecture_id = self._match_valid_url(url).group('course_id', 'lecture_id')
        data = self._download_json(
            f'https://api.gamedev.tv/api/courses/my/{course_id}', course_id,
            headers=self._API_HEADERS)['data']

        course_info = traverse_obj(data, {
            'title': ('title', {str}),
            'tags': ('tags', ..., 'name', {str}),
            'categories': ('categories', ..., 'title', {str}),
            'timestamp': ('created_at', {parse_iso8601}),
            'modified_timestamp': ('updated_at', {parse_iso8601}),
            'thumbnail': ('image', {url_or_none}),
        })

        entries = self._entries(data, course_id, course_info, lecture_id)
        if lecture_id:
            lecture = next(entries, None)
            if not lecture:
                raise ExtractorError('Lecture not found')
            return lecture
        return self.playlist_result(entries, course_id, **course_info)
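GameDevTV's login is a plain JSON POST that returns a bearer token, which `_perform_login` then attaches to every course request. A standalone sketch of the same exchange (the endpoint is taken from the extractor above; the credentials are placeholders, so the call will fail without a real account):

import json
import urllib.request

payload = json.dumps({
    'email': 'user@example.com',  # placeholder credentials
    'password': 'hunter2',
    'cart_items': [],
}).encode()
req = urllib.request.Request(
    'https://api.gamedev.tv/api/students/login', data=payload,
    headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req) as resp:
    auth = json.load(resp)
# Subsequent API calls then send e.g. 'Authorization: Bearer <token>'
headers = {'Authorization': f'{auth["token_type"]} {auth["access_token"]}'}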
@@ -293,6 +293,19 @@ class GenericIE(InfoExtractor):
            'timestamp': 1378272859.0,
        },
    },
    # Live DASH MPD
    {
        'url': 'https://livesim2.dashif.org/livesim2/ato_10/testpic_2s/Manifest.mpd',
        'info_dict': {
            'id': 'Manifest',
            'ext': 'mp4',
            'title': r're:Manifest \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
            'live_status': 'is_live',
        },
        'params': {
            'skip_download': 'livestream',
        },
    },
    # m3u8 served with Content-Type: audio/x-mpegURL; charset=utf-8
    {
        'url': 'http://once.unicornmedia.com/now/master/playlist/bb0b18ba-64f5-4b1b-a29f-0ac252f06b68/77a785f3-5188-4806-b788-0893a61634ed/93677179-2d99-4ef4-9e17-fe70d49abfbf/content.m3u8',
@@ -2436,10 +2449,9 @@ class GenericIE(InfoExtractor):
        subtitles = {}
        if format_id.endswith('mpegurl') or ext == 'm3u8':
            formats, subtitles = self._extract_m3u8_formats_and_subtitles(url, video_id, 'mp4', headers=headers)
        elif format_id.endswith(('mpd', 'dash+xml')) or ext == 'mpd':
            formats, subtitles = self._extract_mpd_formats_and_subtitles(url, video_id, headers=headers)
        elif format_id == 'f4m' or ext == 'f4m':
            formats = self._extract_f4m_formats(url, video_id, headers=headers)
        # Don't check for DASH/mpd here, do it later w/ first_bytes. Same number of requests either way
        else:
            formats = [{
                'format_id': format_id,
@@ -2521,6 +2533,7 @@ class GenericIE(InfoExtractor):
                doc,
                mpd_base_url=full_response.url.rpartition('/')[0],
                mpd_url=url)
            info_dict['live_status'] = 'is_live' if doc.get('type') == 'dynamic' else None
            self._extra_manifest_info(info_dict, url)
            self.report_detected('DASH manifest')
            return info_dict
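DASH marks a livestream by setting `type="dynamic"` on the MPD root element, which is exactly what the added `live_status` line keys off. Checking the attribute directly (manifest snippet invented):

import xml.etree.ElementTree as ET

mpd = '<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="dynamic" minimumUpdatePeriod="PT2S"/>'
doc = ET.fromstring(mpd)
print('is_live' if doc.get('type') == 'dynamic' else None)  # is_live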
@@ -1,32 +1,48 @@
import base64
import hashlib
import json
import random
import re
import uuid

from .common import InfoExtractor
from ..networking import HEADRequest
from ..utils import (
    ExtractorError,
    determine_ext,
    filter_dict,
    float_or_none,
    int_or_none,
    orderedSet,
    str_or_none,
    try_get,
    url_or_none,
)
from ..utils.traversal import subs_list_to_dict, traverse_obj


class GloboIE(InfoExtractor):
    _VALID_URL = r'(?:globo:|https?://.+?\.globo\.com/(?:[^/]+/)*(?:v/(?:[^/]+/)?|videos/))(?P<id>\d{7,})'
    _VALID_URL = r'(?:globo:|https?://[^/?#]+?\.globo\.com/(?:[^/?#]+/))(?P<id>\d{7,})'
    _NETRC_MACHINE = 'globo'
    _VIDEO_VIEW = '''
        query getVideoView($videoId: ID!) {
            video(id: $videoId) {
                duration
                description
                relatedEpisodeNumber
                relatedSeasonNumber
                headline
                title {
                    originProgramId
                    headline
                }
            }
        }
    '''
    _TESTS = [{
        'url': 'http://g1.globo.com/carros/autoesporte/videos/t/exclusivos-do-g1/v/mercedes-benz-gla-passa-por-teste-de-colisao-na-europa/3607726/',
        'url': 'https://globoplay.globo.com/v/3607726/',
        'info_dict': {
            'id': '3607726',
            'ext': 'mp4',
            'title': 'Mercedes-Benz GLA passa por teste de colisão na Europa',
            'duration': 103.204,
            'uploader': 'G1',
            'uploader_id': '2015',
            'uploader': 'G1 ao vivo',
            'uploader_id': '4209',
        },
        'params': {
            'skip_download': True,
@@ -38,39 +54,36 @@ class GloboIE(InfoExtractor):
            'ext': 'mp4',
            'title': 'Acidentes de trânsito estão entre as maiores causas de queda de energia em SP',
            'duration': 137.973,
            'uploader': 'Rede Globo',
            'uploader_id': '196',
            'uploader': 'Bom Dia Brasil',
            'uploader_id': '810',
        },
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'http://canalbrasil.globo.com/programas/sangue-latino/videos/3928201.html',
        'only_matching': True,
    }, {
        'url': 'http://globosatplay.globo.com/globonews/v/4472924/',
        'only_matching': True,
    }, {
        'url': 'http://globotv.globo.com/t/programa/v/clipe-sexo-e-as-negas-adeus/3836166/',
        'only_matching': True,
    }, {
        'url': 'http://globotv.globo.com/canal-brasil/sangue-latino/t/todos-os-videos/v/ator-e-diretor-argentino-ricado-darin-fala-sobre-utopias-e-suas-perdas/3928201/',
        'only_matching': True,
    }, {
        'url': 'http://canaloff.globo.com/programas/desejar-profundo/videos/4518560.html',
        'only_matching': True,
    }, {
        'url': 'globo:3607726',
        'only_matching': True,
    }, {
        'url': 'https://globoplay.globo.com/v/10248083/',
    },
    {
        'url': 'globo:8013907',  # needs subscription to globoplay
        'info_dict': {
            'id': '10248083',
            'id': '8013907',
            'ext': 'mp4',
            'title': 'Melhores momentos: Equador 1 x 1 Brasil pelas Eliminatórias da Copa do Mundo 2022',
            'duration': 530.964,
            'uploader': 'SporTV',
            'uploader_id': '698',
            'title': 'Capítulo de 14⧸08⧸1989',
            'episode_number': 1,
        },
        'params': {
            'skip_download': True,
        },
    },
    {
        'url': 'globo:12824146',
        'info_dict': {
            'id': '12824146',
            'ext': 'mp4',
            'title': 'Acordo de damas',
            'episode_number': 1,
            'season_number': 2,
        },
        'params': {
            'skip_download': True,
@@ -80,98 +93,70 @@ class GloboIE(InfoExtractor):
    def _real_extract(self, url):
        video_id = self._match_id(url)

        self._request_webpage(
            HEADRequest('https://globo-ab.globo.com/v2/selected-alternatives?experiments=player-isolated-experiment-02&skipImpressions=true'),
            video_id, 'Getting cookies')

        video = self._download_json(
            f'http://api.globovideos.com/videos/{video_id}/playlist',
            video_id)['videos'][0]
        if not self.get_param('allow_unplayable_formats') and video.get('encrypted') is True:
            self.report_drm(video_id)

        title = video['title']
        info = self._download_json(
            'https://cloud-jarvis.globo.com/graphql', video_id,
            query={
                'operationName': 'getVideoView',
                'variables': json.dumps({'videoId': video_id}),
                'query': self._VIDEO_VIEW,
            }, headers={
                'content-type': 'application/json',
                'x-platform-id': 'web',
                'x-device-id': 'desktop',
                'x-client-version': '2024.12-5',
            })['data']['video']

        formats = []
        security = self._download_json(
            'https://playback.video.globo.com/v2/video-session', video_id, f'Downloading security hash for {video_id}',
            headers={'content-type': 'application/json'}, data=json.dumps({
                'player_type': 'desktop',
        video = self._download_json(
            'https://playback.video.globo.com/v4/video-session', video_id,
            f'Downloading resource info for {video_id}',
            headers={'Content-Type': 'application/json'},
            data=json.dumps(filter_dict({
                'player_type': 'mirakulo_8k_hdr',
                'video_id': video_id,
                'quality': 'max',
                'content_protection': 'widevine',
                'vsid': '581b986b-4c40-71f0-5a58-803e579d5fa2',
                'tz': '-3.0:00',
            }).encode())
                'vsid': f'{uuid.uuid4()}',
                'consumption': 'streaming',
                'capabilities': {'low_latency': True},
                'tz': '-03:00',
                'Authorization': try_get(self._get_cookies('https://globo.com'),
                                         lambda x: f'Bearer {x["GLBID"].value}'),
                'version': 1,
            })).encode())

        self._request_webpage(HEADRequest(security['sources'][0]['url_template']), video_id, 'Getting locksession cookie')
        if traverse_obj(video, ('resource', 'drm_protection_enabled', {bool})):
            self.report_drm(video_id)

        security_hash = security['sources'][0]['token']
        if not security_hash:
            message = security.get('message')
            if message:
                raise ExtractorError(
                    f'{self.IE_NAME} returned error: {message}', expected=True)
        main_source = video['sources'][0]

        hash_code = security_hash[:2]
        padding = '%010d' % random.randint(1, 10000000000)
        if hash_code in ('04', '14'):
            received_time = security_hash[3:13]
            received_md5 = security_hash[24:]
            hash_prefix = security_hash[:23]
        elif hash_code in ('02', '12', '03', '13'):
            received_time = security_hash[2:12]
            received_md5 = security_hash[22:]
            padding += '1'
            hash_prefix = '05' + security_hash[:22]

        padded_sign_time = str(int(received_time) + 86400) + padding
        md5_data = (received_md5 + padded_sign_time + '0xAC10FD').encode()
        signed_md5 = base64.urlsafe_b64encode(hashlib.md5(md5_data).digest()).decode().strip('=')
        signed_hash = hash_prefix + padded_sign_time + signed_md5
        source = security['sources'][0]['url_parts']
        resource_url = source['scheme'] + '://' + source['domain'] + source['path']
        signed_url = '{}?h={}&k=html5&a={}'.format(resource_url, signed_hash, 'F' if video.get('subscriber_only') else 'A')

        fmts, subtitles = self._extract_m3u8_formats_and_subtitles(
            signed_url, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False)
        formats.extend(fmts)

        for resource in video['resources']:
            if resource.get('type') == 'subtitle':
                subtitles.setdefault(resource.get('language') or 'por', []).append({
                    'url': resource.get('url'),
                })
        subs = try_get(security, lambda x: x['source']['subtitles'], expected_type=dict) or {}
        for sub_lang, sub_url in subs.items():
            if sub_url:
                subtitles.setdefault(sub_lang or 'por', []).append({
                    'url': sub_url,
                })
        subs = try_get(security, lambda x: x['source']['subtitles_webvtt'], expected_type=dict) or {}
        for sub_lang, sub_url in subs.items():
            if sub_url:
                subtitles.setdefault(sub_lang or 'por', []).append({
                    'url': sub_url,
                })

        duration = float_or_none(video.get('duration'), 1000)
        uploader = video.get('channel')
        uploader_id = str_or_none(video.get('channel_id'))
        # 4k streams are exclusively outputted in dash, so we need to filter these out
        if determine_ext(main_source['url']) == 'mpd':
            formats, subtitles = self._extract_mpd_formats_and_subtitles(main_source['url'], video_id, mpd_id='dash')
        else:
            formats, subtitles = self._extract_m3u8_formats_and_subtitles(
                main_source['url'], video_id, 'mp4', m3u8_id='hls')
        self._merge_subtitles(traverse_obj(main_source, ('text', ..., {
            'url': ('subtitle', 'srt', 'url', {url_or_none}),
        }, all, {subs_list_to_dict(lang='en')})), target=subtitles)

        return {
            'id': video_id,
            'title': title,
            'duration': duration,
            'uploader': uploader,
            'uploader_id': uploader_id,
            **traverse_obj(info, {
                'title': ('headline', {str}),
                'duration': ('duration', {float_or_none(scale=1000)}),
                'uploader': ('title', 'headline', {str}),
                'uploader_id': ('title', 'originProgramId', {str_or_none}),
                'episode_number': ('relatedEpisodeNumber', {int_or_none}),
                'season_number': ('relatedSeasonNumber', {int_or_none}),
            }),
            'formats': formats,
            'subtitles': subtitles,
        }


class GloboArticleIE(InfoExtractor):
    _VALID_URL = r'https?://.+?\.globo\.com/(?:[^/]+/)*(?P<id>[^/.]+)(?:\.html)?'
    _VALID_URL = r'https?://(?!globoplay).+?\.globo\.com/(?:[^/?#]+/)*(?P<id>[^/?#.]+)(?:\.html)?'

    _VIDEOID_REGEXES = [
        r'\bdata-video-id=["\'](\d{7,})["\']',
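The v4 `video-session` call replaces the old hard-coded `vsid` with a fresh UUID per request and relies on `filter_dict` to drop empty fields (such as the `Authorization` value for anonymous users) before serializing. The payload construction on its own, with a plain dict comprehension standing in for yt-dlp's `filter_dict`:

import json
import uuid

payload = {
    'player_type': 'mirakulo_8k_hdr',
    'video_id': '3607726',
    'quality': 'max',
    'content_protection': 'widevine',
    'vsid': f'{uuid.uuid4()}',  # fresh session id on every request
    'consumption': 'streaming',
    'capabilities': {'low_latency': True},
    'tz': '-03:00',
    'Authorization': None,  # not logged in: dropped below
    'version': 1,
}
body = json.dumps({k: v for k, v in payload.items() if v is not None}).encode()
print(b'Authorization' in body)  # False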
yt_dlp/extractor/goodgame.py
@@ -1,40 +1,48 @@
 from .common import InfoExtractor
 from ..utils import (
     clean_html,
     int_or_none,
     str_or_none,
     traverse_obj,
     url_or_none,
 )


 class GoodGameIE(InfoExtractor):
     IE_NAME = 'goodgame:stream'
-    _VALID_URL = r'https?://goodgame\.ru/channel/(?P<id>\w+)'
+    _VALID_URL = r'https?://goodgame\.ru/(?!channel/)(?P<id>[\w.*-]+)'
     _TESTS = [{
-        'url': 'https://goodgame.ru/channel/Pomi/#autoplay',
+        'url': 'https://goodgame.ru/TGW#autoplay',
         'info_dict': {
-            'id': 'pomi',
+            'id': '7998',
             'ext': 'mp4',
-            'title': r're:Reynor vs Special \(1/2,bo3\) Wardi Spring EU \- playoff \(финальный день\) \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
-            'channel_id': '1644',
-            'channel': 'Pomi',
-            'channel_url': 'https://goodgame.ru/channel/Pomi/',
-            'description': 'md5:4a87b775ee7b2b57bdccebe285bbe171',
-            'thumbnail': r're:^https?://.*\.jpg$',
+            'channel_id': '7998',
+            'title': r're:шоуматч Happy \(NE\) vs Fortitude \(UD\), потом ладдер и дс \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
+            'channel_url': 'https://goodgame.ru/TGW',
+            'thumbnail': 'https://hls.goodgame.ru/previews/7998_240.jpg',
+            'uploader': 'TGW',
+            'channel': 'JosephStalin',
             'live_status': 'is_live',
             'view_count': int,
             'age_limit': 18,
+            'channel_follower_count': int,
+            'uploader_id': '2899',
+            'concurrent_view_count': int,
         },
         'params': {'skip_download': 'm3u8'},
         'skip': 'May not be online',
+    }, {
+        'url': 'https://goodgame.ru/Mr.Gray',
+        'only_matching': True,
+    }, {
+        'url': 'https://goodgame.ru/HeDoPa3yMeHue*',
+        'only_matching': True,
     }]

     def _real_extract(self, url):
         channel_name = self._match_id(url)
-        response = self._download_json(f'https://api2.goodgame.ru/v2/streams/{channel_name}', channel_name)
-        player_id = response['channel']['gg_player_src']
+        response = self._download_json(f'https://goodgame.ru/api/4/users/{channel_name}/stream', channel_name)
+        player_id = response['streamkey']

         formats, subtitles = [], {}
-        if response.get('status') == 'Live':
+        if response.get('status'):
             formats, subtitles = self._extract_m3u8_formats_and_subtitles(
                 f'https://hls.goodgame.ru/manifest/{player_id}_master.m3u8',
                 channel_name, 'mp4', live=True)

@@ -45,13 +53,17 @@ class GoodGameIE(InfoExtractor):
             'id': player_id,
             'formats': formats,
             'subtitles': subtitles,
-            'title': traverse_obj(response, ('channel', 'title')),
-            'channel': channel_name,
-            'channel_id': str_or_none(traverse_obj(response, ('channel', 'id'))),
-            'channel_url': response.get('url'),
-            'description': clean_html(traverse_obj(response, ('channel', 'description'))),
-            'thumbnail': traverse_obj(response, ('channel', 'thumb')),
             'is_live': bool(formats),
-            'view_count': int_or_none(response.get('viewers')),
-            'age_limit': 18 if traverse_obj(response, ('channel', 'adult')) else None,
+            **traverse_obj(response, {
+                'title': ('title', {str}),
+                'channel': ('channelkey', {str}),
+                'channel_id': ('id', {str_or_none}),
+                'channel_url': ('link', {url_or_none}),
+                'uploader': ('streamer', 'username', {str}),
+                'uploader_id': ('streamer', 'id', {str_or_none}),
+                'thumbnail': ('preview', {url_or_none}, {self._proto_relative_url}),
+                'concurrent_view_count': ('viewers', {int_or_none}),
+                'channel_follower_count': ('followers', {int_or_none}),
+                'age_limit': ('adult', {bool}, {lambda x: 18 if x else None}),
+            }),
         }
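The rewritten GoodGame `_real_extract` replaces a list of hand-rolled `.get()` lookups with a single `traverse_obj` dictionary mapping; paths that yield nothing simply drop out of the result. A runnable sketch against an invented API payload, with yt-dlp installed (the field names mirror the diff, the values are made up):

    from yt_dlp.utils import int_or_none, str_or_none, traverse_obj

    # Invented sample of the stream API response -- only its shape matters here
    response = {
        'id': 7998,
        'title': 'Evening ladder games',
        'channelkey': 'TGW',
        'streamer': {'username': 'TGW', 'id': 2899},
        'viewers': '123',
        'adult': True,
    }
    print(traverse_obj(response, {
        'title': ('title', {str}),
        'channel': ('channelkey', {str}),
        'channel_id': ('id', {str_or_none}),
        'uploader_id': ('streamer', 'id', {str_or_none}),
        'concurrent_view_count': ('viewers', {int_or_none}),
        'age_limit': ('adult', {bool}, {lambda x: 18 if x else None}),
    }))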
yt_dlp/extractor/goplay.py
@@ -5,56 +5,62 @@ import hashlib
 import hmac
 import json
 import os
+import re
+import urllib.parse

 from .common import InfoExtractor
 from ..utils import (
     ExtractorError,
     int_or_none,
+    remove_end,
     traverse_obj,
     unescapeHTML,
 )


 class GoPlayIE(InfoExtractor):
-    _VALID_URL = r'https?://(www\.)?goplay\.be/video/([^/]+/[^/]+/|)(?P<display_id>[^/#]+)'
+    _VALID_URL = r'https?://(www\.)?goplay\.be/video/([^/?#]+/[^/?#]+/|)(?P<id>[^/#]+)'

     _NETRC_MACHINE = 'goplay'

     _TESTS = [{
-        'url': 'https://www.goplay.be/video/de-container-cup/de-container-cup-s3/de-container-cup-s3-aflevering-2#autoplay',
+        'url': 'https://www.goplay.be/video/de-slimste-mens-ter-wereld/de-slimste-mens-ter-wereld-s22/de-slimste-mens-ter-wereld-s22-aflevering-1',
         'info_dict': {
-            'id': '9c4214b8-e55d-4e4b-a446-f015f6c6f811',
+            'id': '2baa4560-87a0-421b-bffc-359914e3c387',
             'ext': 'mp4',
-            'title': 'S3 - Aflevering 2',
-            'series': 'De Container Cup',
-            'season': 'Season 3',
-            'season_number': 3,
-            'episode': 'Episode 2',
-            'episode_number': 2,
+            'title': 'S22 - Aflevering 1',
+            'description': r're:In aflevering 1 nemen Daan Alferink, Tess Elst en Xander De Rycke .{66}',
+            'series': 'De Slimste Mens ter Wereld',
+            'episode': 'Episode 1',
+            'season_number': 22,
+            'episode_number': 1,
+            'season': 'Season 22',
         },
         'params': {'skip_download': True},
         'skip': 'This video is only available for registered users',
     }, {
-        'url': 'https://www.goplay.be/video/a-family-for-thr-holidays-s1-aflevering-1#autoplay',
+        'url': 'https://www.goplay.be/video/1917',
         'info_dict': {
-            'id': '74e3ed07-748c-49e4-85a0-393a93337dbf',
+            'id': '40cac41d-8d29-4ef5-aa11-75047b9f0907',
             'ext': 'mp4',
-            'title': 'A Family for the Holidays',
+            'title': '1917',
+            'description': r're:Op het hoogtepunt van de Eerste Wereldoorlog krijgen twee jonge .{94}',
         },
         'params': {'skip_download': True},
         'skip': 'This video is only available for registered users',
     }, {
         'url': 'https://www.goplay.be/video/de-mol/de-mol-s11/de-mol-s11-aflevering-1#autoplay',
         'info_dict': {
-            'id': '03eb8f2f-153e-41cb-9805-0d3a29dab656',
+            'id': 'ecb79672-92b9-4cd9-a0d7-e2f0250681ee',
             'ext': 'mp4',
             'title': 'S11 - Aflevering 1',
+            'description': r're:Tien kandidaten beginnen aan hun verovering van Amerika en ontmoeten .{102}',
             'episode': 'Episode 1',
             'series': 'De Mol',
             'season_number': 11,
             'episode_number': 1,
             'season': 'Season 11',
         },
-        'params': {
-            'skip_download': True,
-        },
+        'params': {'skip_download': True},
         'skip': 'This video is only available for registered users',
     }]

@@ -69,27 +75,44 @@ class GoPlayIE(InfoExtractor):
         if not self._id_token:
             raise self.raise_login_required(method='password')

-    def _real_extract(self, url):
-        url, display_id = self._match_valid_url(url).group(0, 'display_id')
-        webpage = self._download_webpage(url, display_id)
-        video_data_json = self._html_search_regex(r'<div\s+data-hero="([^"]+)"', webpage, 'video_data')
-        video_data = self._parse_json(unescapeHTML(video_data_json), display_id).get('data')
-
-        movie = video_data.get('movie')
-        if movie:
-            video_id = movie['videoUuid']
-            info_dict = {
-                'title': movie.get('title'),
-            }
-        else:
-            episode = traverse_obj(video_data, ('playlists', ..., 'episodes', lambda _, v: v['pageInfo']['url'] == url), get_all=False)
-            video_id = episode['videoUuid']
-            info_dict = {
-                'title': episode.get('episodeTitle'),
-                'series': traverse_obj(episode, ('program', 'title')),
-                'season_number': episode.get('seasonNumber'),
-                'episode_number': episode.get('episodeNumber'),
-            }
+    # XXX: For parsing next.js v15+ data; see also yt_dlp.extractor.francetv
+    def _find_json(self, s):
+        return self._search_json(
+            r'\w+\s*:\s*', s, 'next js data', None, contains_pattern=r'\[(?s:.+)\]', default=None)

+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+
+        nextjs_data = traverse_obj(
+            re.findall(r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>', webpage),
+            (..., {json.loads}, ..., {self._find_json}, ...))
+        meta = traverse_obj(nextjs_data, (
+            ..., ..., 'children', ..., ..., 'children',
+            lambda _, v: v['video']['path'] == urllib.parse.urlparse(url).path, 'video', any))
+
+        video_id = meta['uuid']
+        info_dict = traverse_obj(meta, {
+            'title': ('title', {str}),
+            'description': ('description', {str.strip}),
+        })
+
+        if traverse_obj(meta, ('program', 'subtype')) != 'movie':
+            for season_data in traverse_obj(nextjs_data, (..., 'children', ..., 'playlists', ...)):
+                episode_data = traverse_obj(
+                    season_data, ('videos', lambda _, v: v['videoId'] == video_id, any))
+                if not episode_data:
+                    continue
+
+                episode_title = traverse_obj(
+                    episode_data, 'contextualTitle', 'episodeTitle', expected_type=str)
+                info_dict.update({
+                    'title': episode_title or info_dict.get('title'),
+                    'series': remove_end(info_dict.get('title'), f' - {episode_title}'),
+                    'season_number': traverse_obj(season_data, ('season', {int_or_none})),
+                    'episode_number': traverse_obj(episode_data, ('episodeNumber', {int_or_none})),
+                })
+                break

         api = self._download_json(
             f'https://api.goplay.be/web/v1/videos/long-form/{video_id}',
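GoPlay no longer exposes a `data-hero` JSON blob; the new code reassembles the Next.js v15 "flight" stream that the page feeds through `self.__next_f.push(...)`, then walks it with `traverse_obj`. Each pushed chunk is a `[chunk_type, payload]` pair whose string payload carries `key:<json>` fragments. A self-contained sketch of that first parsing stage, against an invented miniature of the payload (real pages embed far larger, deeply nested structures):

    import json
    import re

    # Invented miniature of a Next.js flight payload embedded in the page source
    webpage = ('<script>self.__next_f.push([1,"2:{\\"video\\":'
               '{\\"uuid\\":\\"demo-uuid\\",\\"path\\":\\"/video/1917\\"}}"]);</script>')

    for raw in re.findall(
            r'<script[^>]*>\s*self\.__next_f\.push\(\s*(\[.+?\])\s*\);?\s*</script>', webpage):
        _, payload = json.loads(raw)           # each chunk is [chunk_type, string_payload]
        key, _, data = payload.partition(':')  # fragments look like '<key>:<json>'
        print(key, json.loads(data))           # -> 2 {'video': {'uuid': ..., 'path': ...}}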
yt_dlp/extractor/instagram.py
@@ -254,7 +254,7 @@ class InstagramIOSIE(InfoExtractor):


 class InstagramIE(InstagramBaseIE):
-    _VALID_URL = r'(?P<url>https?://(?:www\.)?instagram\.com(?:/[^/]+)?/(?:p|tv|reels?(?!/audio/))/(?P<id>[^/?#&]+))'
+    _VALID_URL = r'(?P<url>https?://(?:www\.)?instagram\.com(?:/(?!share/)[^/?#]+)?/(?:p|tv|reels?(?!/audio/))/(?P<id>[^/?#&]+))'
     _EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//(?:www\.)?instagram\.com/p/[^/]+/embed.*?)\1']
     _TESTS = [{
         'url': 'https://instagram.com/p/aye83DjauH/?foo=bar#abc',
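The only change to `InstagramIE` is the `(?!share/)` negative lookahead in the optional path segment, so `instagram.com/share/...` redirect links are left for other handlers. Verifying the new pattern with the standard `re` module:

    import re

    _VALID_URL = r'(?P<url>https?://(?:www\.)?instagram\.com(?:/(?!share/)[^/?#]+)?/(?:p|tv|reels?(?!/audio/))/(?P<id>[^/?#&]+))'

    for test_url in ('https://www.instagram.com/p/aye83DjauH/',
                     'https://www.instagram.com/share/p/aye83DjauH/'):
        # only the first URL should match after this change
        print(test_url, '->', bool(re.match(_VALID_URL, test_url)))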
yt_dlp/extractor/kenh14.py (new file, 160 lines)
@@ -0,0 +1,160 @@
+from .common import InfoExtractor
+from ..utils import (
+    clean_html,
+    extract_attributes,
+    get_element_by_class,
+    get_element_html_by_attribute,
+    get_elements_html_by_class,
+    int_or_none,
+    parse_duration,
+    parse_iso8601,
+    remove_start,
+    strip_or_none,
+    unescapeHTML,
+    update_url,
+    url_or_none,
+)
+from ..utils.traversal import traverse_obj
+
+
+class Kenh14VideoIE(InfoExtractor):
+    _VALID_URL = r'https?://video\.kenh14\.vn/(?:video/)?[\w-]+-(?P<id>[0-9]+)\.chn'
+    _TESTS = [{
+        'url': 'https://video.kenh14.vn/video/mo-hop-iphone-14-pro-max-nguon-unbox-therapy-316173.chn',
+        'md5': '1ed67f9c3a1e74acf15db69590cf6210',
+        'info_dict': {
+            'id': '316173',
+            'ext': 'mp4',
+            'title': 'Video mở hộp iPhone 14 Pro Max (Nguồn: Unbox Therapy)',
+            'description': 'Video mở hộp iPhone 14 Pro MaxVideo mở hộp iPhone 14 Pro Max (Nguồn: Unbox Therapy)',
+            'thumbnail': r're:^https?://videothumbs\.mediacdn\.vn/.*\.jpg$',
+            'tags': [],
+            'uploader': 'Unbox Therapy',
+            'upload_date': '20220517',
+            'view_count': int,
+            'duration': 722.86,
+            'timestamp': 1652764468,
+        },
+    }, {
+        'url': 'https://video.kenh14.vn/video-316174.chn',
+        'md5': '2b41877d2afaf4a3f487ceda8e5c7cbd',
+        'info_dict': {
+            'id': '316174',
+            'ext': 'mp4',
+            'title': 'Khoảnh khắc VĐV nằm gục khóc sau chiến thắng: 7 năm trời Việt Nam mới có HCV kiếm chém nữ, chỉ có 8 tháng để khổ luyện trước khi lên sàn đấu',
+            'description': 'md5:de86aa22e143e2b277bce8ec9c6f17dc',
+            'thumbnail': r're:^https?://videothumbs\.mediacdn\.vn/.*\.jpg$',
+            'tags': [],
+            'upload_date': '20220517',
+            'view_count': int,
+            'duration': 70.04,
+            'timestamp': 1652766021,
+        },
+    }, {
+        'url': 'https://video.kenh14.vn/0-344740.chn',
+        'md5': 'b843495d5e728142c8870c09b46df2a9',
+        'info_dict': {
+            'id': '344740',
+            'ext': 'mov',
+            'title': 'Kỳ Duyên đầy căng thẳng trong buổi ra quân đi Miss Universe, nghi thức tuyên thuệ lần đầu xuất hiện gây nhiều tranh cãi',
+            'description': 'md5:2a2dbb4a7397169fb21ee68f09160497',
+            'thumbnail': r're:^https?://kenh14cdn\.com/.*\.jpg$',
+            'tags': ['kỳ duyên', 'Kỳ Duyên tuyên thuệ', 'miss universe'],
+            'uploader': 'Quang Vũ',
+            'upload_date': '20241024',
+            'view_count': int,
+            'duration': 198.88,
+            'timestamp': 1729741590,
+        },
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+
+        attrs = extract_attributes(get_element_html_by_attribute('type', 'VideoStream', webpage) or '')
+        direct_url = attrs['data-vid']
+
+        metadata = self._download_json(
+            'https://api.kinghub.vn/video/api/v1/detailVideoByGet?FileName={}'.format(
+                remove_start(direct_url, 'kenh14cdn.com/')), video_id, fatal=False)
+
+        formats = [{'url': f'https://{direct_url}', 'format_id': 'http', 'quality': 1}]
+        subtitles = {}
+        video_data = self._download_json(
+            f'https://{direct_url}.json', video_id, note='Downloading video data', fatal=False)
+        if hls_url := traverse_obj(video_data, ('hls', {url_or_none})):
+            fmts, subs = self._extract_m3u8_formats_and_subtitles(
+                hls_url, video_id, m3u8_id='hls', fatal=False)
+            formats.extend(fmts)
+            self._merge_subtitles(subs, target=subtitles)
+        if dash_url := traverse_obj(video_data, ('mpd', {url_or_none})):
+            fmts, subs = self._extract_mpd_formats_and_subtitles(
+                dash_url, video_id, mpd_id='dash', fatal=False)
+            formats.extend(fmts)
+            self._merge_subtitles(subs, target=subtitles)
+
+        return {
+            **traverse_obj(metadata, {
+                'duration': ('duration', {parse_duration}),
+                'uploader': ('author', {strip_or_none}),
+                'timestamp': ('uploadtime', {parse_iso8601(delimiter=' ')}),
+                'view_count': ('views', {int_or_none}),
+            }),
+            'id': video_id,
+            'title': (
+                traverse_obj(metadata, ('title', {strip_or_none}))
+                or clean_html(self._og_search_title(webpage))
+                or clean_html(get_element_by_class('vdbw-title', webpage))),
+            'formats': formats,
+            'subtitles': subtitles,
+            'description': (
+                clean_html(self._og_search_description(webpage))
+                or clean_html(get_element_by_class('vdbw-sapo', webpage))),
+            'thumbnail': (self._og_search_thumbnail(webpage) or attrs.get('data-thumb')),
+            'tags': traverse_obj(self._html_search_meta('keywords', webpage), (
+                {lambda x: x.split(';')}, ..., filter)),
+        }
+
+
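`Kenh14VideoIE` above builds its tag list in a single traversal over the page's `keywords` meta string: a set-wrapped lambda splits on `;`, `...` branches over the pieces, and the builtin `filter` (special-cased by the traversal code in yt-dlp versions recent enough to ship this diff) drops empty entries. A sketch with an invented keywords value:

    from yt_dlp.utils.traversal import traverse_obj

    # Invented content of the page's <meta name="keywords">
    keywords = 'kỳ duyên;Kỳ Duyên tuyên thuệ;;miss universe'
    print(traverse_obj(keywords, ({lambda x: x.split(';')}, ..., filter)))
    # -> ['kỳ duyên', 'Kỳ Duyên tuyên thuệ', 'miss universe']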
+class Kenh14PlaylistIE(InfoExtractor):
+    _VALID_URL = r'https?://video\.kenh14\.vn/playlist/[\w-]+-(?P<id>[0-9]+)\.chn'
+    _TESTS = [{
+        'url': 'https://video.kenh14.vn/playlist/tran-tinh-naked-love-mua-2-71.chn',
+        'info_dict': {
+            'id': '71',
+            'title': 'Trần Tình (Naked love) mùa 2',
+            'description': 'md5:e9522339304956dea931722dd72eddb2',
+            'thumbnail': r're:^https?://kenh14cdn\.com/.*\.png$',
+        },
+        'playlist_count': 9,
+    }, {
+        'url': 'https://video.kenh14.vn/playlist/0-72.chn',
+        'info_dict': {
+            'id': '72',
+            'title': 'Lau Lại Đầu Từ',
+            'description': 'Cùng xem xưa và nay có gì khác biệt nhé!',
+            'thumbnail': r're:^https?://kenh14cdn\.com/.*\.png$',
+        },
+        'playlist_count': 6,
+    }]
+
+    def _real_extract(self, url):
+        playlist_id = self._match_id(url)
+        webpage = self._download_webpage(url, playlist_id)
+
+        category_detail = get_element_by_class('category-detail', webpage) or ''
+        embed_info = traverse_obj(
+            self._yield_json_ld(webpage, playlist_id),
+            (lambda _, v: v['name'] and v['alternateName'], any)) or {}
+
+        return self.playlist_from_matches(
+            get_elements_html_by_class('video-item', webpage), playlist_id,
+            (clean_html(get_element_by_class('name', category_detail)) or unescapeHTML(embed_info.get('name'))),
+            getter=lambda x: 'https://video.kenh14.vn/video/video-{}.chn'.format(extract_attributes(x)['data-id']),
+            ie=Kenh14VideoIE, playlist_description=(
+                clean_html(get_element_by_class('description', category_detail))
+                or unescapeHTML(embed_info.get('alternateName'))),
+            thumbnail=traverse_obj(
+                self._og_search_thumbnail(webpage),
+                ({url_or_none}, {update_url(query=None)})))
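The playlist thumbnail above is normalised by the traversal `({url_or_none}, {update_url(query=None)})`: validate the URL, then strip its query string with a partially-applied `update_url`. Called directly it does the same thing (the CDN URL is made up):

    from yt_dlp.utils import update_url

    # Made-up thumbnail URL carrying cache-busting query parameters
    thumb = 'https://kenh14cdn.com/thumbs/playlist-72.png?width=640&v=3'
    print(update_url(thumb, query=None))
    # -> https://kenh14cdn.com/thumbs/playlist-72.png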
Some files were not shown because too many files have changed in this diff.