mirror of https://github.com/yt-dlp/yt-dlp.git synced 2026-01-17 20:31:29 +00:00

Compare commits


18 Commits

Author SHA1 Message Date
github-actions
8729e7b57c Release 2023.03.04
Created by: pukkandan

:ci skip all :ci run dl
2023-03-04 22:24:51 +00:00
pukkandan
392389b7df [cleanup] Misc 2023-03-05 03:34:55 +05:30
Elyse
eb8fd6d044 [extractor/lefigaro] Add extractors (#6309)
Authored by: elyse0
Closes #6197
2023-03-05 03:30:45 +05:30
Ferdinand Bachmann
f44cb4e77b [extractor/tubetugraz] Support --twofactor (#6424) (#6427)
Authored by: Ferdi265
Closes #6424
2023-03-05 03:28:16 +05:30
Elyse
46580ced56 [extractor/tunein] Fix extractors (#6310)
Authored by: elyse0
Closes #2973
2023-03-05 01:35:19 +05:30
Elyse
b404712822 [extractor/telecaribe] Add extractor (#6311)
Authored by: elyse0
Closes #6001
2023-03-05 01:11:41 +05:30
Chris Caruso
1f8489cccb [extractor/lumni] Add extractor (#6302)
Authored by: carusocr
Closes #6202
2023-03-05 00:52:11 +05:30
columndeeply
ed4cc4ea79 [extractor/Prankcast] Fix tags (#6316)
Authored by: columndeeply
2023-03-04 23:22:15 +05:30
lauren n. liberda
cf60522652 [extractor/twitter] Fix retweet extraction (#6422)
Authored by: selfisekai
2023-03-04 23:21:33 +05:30
pukkandan
45db357289 [extractor/SportDeutschland] Rewrite extractor
Note: `multi_video` live streams are untested

Closes #6417, closes #6418, closes #6420
2023-03-04 22:32:58 +05:30
LXYan2333
8a83baaf21 [extractor/bilibili] Fix for downloading wrong subtitles (#6358)
Closes #6357
Authored by: LXYan2333
2023-03-04 20:14:48 +05:30
pukkandan
7accdd9845 [devscripts] make_changelog: Stop at Release ... commit
Closes #6415
2023-03-04 19:26:43 +05:30
Yakabuff
283a0b5bc5 [xvideos:quickies] Add extractor (#6414)
Authored by: Yakabuff
Closes #6356
2023-03-04 19:04:27 +05:30
mushbite
22ccd5420b [extractor/rutube] Extract chapters from description (#6345)
Authored by: mushbite
2023-03-04 19:03:17 +05:30
Simon Sawicki
08ff6d59f9 [build] Only archive if vars.ARCHIVE_REPO is set
Authored by: Grub4K
2023-03-04 14:18:24 +01:00
Elyse
4a6272c6d1 [extractor/twitch] Update for GraphQL API changes (#6318)
Authored by: elyse0
Closes #6308
2023-03-04 12:31:30 +05:30
Venkata Krishna S
640c934823 [extractor/ESPNcricinfo] Handle new URL pattern (#6321)
Authored by: venkata-krishnas
Closes #6164
2023-03-04 12:27:30 +05:30
bashonly
55676fe498 [build] Fix publishing to PyPI and homebrew
Closes #6411
Authored by: bashonly
2023-03-03 21:54:20 -06:00
36 changed files with 832 additions and 333 deletions

View File

@@ -18,7 +18,7 @@ body:
options:
- label: I'm reporting that a **supported** site is broken
required: true
- - label: I've verified that I'm running yt-dlp version **2023.03.03** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+ - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -64,7 +64,7 @@ body:
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
- [debug] yt-dlp version 2023.03.03 [9d339c4] (win32_exe)
+ [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
@@ -72,8 +72,8 @@ body:
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
- Latest version: 2023.03.03, Current version: 2023.03.03
- yt-dlp is up to date (2023.03.03)
+ Latest version: 2023.03.04, Current version: 2023.03.04
+ yt-dlp is up to date (2023.03.04)
<more lines>
render: shell
validations:

View File

@@ -18,7 +18,7 @@ body:
options:
- label: I'm reporting a new site support request
required: true
- - label: I've verified that I'm running yt-dlp version **2023.03.03** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+ - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -76,7 +76,7 @@ body:
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
- [debug] yt-dlp version 2023.03.03 [9d339c4] (win32_exe)
+ [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
@@ -84,8 +84,8 @@ body:
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
- Latest version: 2023.03.03, Current version: 2023.03.03
- yt-dlp is up to date (2023.03.03)
+ Latest version: 2023.03.04, Current version: 2023.03.04
+ yt-dlp is up to date (2023.03.04)
<more lines>
render: shell
validations:

View File

@@ -18,7 +18,7 @@ body:
options:
- label: I'm requesting a site-specific feature
required: true
- - label: I've verified that I'm running yt-dlp version **2023.03.03** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+ - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -72,7 +72,7 @@ body:
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
- [debug] yt-dlp version 2023.03.03 [9d339c4] (win32_exe)
+ [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
@@ -80,8 +80,8 @@ body:
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
- Latest version: 2023.03.03, Current version: 2023.03.03
- yt-dlp is up to date (2023.03.03)
+ Latest version: 2023.03.04, Current version: 2023.03.04
+ yt-dlp is up to date (2023.03.04)
<more lines>
render: shell
validations:

View File

@@ -18,7 +18,7 @@ body:
options:
- label: I'm reporting a bug unrelated to a specific site
required: true
- - label: I've verified that I'm running yt-dlp version **2023.03.03** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+ - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true
@@ -57,7 +57,7 @@ body:
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
- [debug] yt-dlp version 2023.03.03 [9d339c4] (win32_exe)
+ [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
@@ -65,8 +65,8 @@ body:
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
- Latest version: 2023.03.03, Current version: 2023.03.03
- yt-dlp is up to date (2023.03.03)
+ Latest version: 2023.03.04, Current version: 2023.03.04
+ yt-dlp is up to date (2023.03.04)
<more lines>
render: shell
validations:

View File

@@ -20,7 +20,7 @@ body:
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- - label: I've verified that I'm running yt-dlp version **2023.03.03** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+ - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
required: true
@@ -53,7 +53,7 @@ body:
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
- [debug] yt-dlp version 2023.03.03 [9d339c4] (win32_exe)
+ [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
@@ -61,7 +61,7 @@ body:
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
- Latest version: 2023.03.03, Current version: 2023.03.03
- yt-dlp is up to date (2023.03.03)
+ Latest version: 2023.03.04, Current version: 2023.03.04
+ yt-dlp is up to date (2023.03.04)
<more lines>
render: shell

View File

@@ -26,7 +26,7 @@ body:
required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true
- - label: I've verified that I'm running yt-dlp version **2023.03.03** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+ - label: I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
required: true
@@ -59,7 +59,7 @@ body:
[debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
- [debug] yt-dlp version 2023.03.03 [9d339c4] (win32_exe)
+ [debug] yt-dlp version 2023.03.04 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
@@ -67,7 +67,7 @@ body:
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
- Latest version: 2023.03.03, Current version: 2023.03.03
- yt-dlp is up to date (2023.03.03)
+ Latest version: 2023.03.04, Current version: 2023.03.04
+ yt-dlp is up to date (2023.03.04)
<more lines>
render: shell

View File

@@ -55,12 +55,12 @@ jobs:
run: |
gh release create \
--notes-file ARCHIVE_NOTES \
- --title "Build ${{ inputs.version }}" \
+ --title "yt-dlp nightly ${{ inputs.version }}" \
${{ inputs.version }} \
artifact/*
- name: Prune old nightly release
- if: inputs.nightly
+ if: inputs.nightly && !vars.ARCHIVE_REPO
env:
GH_TOKEN: ${{ github.token }}
run: |
@@ -71,6 +71,7 @@ jobs:
- name: Publish release${{ inputs.nightly && ' (nightly)' || '' }}
env:
GH_TOKEN: ${{ github.token }}
+ if: (inputs.nightly && !vars.ARCHIVE_REPO) || !inputs.nightly
run: |
gh release create \
--notes-file ${{ inputs.nightly && 'PRE' || '' }}RELEASE_NOTES \

View File

@@ -4,7 +4,7 @@ on:
branches:
- master
paths:
- - "**.py"
+ - "yt_dlp/**.py"
- "!yt_dlp/version.py"
concurrency:
group: release-nightly

View File

@@ -64,6 +64,7 @@ jobs:
- name: Install Requirements
run: |
+ sudo apt-get -y install pandoc man
python -m pip install -U pip setuptools wheel twine
python -m pip install -U -r requirements.txt
@@ -79,6 +80,7 @@ jobs:
if: env.TWINE_PASSWORD != ''
run: |
rm -rf dist/*
+ make pypi-files
python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
python setup.py sdist bdist_wheel
twine upload dist/*

View File

@@ -406,3 +406,6 @@ rohieb
sdht0
seproDev
Hill-98
+ LXYan2333
+ mushbite
+ venkata-krishnas

View File

@@ -4,6 +4,45 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->
+ ### 2023.03.04
+ #### Extractor changes
+ - bilibili
+ - [Fix for downloading wrong subtitles](https://github.com/yt-dlp/yt-dlp/commit/8a83baaf218ab89e6e7faa76b7c7be3a2ec19e3a) ([#6358](https://github.com/yt-dlp/yt-dlp/issues/6358)) by [LXYan2333](https://github.com/LXYan2333)
+ - ESPNcricinfo
+ - [Handle new URL pattern](https://github.com/yt-dlp/yt-dlp/commit/640c934823fc2d1ec77ec932566078014058635f) ([#6321](https://github.com/yt-dlp/yt-dlp/issues/6321)) by [venkata-krishnas](https://github.com/venkata-krishnas)
+ - lefigaro
+ - [Add extractors](https://github.com/yt-dlp/yt-dlp/commit/eb8fd6d044e8926532772b72be0645c6b8ecb3aa) ([#6309](https://github.com/yt-dlp/yt-dlp/issues/6309)) by [elyse0](https://github.com/elyse0)
+ - lumni
+ - [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/1f8489cccbdc6e96027ef527b88717458f0900e8) ([#6302](https://github.com/yt-dlp/yt-dlp/issues/6302)) by [carusocr](https://github.com/carusocr)
+ - Prankcast
+ - [Fix tags](https://github.com/yt-dlp/yt-dlp/commit/ed4cc4ea793314c50ae3f82e98248c1de1c25694) ([#6316](https://github.com/yt-dlp/yt-dlp/issues/6316)) by [columndeeply](https://github.com/columndeeply)
+ - rutube
+ - [Extract chapters from description](https://github.com/yt-dlp/yt-dlp/commit/22ccd5420b3eb0782776071f12cccd1fedaa1fd0) ([#6345](https://github.com/yt-dlp/yt-dlp/issues/6345)) by [mushbite](https://github.com/mushbite)
+ - SportDeutschland
+ - [Rewrite extractor](https://github.com/yt-dlp/yt-dlp/commit/45db357289b4e1eec09093c8bc5446520378f426) by [pukkandan](https://github.com/pukkandan)
+ - telecaribe
+ - [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/b40471282286bd2b09c485bf79afd271d229272c) ([#6311](https://github.com/yt-dlp/yt-dlp/issues/6311)) by [elyse0](https://github.com/elyse0)
+ - tubetugraz
+ - [Support `--twofactor` (#6424)](https://github.com/yt-dlp/yt-dlp/commit/f44cb4e77bb9be8be291d02ab6f79dc0b4c0d4a1) ([#6427](https://github.com/yt-dlp/yt-dlp/issues/6427)) by [Ferdi265](https://github.com/Ferdi265)
+ - tunein
+ - [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/46580ced56c90b559885aded6aa8f46f20a9cdce) ([#6310](https://github.com/yt-dlp/yt-dlp/issues/6310)) by [elyse0](https://github.com/elyse0)
+ - twitch
+ - [Update for GraphQL API changes](https://github.com/yt-dlp/yt-dlp/commit/4a6272c6d1bff89969b67cd22b26ebe6d7e72279) ([#6318](https://github.com/yt-dlp/yt-dlp/issues/6318)) by [elyse0](https://github.com/elyse0)
+ - twitter
+ - [Fix retweet extraction](https://github.com/yt-dlp/yt-dlp/commit/cf605226521e99c89fc8dff26a319025810e63a0) ([#6422](https://github.com/yt-dlp/yt-dlp/issues/6422)) by [selfisekai](https://github.com/selfisekai)
+ - xvideos
+ - quickies: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/283a0b5bc511f3b350eead4488158f50c20ec526) ([#6414](https://github.com/yt-dlp/yt-dlp/issues/6414)) by [Yakabuff](https://github.com/Yakabuff)
+ #### Misc. changes
+ - build
+ - [Fix publishing to PyPI and homebrew](https://github.com/yt-dlp/yt-dlp/commit/55676fe498345a389a2539d8baaba958d6d61c3e) by [bashonly](https://github.com/bashonly)
+ - [Only archive if `vars.ARCHIVE_REPO` is set](https://github.com/yt-dlp/yt-dlp/commit/08ff6d59f97b5f5f0128f6bf6fbef56fd836cc52) by [Grub4K](https://github.com/Grub4K)
+ - cleanup
+ - Miscellaneous: [392389b](https://github.com/yt-dlp/yt-dlp/commit/392389b7df7b818f794b231f14dc396d4875fbad) by [pukkandan](https://github.com/pukkandan)
+ - devscripts
+ - `make_changelog`: [Stop at `Release ...` commit](https://github.com/yt-dlp/yt-dlp/commit/7accdd9845fe7ce9d0aa5a9d16faaa489c1294eb) by [pukkandan](https://github.com/pukkandan)
### 2023.03.03
#### Important changes

View File

@@ -192,9 +192,8 @@ For other third-party package managers, see [the wiki](https://github.com/yt-dlp
<a id="update-channels"/>
There are currently two release channels for binaries, `stable` and `nightly`.
- `stable` releases are what the program will update to by default, and have had many of their changes tested by users of the master branch.
- `nightly` releases are built after each push to the master branch, and will have the most recent fixes and additions, but also have the potential for bugs.
- The latest `nightly` is available as a [pre-release from this repository](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly), and all `nightly` releases are [archived in their own repo](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases).
+ `stable` is the default channel, and many of its changes have been tested by users of the nightly channel.
+ The `nightly` channel has releases built after each push to the master branch, and will have the most recent fixes and additions, but also have more risk of regressions. They are available in [their own repo](https://github.com/yt-dlp/yt-dlp-nightly-builds/releases).
When using `--update`/`-U`, a release binary will only update to its current channel.
This release channel can be changed by using the `--update-to` option. `--update-to` can also be used to upgrade or downgrade to specific tags from a channel.
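The channel and tag forms described above can be combined on the command line; a sketch of the documented `--update-to` usage (the tag below is only an example):

```shell
# Update within the binary's current channel (stable by default)
yt-dlp -U

# Switch the binary to the nightly channel and update to its latest release
yt-dlp --update-to nightly

# Upgrade or downgrade to a specific tag within a channel
yt-dlp --update-to stable@2023.03.04
```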

View File

@@ -1,12 +1,12 @@
[
{
"action": "add",
- "when": "2023.02.17",
+ "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
"short": "[priority] **A new release type has been added!**\n * [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).\n * When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).\n * The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).\n * `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.\n * **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG`"
},
{
"action": "add",
- "when": "2023.02.17",
+ "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
"short": "[priority] **YouTube throttling fixes!**"
}
]

View File

@@ -1,19 +1,26 @@
from __future__ import annotations
+ # Allow direct execution
+ import os
+ import sys
+ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import enum
import itertools
import json
import logging
import re
- import subprocess
- import sys
from collections import defaultdict
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path
+ from devscripts.utils import read_file, run_process, write_file
BASE_URL = 'https://github.com'
LOCATION_PATH = Path(__file__).parent
+ HASH_LENGTH = 7
logger = logging.getLogger(__name__)
@@ -82,7 +89,7 @@ class Commit:
result = f'{self.short!r}'
if self.hash:
- result += f' ({self.hash[:7]})'
+ result += f' ({self.hash[:HASH_LENGTH]})'
if self.authors:
authors = ', '.join(self.authors)
@@ -208,7 +215,7 @@ class Changelog:
def _format_message_link(self, message, hash):
assert message or hash, 'Improperly defined commit message or override'
- message = message if message else hash[:7]
+ message = message if message else hash[:HASH_LENGTH]
return f'[{message}]({self.repo_url}/commit/{hash})' if hash else message
def _format_issues(self, issues):
@@ -242,36 +249,11 @@ class CommitRange:
FIXES_RE = re.compile(r'(?i:Fix(?:es)?(?:\s+bugs?)?(?:\s+in|\s+for)?|Revert)\s+([\da-f]{40})')
UPSTREAM_MERGE_RE = re.compile(r'Update to ytdl-commit-([\da-f]+)')
- def __init__(self, start, end, default_author=None) -> None:
- self._start = start
- self._end = end
+ def __init__(self, start, end, default_author=None):
+ self._start, self._end = start, end
self._commits, self._fixes = self._get_commits_and_fixes(default_author)
self._commits_added = []
- @classmethod
- def from_single(cls, commitish='HEAD', default_author=None):
- start_commitish = cls.get_prev_tag(commitish)
- end_commitish = cls.get_next_tag(commitish)
- if start_commitish == end_commitish:
- start_commitish = cls.get_prev_tag(f'{commitish}~')
- logger.info(f'Determined range from {commitish!r}: {start_commitish}..{end_commitish}')
- return cls(start_commitish, end_commitish, default_author)
- @classmethod
- def get_prev_tag(cls, commitish):
- command = [cls.COMMAND, 'describe', '--tags', '--abbrev=0', '--exclude=*[^0-9.]*', commitish]
- return subprocess.check_output(command, text=True).strip()
- @classmethod
- def get_next_tag(cls, commitish):
- result = subprocess.run(
- [cls.COMMAND, 'describe', '--contains', '--abbrev=0', commitish],
- stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, text=True)
- if result.returncode:
- return 'HEAD'
- return result.stdout.partition('~')[0].strip()
def __iter__(self):
return iter(itertools.chain(self._commits.values(), self._commits_added))
@@ -286,20 +268,15 @@ class CommitRange:
return commit in self._commits
- def _is_ancestor(self, commitish):
- return bool(subprocess.call(
- [self.COMMAND, 'merge-base', '--is-ancestor', commitish, self._start]))
def _get_commits_and_fixes(self, default_author):
- result = subprocess.check_output([
+ result = run_process(
self.COMMAND, 'log', f'--format=%H%n%s%n%b%n{self.COMMIT_SEPARATOR}',
- f'{self._start}..{self._end}'], text=True)
+ f'{self._start}..{self._end}' if self._start else self._end).stdout
commits = {}
fixes = defaultdict(list)
lines = iter(result.splitlines(False))
- for line in lines:
- commit_hash = line
+ for i, commit_hash in enumerate(lines):
short = next(lines)
skip = short.startswith('Release ') or short == '[version] update'
@@ -310,9 +287,12 @@ class CommitRange:
authors = sorted(map(str.strip, line[match.end():].split(',')), key=str.casefold)
commit = Commit(commit_hash, short, authors)
- if skip:
+ if skip and (self._start or not i):
logger.debug(f'Skipped commit: {commit}')
continue
+ elif skip:
+ logger.debug(f'Reached Release commit, breaking: {commit}')
+ break
fix_match = self.FIXES_RE.search(commit.short)
if fix_match:
@@ -323,12 +303,12 @@ class CommitRange:
for commitish, fix_commits in fixes.items(): for commitish, fix_commits in fixes.items():
if commitish in commits: if commitish in commits:
hashes = ', '.join(commit.hash[:7] for commit in fix_commits) hashes = ', '.join(commit.hash[:HASH_LENGTH] for commit in fix_commits)
logger.info(f'Found fix(es) for {commitish[:7]}: {hashes}') logger.info(f'Found fix(es) for {commitish[:HASH_LENGTH]}: {hashes}')
for fix_commit in fix_commits: for fix_commit in fix_commits:
del commits[fix_commit.hash] del commits[fix_commit.hash]
else: else:
logger.debug(f'Commit with fixes not in changes: {commitish[:7]}') logger.debug(f'Commit with fixes not in changes: {commitish[:HASH_LENGTH]}')
return commits, fixes return commits, fixes
@@ -419,11 +399,10 @@ class CommitRange:
def get_new_contributors(contributors_path, commits): def get_new_contributors(contributors_path, commits):
contributors = set() contributors = set()
if contributors_path.exists(): if contributors_path.exists():
with contributors_path.open() as file: for line in read_file(contributors_path).splitlines():
for line in filter(None, map(str.strip, file)): author, _, _ = line.strip().partition(' (')
author, _, _ = line.partition(' (') authors = author.split('/')
authors = author.split('/') contributors.update(map(str.casefold, authors))
contributors.update(map(str.casefold, authors))
new_contributors = set() new_contributors = set()
for commit in commits: for commit in commits:
@@ -471,12 +450,11 @@ if __name__ == '__main__':
datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}', datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}',
level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr) level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr)
commits = CommitRange.from_single(args.commitish, args.default_author) commits = CommitRange(None, args.commitish, args.default_author)
if not args.no_override: if not args.no_override:
if args.override_path.exists(): if args.override_path.exists():
with args.override_path.open() as file: overrides = json.loads(read_file(args.override_path))
overrides = json.load(file)
commits.apply_overrides(overrides) commits.apply_overrides(overrides)
else: else:
logger.warning(f'File {args.override_path.as_posix()} does not exist') logger.warning(f'File {args.override_path.as_posix()} does not exist')
@@ -486,8 +464,7 @@ if __name__ == '__main__':
new_contributors = get_new_contributors(args.contributors_path, commits) new_contributors = get_new_contributors(args.contributors_path, commits)
if new_contributors: if new_contributors:
if args.contributors: if args.contributors:
with args.contributors_path.open('a') as file: write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
file.writelines(f'{contributor}\n' for contributor in new_contributors)
logger.info(f'New contributors: {", ".join(new_contributors)}') logger.info(f'New contributors: {", ".join(new_contributors)}')
print(Changelog(commits.groups(), args.repo)) print(Changelog(commits.groups(), args.repo))


@@ -9,11 +9,10 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 import argparse
 import contextlib
-import subprocess
 import sys

 from datetime import datetime

-from devscripts.utils import read_version, write_file
+from devscripts.utils import read_version, run_process, write_file


 def get_new_version(version, revision):

@@ -32,7 +31,7 @@ def get_new_version(version, revision):
 def get_git_head():
     with contextlib.suppress(Exception):
-        return subprocess.check_output(['git', 'rev-parse', 'HEAD'], text=True).strip() or None
+        return run_process('git', 'rev-parse', 'HEAD').stdout.strip()


 VERSION_TEMPLATE = '''\


@@ -1,5 +1,6 @@
 import argparse
 import functools
+import subprocess


 def read_file(fname):

@@ -12,8 +13,8 @@ def write_file(fname, content, mode='w'):
         return f.write(content)


-# Get the version without importing the package
 def read_version(fname='yt_dlp/version.py'):
+    """Get the version without importing the package"""
     exec(compile(read_file(fname), fname, 'exec'))
     return locals()['__version__']

@@ -33,3 +34,13 @@ def get_filename_args(has_infile=False, default_outfile=None):

 def compose_functions(*functions):
     return lambda x: functools.reduce(lambda y, f: f(y), functions, x)
+
+
+def run_process(*args, **kwargs):
+    kwargs.setdefault('text', True)
+    kwargs.setdefault('check', True)
+    kwargs.setdefault('capture_output', True)
+    if kwargs['text']:
+        kwargs.setdefault('encoding', 'utf-8')
+        kwargs.setdefault('errors', 'replace')
+    return subprocess.run(args, **kwargs)
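The new `run_process` helper is a thin default-setting wrapper over `subprocess.run`. A standalone sketch mirroring the added code, exercised with a trivial command (the toy invocation below is illustrative, not from the patch):

```python
import subprocess
import sys

def run_process(*args, **kwargs):
    # defaults: text mode, raise on non-zero exit, capture stdout/stderr,
    # decode as UTF-8 and replace undecodable bytes
    kwargs.setdefault('text', True)
    kwargs.setdefault('check', True)
    kwargs.setdefault('capture_output', True)
    if kwargs['text']:
        kwargs.setdefault('encoding', 'utf-8')
        kwargs.setdefault('errors', 'replace')
    return subprocess.run(args, **kwargs)

# capture the output of a trivial command
result = run_process(sys.executable, '-c', 'print("hello")')
print(result.stdout.strip())
```

Because `check=True` is the default, callers like `get_git_head` only need `contextlib.suppress(Exception)` instead of inspecting return codes.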


@@ -663,6 +663,8 @@
 - **Lecturio**: [*lecturio*](## "netrc machine")
 - **LecturioCourse**: [*lecturio*](## "netrc machine")
 - **LecturioDeCourse**: [*lecturio*](## "netrc machine")
+- **LeFigaroVideoEmbed**
+- **LeFigaroVideoSection**
 - **LEGO**
 - **Lemonde**
 - **Lenta**

@@ -696,6 +698,7 @@
 - **LoveHomePorn**
 - **LRTStream**
 - **LRTVOD**
+- **Lumni**
 - **lynda**: [*lynda*](## "netrc machine") lynda.com videos
 - **lynda:course**: [*lynda*](## "netrc machine") lynda.com online courses
 - **m6**

@@ -1365,6 +1368,7 @@
 - **Tele13**
 - **Tele5**
 - **TeleBruxelles**
+- **TelecaribePlay**
 - **Telecinco**: telecinco.es, cuatro.com and mediaset.es
 - **Telegraaf**
 - **telegram:embed**

@@ -1440,10 +1444,9 @@
 - **TubiTv**: [*tubitv*](## "netrc machine")
 - **TubiTvShow**
 - **Tumblr**: [*tumblr*](## "netrc machine")
-- **tunein:clip**
-- **tunein:program**
-- **tunein:station**
-- **tunein:topic**
+- **TuneInPodcast**
+- **TuneInPodcastEpisode**
+- **TuneInStation**
 - **TunePk**
 - **Turbo**
 - **tv.dfb.de**

@@ -1695,6 +1698,7 @@
 - **XTubeUser**: XTube user profile
 - **Xuite**: 隨意窩Xuite影音
 - **XVideos**
+- **xvideos:quickies**
 - **XXXYMovies**
 - **Yahoo**: Yahoo screen and movies
 - **yahoo:gyao**


@@ -3784,7 +3784,7 @@ class YoutubeDL:
         klass = type(self)
         write_debug(join_nonempty(
             f'{"yt-dlp" if REPOSITORY == "yt-dlp/yt-dlp" else REPOSITORY} version',
-            __version__ + {'stable': '', 'nightly': '*'}.get(CHANNEL, f' <{CHANNEL}>'),
+            f'{CHANNEL}@{__version__}',
             f'[{RELEASE_GIT_HEAD[:9]}]' if RELEASE_GIT_HEAD else '',
             '' if source == 'unknown' else f'({source})',
             '' if _IN_CLI else 'API' if klass == YoutubeDL else f'API:{self.__module__}.{klass.__qualname__}',


@@ -914,6 +914,10 @@ from .leeco import (
     LePlaylistIE,
     LetvCloudIE,
 )
+from .lefigaro import (
+    LeFigaroVideoEmbedIE,
+    LeFigaroVideoSectionIE,
+)
 from .lego import LEGOIE
 from .lemonde import LemondeIE
 from .lenta import LentaIE

@@ -962,6 +966,9 @@ from .lrt import (
     LRTVODIE,
     LRTStreamIE
 )
+from .lumni import (
+    LumniIE
+)
 from .lynda import (
     LyndaIE,
     LyndaCourseIE

@@ -1851,6 +1858,7 @@ from .ted import (
 from .tele5 import Tele5IE
 from .tele13 import Tele13IE
 from .telebruxelles import TeleBruxellesIE
+from .telecaribe import TelecaribePlayIE
 from .telecinco import TelecincoIE
 from .telegraaf import TelegraafIE
 from .telegram import TelegramEmbedIE

@@ -1963,10 +1971,9 @@ from .tubitv import (
 )
 from .tumblr import TumblrIE
 from .tunein import (
-    TuneInClipIE,
     TuneInStationIE,
-    TuneInProgramIE,
-    TuneInTopicIE,
+    TuneInPodcastIE,
+    TuneInPodcastEpisodeIE,
     TuneInShortenerIE,
 )
 from .tunepk import TunePkIE

@@ -2315,7 +2322,10 @@ from .xnxx import XNXXIE
 from .xstream import XstreamIE
 from .xtube import XTubeUserIE, XTubeIE
 from .xuite import XuiteIE
-from .xvideos import XVideosIE
+from .xvideos import (
+    XVideosIE,
+    XVideosQuickiesIE
+)
 from .xxxymovies import XXXYMoviesIE
 from .yahoo import (
     YahooIE,


@@ -81,7 +81,7 @@ class BilibiliBaseIE(InfoExtractor):
                 f'{line["content"]}\n\n')
         return srt_data

-    def _get_subtitles(self, video_id, initial_state, cid):
+    def _get_subtitles(self, video_id, aid, cid):
         subtitles = {
             'danmaku': [{
                 'ext': 'xml',

@@ -89,7 +89,8 @@ class BilibiliBaseIE(InfoExtractor):
             }]
         }

-        for s in traverse_obj(initial_state, ('videoData', 'subtitle', 'list')) or []:
+        video_info_json = self._download_json(f'https://api.bilibili.com/x/player/v2?aid={aid}&cid={cid}', video_id)
+        for s in traverse_obj(video_info_json, ('data', 'subtitle', 'subtitles', ...)):
             subtitles.setdefault(s['lan'], []).append({
                 'ext': 'srt',
                 'data': self.json2srt(self._download_json(s['subtitle_url'], video_id))

@@ -331,7 +332,7 @@ class BiliBiliIE(BilibiliBaseIE):
             'timestamp': traverse_obj(initial_state, ('videoData', 'pubdate')),
             'duration': float_or_none(play_info.get('timelength'), scale=1000),
             'chapters': self._get_chapters(aid, cid),
-            'subtitles': self.extract_subtitles(video_id, initial_state, cid),
+            'subtitles': self.extract_subtitles(video_id, aid, cid),
             '__post_extractor': self.extract_comments(aid),
             'http_headers': {'Referer': url},
         }
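The `json2srt` call in the subtitle hunk above converts Bilibili's JSON subtitle body into SRT text. A rough standalone sketch of that conversion; the timecode formatter here is a reimplementation assumed to match yt-dlp's `srt_subtitles_timecode` helper, and the input shape is inferred from the `f'{line["content"]}\n\n'` fragment:

```python
def srt_timecode(seconds):
    # format seconds as an SRT timecode: HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f'{h:02d}:{m:02d}:{s:02d},{ms:03d}'

def json2srt(json_data):
    # {'body': [{'from': 0.0, 'to': 1.5, 'content': '...'}]} -> SRT text
    return ''.join(
        f'{idx}\n{srt_timecode(line["from"])} --> {srt_timecode(line["to"])}\n{line["content"]}\n\n'
        for idx, line in enumerate(json_data.get('body') or [], 1))

print(json2srt({'body': [{'from': 0.0, 'to': 1.5, 'content': 'hello'}]}))
```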


@@ -3649,6 +3649,38 @@ class InfoExtractor:
                 or urllib.parse.unquote(os.path.splitext(url_basename(url))[0])
                 or default)

+    def _extract_chapters_helper(self, chapter_list, start_function, title_function, duration, strict=True):
+        if not duration:
+            return
+        chapter_list = [{
+            'start_time': start_function(chapter),
+            'title': title_function(chapter),
+        } for chapter in chapter_list or []]
+        if not strict:
+            chapter_list.sort(key=lambda c: c['start_time'] or 0)
+
+        chapters = [{'start_time': 0}]
+        for idx, chapter in enumerate(chapter_list):
+            if chapter['start_time'] is None:
+                self.report_warning(f'Incomplete chapter {idx}')
+            elif chapters[-1]['start_time'] <= chapter['start_time'] <= duration:
+                chapters.append(chapter)
+            elif chapter not in chapters:
+                self.report_warning(
+                    f'Invalid start time ({chapter["start_time"]} < {chapters[-1]["start_time"]}) for chapter "{chapter["title"]}"')
+        return chapters[1:]
+
+    def _extract_chapters_from_description(self, description, duration):
+        duration_re = r'(?:\d+:)?\d{1,2}:\d{2}'
+        sep_re = r'(?m)^\s*(%s)\b\W*\s(%s)\s*$'
+        return self._extract_chapters_helper(
+            re.findall(sep_re % (duration_re, r'.+?'), description or ''),
+            start_function=lambda x: parse_duration(x[0]), title_function=lambda x: x[1],
+            duration=duration, strict=False) or self._extract_chapters_helper(
+            re.findall(sep_re % (r'.+?', duration_re), description or ''),
+            start_function=lambda x: parse_duration(x[1]), title_function=lambda x: x[0],
+            duration=duration, strict=False)
+
     @staticmethod
     def _availability(is_private=None, needs_premium=None, needs_subscription=None, needs_auth=None, is_unlisted=None):
         all_known = all(map(
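Stripped of the `InfoExtractor` plumbing, the timestamp-line heuristic added above can be sketched as follows. The names are hypothetical and `parse_duration` is simplified to `[hh:]mm:ss` only (yt-dlp's real helper handles many more formats):

```python
import re

def parse_duration(ts):
    # minimal [hh:]mm:ss parser standing in for yt-dlp's parse_duration
    seconds = 0
    for part in ts.split(':'):
        seconds = seconds * 60 + int(part)
    return seconds

def chapters_from_description(description, duration):
    # lines that start with a timestamp, e.g. "1:30 Setup"
    matches = re.findall(r'(?m)^\s*((?:\d+:)?\d{1,2}:\d{2})\b\W*\s(.+?)\s*$', description or '')
    parsed = sorted((parse_duration(ts), title) for ts, title in matches)
    chapters, last_start = [], 0
    for start, title in parsed:
        if last_start <= start <= duration:  # drop out-of-range entries
            chapters.append({'start_time': start, 'title': title})
            last_start = start
    return chapters

desc = '0:00 Intro\n1:30 Setup\n12:45 Results'
print(chapters_from_description(desc, 900))
```

The real helper additionally tries the mirrored "title then timestamp" layout and emits warnings instead of silently dropping bad entries.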


@@ -240,7 +240,7 @@ class FiveThirtyEightIE(InfoExtractor):
 class ESPNCricInfoIE(InfoExtractor):
-    _VALID_URL = r'https?://(?:www\.)?espncricinfo\.com/video/[^#$&?/]+-(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?espncricinfo\.com/(?:cricket-)?videos?/[^#$&?/]+-(?P<id>\d+)'
     _TESTS = [{
         'url': 'https://www.espncricinfo.com/video/finch-chasing-comes-with-risks-despite-world-cup-trend-1289135',
         'info_dict': {

@@ -252,6 +252,17 @@ class ESPNCricInfoIE(InfoExtractor):
             'duration': 96,
         },
         'params': {'skip_download': True}
+    }, {
+        'url': 'https://www.espncricinfo.com/cricket-videos/daryl-mitchell-mitchell-santner-is-one-of-the-best-white-ball-spinners-india-vs-new-zealand-1356225',
+        'info_dict': {
+            'id': '1356225',
+            'ext': 'mp4',
+            'description': '"Santner has done it for a long time for New Zealand - we\'re lucky to have him"',
+            'upload_date': '20230128',
+            'title': 'Mitchell: \'Santner is one of the best white-ball spinners at the moment\'',
+            'duration': 87,
+        },
+        'params': {'skip_download': 'm3u8'},
     }]

     def _real_extract(self, url):


@@ -0,0 +1,135 @@
import json
import math
from .common import InfoExtractor
from ..utils import (
InAdvancePagedList,
traverse_obj,
)
class LeFigaroVideoEmbedIE(InfoExtractor):
_VALID_URL = r'https?://video\.lefigaro\.fr/embed/[^?#]+/(?P<id>[\w-]+)'
_TESTS = [{
'url': 'https://video.lefigaro.fr/embed/figaro/video/les-francais-ne-veulent-ils-plus-travailler-suivez-en-direct-le-club-le-figaro-idees/',
'md5': 'e94de44cd80818084352fcf8de1ce82c',
'info_dict': {
'id': 'g9j7Eovo',
'title': 'Les Français ne veulent-ils plus travailler ? Retrouvez Le Club Le Figaro Idées',
'description': 'md5:862b8813148ba4bf10763a65a69dfe41',
'upload_date': '20230216',
'timestamp': 1676581615,
'duration': 3076,
'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
'ext': 'mp4',
},
}, {
'url': 'https://video.lefigaro.fr/embed/figaro/video/intelligence-artificielle-faut-il-sen-mefier/',
'md5': '0b3f10332b812034b3a3eda1ef877c5f',
'info_dict': {
'id': 'LeAgybyc',
'title': 'Intelligence artificielle : faut-il s’en méfier ?',
'description': 'md5:249d136e3e5934a67c8cb704f8abf4d2',
'upload_date': '20230124',
'timestamp': 1674584477,
'duration': 860,
'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
'ext': 'mp4',
},
}]
_WEBPAGE_TESTS = [{
'url': 'https://video.lefigaro.fr/figaro/video/suivez-en-direct-le-club-le-figaro-international-avec-philippe-gelie-9/',
'md5': '3972ddf2d5f8b98699f191687258e2f9',
'info_dict': {
'id': 'QChnbPYA',
'title': 'Où en est le couple franco-allemand ? Retrouvez Le Club Le Figaro International',
'description': 'md5:6f47235b7e7c93b366fd8ebfa10572ac',
'upload_date': '20230123',
'timestamp': 1674503575,
'duration': 3153,
'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
'age_limit': 0,
'ext': 'mp4',
},
}, {
'url': 'https://video.lefigaro.fr/figaro/video/la-philosophe-nathalie-sarthou-lajus-est-linvitee-du-figaro-live/',
'md5': '3ac0a0769546ee6be41ab52caea5d9a9',
'info_dict': {
'id': 'QJzqoNbf',
'title': 'La philosophe Nathalie Sarthou-Lajus est l’invitée du Figaro Live',
'description': 'md5:c586793bb72e726c83aa257f99a8c8c4',
'upload_date': '20230217',
'timestamp': 1676661986,
'duration': 1558,
'thumbnail': r're:^https?://[^?#]+\.(?:jpeg|jpg)',
'age_limit': 0,
'ext': 'mp4',
},
}]
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
player_data = self._search_nextjs_data(webpage, display_id)['props']['pageProps']['pageData']['playerData']
return self.url_result(
f'jwplatform:{player_data["videoId"]}', title=player_data.get('title'),
description=player_data.get('description'), thumbnail=player_data.get('poster'))
class LeFigaroVideoSectionIE(InfoExtractor):
_VALID_URL = r'https?://video\.lefigaro\.fr/figaro/(?P<id>[\w-]+)/?(?:[#?]|$)'
_TESTS = [{
'url': 'https://video.lefigaro.fr/figaro/le-club-le-figaro-idees/',
'info_dict': {
'id': 'le-club-le-figaro-idees',
'title': 'Le Club Le Figaro Idées',
},
'playlist_mincount': 14,
}, {
'url': 'https://video.lefigaro.fr/figaro/factu/',
'info_dict': {
'id': 'factu',
'title': 'Factu',
},
'playlist_mincount': 519,
}]
_PAGE_SIZE = 20
def _get_api_response(self, display_id, page_num, note=None):
return self._download_json(
'https://api-graphql.lefigaro.fr/graphql', display_id, note=note,
query={
'id': 'flive-website_UpdateListPage_1fb260f996bca2d78960805ac382544186b3225f5bedb43ad08b9b8abef79af6',
'variables': json.dumps({
'slug': display_id,
'videosLimit': self._PAGE_SIZE,
'sort': 'DESC',
'order': 'PUBLISHED_AT',
'page': page_num,
}).encode(),
})
def _real_extract(self, url):
display_id = self._match_id(url)
initial_response = self._get_api_response(display_id, page_num=1)['data']['playlist']
def page_func(page_num):
api_response = self._get_api_response(display_id, page_num + 1, note=f'Downloading page {page_num + 1}')
return [self.url_result(
video['embedUrl'], LeFigaroVideoEmbedIE, **traverse_obj(video, {
'title': 'name',
'description': 'description',
'thumbnail': 'thumbnailUrl',
})) for video in api_response['data']['playlist']['jsonLd'][0]['itemListElement']]
entries = InAdvancePagedList(
page_func, math.ceil(initial_response['videoCount'] / self._PAGE_SIZE), self._PAGE_SIZE)
return self.playlist_result(entries, playlist_id=display_id, playlist_title=initial_response.get('title'))
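`InAdvancePagedList` above only needs the total page count up front (`math.ceil(videoCount / PAGE_SIZE)`) and a function returning the entries of one page. A toy stand-in illustrating the idea (the real class in `yt_dlp.utils` also supports lazy slicing; the fake API below is invented for illustration):

```python
import math

def paged_entries(page_func, total_items, page_size):
    # fetch pages one by one; page_func(page_num) returns the items of that page
    for page_num in range(math.ceil(total_items / page_size)):
        yield from page_func(page_num)

# a fake API returning 3 items per page, 8 items total
def fake_page(page_num):
    start = page_num * 3
    return list(range(start, min(start + 3, 8)))

print(list(paged_entries(fake_page, 8, 3)))
```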

yt_dlp/extractor/lumni.py

@@ -0,0 +1,24 @@
from .common import InfoExtractor
from .francetv import FranceTVIE
class LumniIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?lumni\.fr/video/(?P<id>[\w-]+)'
_TESTS = [{
'url': 'https://www.lumni.fr/video/l-homme-et-son-environnement-dans-la-revolution-industrielle',
'md5': '960e8240c4f2c7a20854503a71e52f5e',
'info_dict': {
'id': 'd2b9a4e5-a526-495b-866c-ab72737e3645',
'ext': 'mp4',
'title': "L'homme et son environnement dans la révolution industrielle - L'ère de l'homme",
'thumbnail': 'https://assets.webservices.francetelevisions.fr/v1/assets/images/a7/17/9f/a7179f5f-63a5-4e11-8d4d-012ab942d905.jpg',
'duration': 230,
}
}]
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
video_id = self._html_search_regex(
r'<div[^>]+data-factoryid\s*=\s*["\']([^"\']+)', webpage, 'video id')
return self.url_result(f'francetv:{video_id}', FranceTVIE, video_id)


@@ -18,7 +18,7 @@ class PrankCastIE(InfoExtractor):
             'cast': ['Devonanustart', 'Phonelosers'],
             'description': '',
             'categories': ['prank'],
-            'tags': ['prank call', 'prank'],
+            'tags': ['prank call', 'prank', 'live show'],
             'upload_date': '20220825'
         }
     }, {

@@ -35,7 +35,7 @@ class PrankCastIE(InfoExtractor):
             'cast': ['phonelosers'],
             'description': '',
             'categories': ['prank'],
-            'tags': ['prank call', 'prank'],
+            'tags': ['prank call', 'prank', 'live show'],
             'upload_date': '20221006'
         }
     }]

@@ -62,5 +62,5 @@ class PrankCastIE(InfoExtractor):
             'cast': list(filter(None, [uploader] + traverse_obj(guests_json, (..., 'name')))),
             'description': json_info.get('broadcast_description'),
             'categories': [json_info.get('broadcast_category')],
-            'tags': self._parse_json(json_info.get('broadcast_tags') or '{}', video_id)
+            'tags': try_call(lambda: json_info['broadcast_tags'].split(','))
         }


@@ -25,8 +25,7 @@ class RutubeBaseIE(InfoExtractor):
             video_id, 'Downloading video JSON',
             'Unable to download video JSON', query=query)

-    @staticmethod
-    def _extract_info(video, video_id=None, require_title=True):
+    def _extract_info(self, video, video_id=None, require_title=True):
         title = video['title'] if require_title else video.get('title')

         age_limit = video.get('is_adult')

@@ -35,13 +34,15 @@ class RutubeBaseIE(InfoExtractor):
         uploader_id = try_get(video, lambda x: x['author']['id'])
         category = try_get(video, lambda x: x['category']['name'])
+        description = video.get('description')
+        duration = int_or_none(video.get('duration'))

         return {
             'id': video.get('id') or video_id if video_id else video['id'],
             'title': title,
-            'description': video.get('description'),
+            'description': description,
             'thumbnail': video.get('thumbnail_url'),
-            'duration': int_or_none(video.get('duration')),
+            'duration': duration,
             'uploader': try_get(video, lambda x: x['author']['name']),
             'uploader_id': compat_str(uploader_id) if uploader_id else None,
             'timestamp': unified_timestamp(video.get('created_ts')),

@@ -50,6 +51,7 @@ class RutubeBaseIE(InfoExtractor):
             'view_count': int_or_none(video.get('hits')),
             'comment_count': int_or_none(video.get('comments_count')),
             'is_live': bool_or_none(video.get('is_livestream')),
+            'chapters': self._extract_chapters_from_description(description, duration),
         }

     def _download_and_extract_info(self, video_id, query=None):

@@ -111,8 +113,9 @@ class RutubeIE(RutubeBaseIE):
             'view_count': int,
             'thumbnail': 'http://pic.rutubelist.ru/video/d2/a0/d2a0aec998494a396deafc7ba2c82add.jpg',
             'category': ['Новости и СМИ'],
+            'chapters': [],
         },
+        'expected_warnings': ['Unable to download f4m'],
     }, {
         'url': 'http://rutube.ru/play/embed/a10e53b86e8f349080f718582ce4c661',
         'only_matching': True,

@@ -142,7 +145,28 @@ class RutubeIE(RutubeBaseIE):
             'view_count': int,
             'thumbnail': 'http://pic.rutubelist.ru/video/f2/d4/f2d42b54be0a6e69c1c22539e3152156.jpg',
             'category': ['Видеоигры'],
+            'chapters': [],
         },
+        'expected_warnings': ['Unable to download f4m'],
+    }, {
+        'url': 'https://rutube.ru/video/c65b465ad0c98c89f3b25cb03dcc87c6/',
+        'info_dict': {
+            'id': 'c65b465ad0c98c89f3b25cb03dcc87c6',
+            'ext': 'mp4',
+            'chapters': 'count:4',
+            'category': ['Бизнес и предпринимательство'],
+            'description': 'md5:252feac1305257d8c1bab215cedde75d',
+            'thumbnail': 'http://pic.rutubelist.ru/video/71/8f/718f27425ea9706073eb80883dd3787b.png',
+            'duration': 782,
+            'age_limit': 0,
+            'uploader_id': '23491359',
+            'timestamp': 1677153329,
+            'view_count': int,
+            'upload_date': '20230223',
+            'title': 'Бизнес с нуля: найм сотрудников. Интервью с директором строительной компании',
+            'uploader': 'Стас Быков',
+        },
+        'expected_warnings': ['Unable to download f4m'],
     }]

     @classmethod


@@ -1,10 +1,9 @@
from .common import InfoExtractor from .common import InfoExtractor
from ..utils import ( from ..utils import (
format_field, join_nonempty,
strip_or_none,
traverse_obj, traverse_obj,
unified_timestamp, unified_timestamp,
strip_or_none
) )
@@ -13,98 +12,131 @@ class SportDeutschlandIE(InfoExtractor):
_TESTS = [{ _TESTS = [{
'url': 'https://sportdeutschland.tv/blauweissbuchholztanzsport/buchholzer-formationswochenende-2023-samstag-1-bundesliga-landesliga', 'url': 'https://sportdeutschland.tv/blauweissbuchholztanzsport/buchholzer-formationswochenende-2023-samstag-1-bundesliga-landesliga',
'info_dict': { 'info_dict': {
'id': '983758e9-5829-454d-a3cf-eb27bccc3c94', 'id': '9839a5c7-0dbb-48a8-ab63-3b408adc7b54',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Buchholzer Formationswochenende 2023 - Samstag - 1. Bundesliga / Landesliga', 'title': 'Buchholzer Formationswochenende 2023 - Samstag - 1. Bundesliga / Landesliga',
'display_id': 'blauweissbuchholztanzsport/buchholzer-formationswochenende-2023-samstag-1-bundesliga-landesliga',
'description': 'md5:a288c794a5ee69e200d8f12982f81a87', 'description': 'md5:a288c794a5ee69e200d8f12982f81a87',
'live_status': 'was_live', 'live_status': 'was_live',
'channel': 'Blau-Weiss Buchholz Tanzsport', 'channel': 'Blau-Weiss Buchholz Tanzsport',
'channel_url': 'https://sportdeutschland.tv/blauweissbuchholztanzsport', 'channel_url': 'https://sportdeutschland.tv/blauweissbuchholztanzsport',
'channel_id': '93ec33c9-48be-43b6-b404-e016b64fdfa3', 'channel_id': '93ec33c9-48be-43b6-b404-e016b64fdfa3',
'display_id': '9839a5c7-0dbb-48a8-ab63-3b408adc7b54',
'duration': 32447, 'duration': 32447,
'upload_date': '20230114', 'upload_date': '20230114',
'timestamp': 1673730018.0, 'timestamp': 1673733618,
} }
}, { }, {
'url': 'https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0', 'url': 'https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0',
'info_dict': { 'info_dict': {
'id': '95b97d9a-04f6-4880-9039-182985c33943', 'id': '95c80c52-6b9a-4ae9-9197-984145adfced',
'ext': 'mp4', 'ext': 'mp4',
'title': 'BWF Tour: 1. Runde Feld 1 - YONEX GAINWARD German Open 2022', 'title': 'BWF Tour: 1. Runde Feld 1 - YONEX GAINWARD German Open 2022',
'display_id': 'deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0',
'description': 'md5:2afb5996ceb9ac0b2ac81f563d3a883e', 'description': 'md5:2afb5996ceb9ac0b2ac81f563d3a883e',
'live_status': 'was_live', 'live_status': 'was_live',
'channel': 'Deutscher Badminton Verband', 'channel': 'Deutscher Badminton Verband',
'channel_url': 'https://sportdeutschland.tv/deutscherbadmintonverband', 'channel_url': 'https://sportdeutschland.tv/deutscherbadmintonverband',
'channel_id': '93ca5866-2551-49fc-8424-6db35af58920', 'channel_id': '93ca5866-2551-49fc-8424-6db35af58920',
'display_id': '95c80c52-6b9a-4ae9-9197-984145adfced',
'duration': 41097, 'duration': 41097,
'upload_date': '20220309', 'upload_date': '20220309',
'timestamp': 1646860727.0, 'timestamp': 1646860727.0,
} }
}, {
'url': 'https://sportdeutschland.tv/ggcbremen/formationswochenende-latein-2023',
'info_dict': {
'id': '9889785e-55b0-4d97-a72a-ce9a9f157cce',
'title': 'Formationswochenende Latein 2023 - Samstag',
'display_id': 'ggcbremen/formationswochenende-latein-2023',
'description': 'md5:6e4060d40ff6a8f8eeb471b51a8f08b2',
'live_status': 'was_live',
'channel': 'Grün-Gold-Club Bremen e.V.',
'channel_id': '9888f04e-bb46-4c7f-be47-df960a4167bb',
'channel_url': 'https://sportdeutschland.tv/ggcbremen',
},
'playlist_count': 3,
'playlist': [{
'info_dict': {
'id': '988e1fea-9d44-4fab-8c72-3085fb667547',
'ext': 'mp4',
'channel_url': 'https://sportdeutschland.tv/ggcbremen',
'channel_id': '9888f04e-bb46-4c7f-be47-df960a4167bb',
'channel': 'Grün-Gold-Club Bremen e.V.',
'duration': 86,
'title': 'Formationswochenende Latein 2023 - Samstag Part 1',
'upload_date': '20230225',
'timestamp': 1677349909,
'live_status': 'was_live',
}
}]
}, {
'url': 'https://sportdeutschland.tv/dtb/gymnastik-international-tag-1',
'info_dict': {
'id': '95d71b8a-370a-4b87-ad16-94680da18528',
'ext': 'mp4',
'title': r're:Gymnastik International - Tag 1 .+',
'display_id': 'dtb/gymnastik-international-tag-1',
'channel_id': '936ecef1-2f4a-4e08-be2f-68073cb7ecab',
'channel': 'Deutscher Turner-Bund',
'channel_url': 'https://sportdeutschland.tv/dtb',
'description': 'md5:07a885dde5838a6f0796ee21dc3b0c52',
'live_status': 'is_live',
},
'skip': 'live',
}] }]
def _process_video(self, asset_id, video):
is_live = video['type'] == 'mux_live'
token = self._download_json(
+            f'https://api.sportdeutschland.tv/api/frontend/asset-token/{asset_id}',
+            video['id'], query={'type': video['type'], 'playback_id': video['src']})['token']
+        formats, subtitles = self._extract_m3u8_formats_and_subtitles(
+            f'https://stream.mux.com/{video["src"]}.m3u8?token={token}', video['id'], live=is_live)
+
+        return {
+            'is_live': is_live,
+            'formats': formats,
+            'subtitles': subtitles,
+            **traverse_obj(video, {
+                'id': 'id',
+                'duration': ('duration', {lambda x: float(x) > 0 and float(x)}),
+                'timestamp': ('created_at', {unified_timestamp})
+            }),
+        }
+
     def _real_extract(self, url):
         display_id = self._match_id(url)
         meta = self._download_json(
-            'https://api.sportdeutschland.tv/api/stateless/frontend/assets/' + display_id,
+            f'https://api.sportdeutschland.tv/api/stateless/frontend/assets/{display_id}',
             display_id, query={'access_token': 'true'})
-        asset_id = traverse_obj(meta, 'id', 'uuid')
+
         info = {
-            'id': asset_id,
-            'channel_url': format_field(meta, ('profile', 'slug'), 'https://sportdeutschland.tv/%s'),
+            'display_id': display_id,
             **traverse_obj(meta, {
+                'id': (('id', 'uuid'), ),
                 'title': (('title', 'name'), {strip_or_none}),
                 'description': 'description',
                 'channel': ('profile', 'name'),
                 'channel_id': ('profile', 'id'),
                 'is_live': 'currently_live',
-                'was_live': 'was_live'
+                'was_live': 'was_live',
+                'channel_url': ('profile', 'slug', {lambda x: f'https://sportdeutschland.tv/{x}'}),
             }, get_all=False)
         }
 
-        videos = meta.get('videos') or []
-
-        if len(videos) > 1:
-            info.update({
-                '_type': 'multi_video',
-                'entries': self.processVideoOrStream(asset_id, video)
-            } for video in enumerate(videos) if video.get('formats'))
-
-        elif len(videos) == 1:
-            info.update(
-                self.processVideoOrStream(asset_id, videos[0])
-            )
-
-        livestream = meta.get('livestream')
-        if livestream is not None:
-            info.update(
-                self.processVideoOrStream(asset_id, livestream)
-            )
-
-        return info
-
-    def process_video_or_stream(self, asset_id, video):
-        video_id = video['id']
-        video_src = video['src']
-        video_type = video['type']
-
-        token = self._download_json(
-            f'https://api.sportdeutschland.tv/api/frontend/asset-token/{asset_id}',
-            video_id, query={'type': video_type, 'playback_id': video_src})['token']
-        formats = self._extract_m3u8_formats(f'https://stream.mux.com/{video_src}.m3u8?token={token}', video_id)
-
-        video_data = {
-            'display_id': video_id,
-            'formats': formats,
-        }
-
-        if video_type == 'mux_vod':
-            video_data.update({
-                'duration': video.get('duration'),
-                'timestamp': unified_timestamp(video.get('created_at'))
-            })
-
-        return video_data
+        parts = traverse_obj(meta, (('livestream', ('videos', ...)), ))
+        entries = [{
+            'title': join_nonempty(info.get('title'), f'Part {i}', delim=' '),
+            **traverse_obj(info, {'channel': 'channel', 'channel_id': 'channel_id',
+                                  'channel_url': 'channel_url', 'was_live': 'was_live'}),
+            **self._process_video(info['id'], video),
+        } for i, video in enumerate(parts, 1)]
+
+        return {
+            '_type': 'multi_video',
+            **info,
+            'entries': entries,
+        } if len(entries) > 1 else {
+            **info,
+            **entries[0],
+            'title': info.get('title'),
+        }
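The rewritten extractor leans heavily on `traverse_obj` dict templates (`**traverse_obj(meta, {...}, get_all=False)`). A rough standalone sketch of the idea — not yt-dlp's actual implementation — where a tuple-of-tuples means "first path that yields a value" and a `{callable}` step applies a transform:

```python
def traverse(source, path):
    # Resolve one path; a tuple at the head means "try each alternative".
    if isinstance(path, str):
        path = (path,)
    if path and isinstance(path[0], tuple):
        for alt in path[0]:
            value = traverse(source, (alt, *path[1:]))
            if value is not None:
                return value
        return None
    value = source
    for key in path:
        if isinstance(key, set):  # stand-in for yt-dlp's {callable} transform step
            value = next(iter(key))(value) if value is not None else None
        elif isinstance(value, dict):
            value = value.get(key)
        else:
            return None
    return value


def traverse_template(source, template):
    # Build a dict from {output_key: path}, dropping keys that resolve to None
    return {key: value for key, path in template.items()
            if (value := traverse(source, path)) is not None}


meta = {'uuid': 'abc', 'profile': {'name': 'Channel', 'slug': 'chan'}}
info = traverse_template(meta, {
    'id': (('id', 'uuid'), ),  # first of 'id'/'uuid' that exists
    'channel': ('profile', 'name'),
    'channel_url': ('profile', 'slug', {lambda x: f'https://sportdeutschland.tv/{x}'}),
})
```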

View File

@@ -0,0 +1,77 @@
import re
from .common import InfoExtractor
from ..utils import traverse_obj
class TelecaribePlayIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?play\.telecaribe\.co/(?P<id>[\w-]+)'
_TESTS = [{
'url': 'https://www.play.telecaribe.co/breicok',
'info_dict': {
'id': 'breicok',
'title': 'Breicok',
},
'playlist_count': 7,
}, {
'url': 'https://www.play.telecaribe.co/si-fue-gol-de-yepes',
'info_dict': {
'id': 'si-fue-gol-de-yepes',
'title': 'Sí Fue Gol de Yepes',
},
'playlist_count': 6,
}, {
'url': 'https://www.play.telecaribe.co/ciudad-futura',
'info_dict': {
'id': 'ciudad-futura',
'title': 'Ciudad Futura',
},
'playlist_count': 10,
}, {
'url': 'https://www.play.telecaribe.co/live',
'info_dict': {
'id': 'live',
'title': r're:^Señal en vivo',
'live_status': 'is_live',
'ext': 'mp4',
},
'params': {
'skip_download': 'Livestream',
}
}]
def _download_player_webpage(self, webpage, display_id):
page_id = self._search_regex(
(r'window.firstPageId\s*=\s*["\']([^"\']+)', r'<div[^>]+id\s*=\s*"pageBackground_([^"]+)'),
webpage, 'page_id')
props = self._download_json(self._search_regex(
rf'<link[^>]+href\s*=\s*"([^"]+)"[^>]+id\s*=\s*"features_{page_id}"',
webpage, 'json_props_url'), display_id)['props']['render']['compProps']
return self._download_webpage(traverse_obj(props, (..., 'url'))[-1], display_id)
def _get_clean_title(self, title):
return re.sub(r'\s*\|\s*Telecaribe\s*VOD', '', title or '').strip() or None
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
player = self._download_player_webpage(webpage, display_id)
if display_id != 'live':
return self.playlist_from_matches(
re.findall(r'<a[^>]+href\s*=\s*"([^"]+\.mp4)', player), display_id,
self._get_clean_title(self._og_search_title(webpage)))
formats, subtitles = self._extract_m3u8_formats_and_subtitles(
self._search_regex(r'(?:let|const|var)\s+source\s*=\s*["\']([^"\']+)', player, 'm3u8 url'),
display_id, 'mp4')
return {
'id': display_id,
'title': self._get_clean_title(self._og_search_title(webpage)),
'formats': formats,
'subtitles': subtitles,
'is_live': True,
}

View File

@@ -21,17 +21,36 @@ class TubeTuGrazBaseIE(InfoExtractor):
         if not urlh:
             return
 
-        urlh = self._request_webpage(
+        content, urlh = self._download_webpage_handle(
             urlh.geturl(), None, fatal=False, headers={'referer': urlh.geturl()},
-            note='logging in', errnote='unable to log in', data=urlencode_postdata({
+            note='logging in', errnote='unable to log in',
+            data=urlencode_postdata({
                 'lang': 'de',
                 '_eventId_proceed': '',
                 'j_username': username,
                 'j_password': password
             }))
+        if not urlh or urlh.geturl() == 'https://tube.tugraz.at/paella/ui/index.html':
+            return
 
-        if urlh and urlh.geturl() != 'https://tube.tugraz.at/paella/ui/index.html':
+        if not self._html_search_regex(
+                r'<p\b[^>]*>(Bitte geben Sie einen OTP-Wert ein:)</p>',
+                content, 'TFA prompt', default=None):
             self.report_warning('unable to login: incorrect password')
+            return
+
+        content, urlh = self._download_webpage_handle(
+            urlh.geturl(), None, fatal=False, headers={'referer': urlh.geturl()},
+            note='logging in with TFA', errnote='unable to log in with TFA',
+            data=urlencode_postdata({
+                'lang': 'de',
+                '_eventId_proceed': '',
+                'j_tokenNumber': self._get_tfa_info(),
+            }))
+        if not urlh or urlh.geturl() == 'https://tube.tugraz.at/paella/ui/index.html':
+            return
+
+        self.report_warning('unable to login: incorrect TFA code')
 
     def _extract_episode(self, episode_info):
         id = episode_info.get('id')
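For reference, the `urlencode_postdata` helper used throughout this login flow is essentially `urllib.parse.urlencode(...).encode()` — it turns the form fields into a URL-encoded POST body. A minimal stand-in (the credentials here are placeholders):

```python
import urllib.parse


# Rough stand-in for yt-dlp's urlencode_postdata helper used in the login
# flow above: URL-encode the form fields into bytes for a POST body.
def urlencode_postdata(fields):
    return urllib.parse.urlencode(fields).encode()


body = urlencode_postdata({
    'lang': 'de',
    '_eventId_proceed': '',
    'j_username': 'user@example.com',  # placeholder credentials
    'j_password': 'hunter2',
})
```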

View File

@@ -1,149 +1,201 @@
-import re
+import urllib.parse
 
 from .common import InfoExtractor
-from ..utils import ExtractorError
-from ..compat import compat_urlparse
+from ..utils import (
+    OnDemandPagedList,
+    determine_ext,
+    parse_iso8601,
+    traverse_obj,
+)
 
 
 class TuneInBaseIE(InfoExtractor):
-    _API_BASE_URL = 'http://tunein.com/tuner/tune/'
+    _VALID_URL_BASE = r'https?://(?:www\.)?tunein\.com'
 
-    def _real_extract(self, url):
-        content_id = self._match_id(url)
-
-        content_info = self._download_json(
-            self._API_BASE_URL + self._API_URL_QUERY % content_id,
-            content_id, note='Downloading JSON metadata')
-
-        title = content_info['Title']
-        thumbnail = content_info.get('Logo')
-        location = content_info.get('Location')
-        streams_url = content_info.get('StreamUrl')
-        if not streams_url:
-            raise ExtractorError('No downloadable streams found', expected=True)
-        if not streams_url.startswith('http://'):
-            streams_url = compat_urlparse.urljoin(url, streams_url)
+    def _extract_metadata(self, webpage, content_id):
+        return self._search_json(r'window.INITIAL_STATE=', webpage, 'hydration', content_id, fatal=False)
 
+    def _extract_formats_and_subtitles(self, content_id):
         streams = self._download_json(
-            streams_url, content_id, note='Downloading stream data',
-            transform_source=lambda s: re.sub(r'^\s*\((.*)\);\s*$', r'\1', s))['Streams']
+            f'https://opml.radiotime.com/Tune.ashx?render=json&formats=mp3,aac,ogg,flash,hls&id={content_id}',
+            content_id)['body']
 
-        is_live = None
-        formats = []
+        formats, subtitles = [], {}
         for stream in streams:
-            if stream.get('Type') == 'Live':
-                is_live = True
-            reliability = stream.get('Reliability')
-            format_note = (
-                'Reliability: %d%%' % reliability
-                if reliability is not None else None)
-            formats.append({
-                'preference': (
-                    0 if reliability is None or reliability > 90
-                    else 1),
-                'abr': stream.get('Bandwidth'),
-                'ext': stream.get('MediaType').lower(),
-                'acodec': stream.get('MediaType'),
-                'vcodec': 'none',
-                'url': stream.get('Url'),
-                'source_preference': reliability,
-                'format_note': format_note,
-            })
+            if stream.get('media_type') == 'hls':
+                fmts, subs = self._extract_m3u8_formats_and_subtitles(stream['url'], content_id, fatal=False)
+                formats.extend(fmts)
+                self._merge_subtitles(subs, target=subtitles)
+            elif determine_ext(stream['url']) == 'pls':
+                playlist_content = self._download_webpage(stream['url'], content_id)
+                formats.append({
+                    'url': self._search_regex(r'File1=(.*)', playlist_content, 'url', fatal=False),
+                    'abr': stream.get('bitrate'),
+                    'ext': stream.get('media_type'),
+                })
+            else:
+                formats.append({
+                    'url': stream['url'],
+                    'abr': stream.get('bitrate'),
+                    'ext': stream.get('media_type'),
+                })
 
-        return {
-            'id': content_id,
-            'title': title,
-            'formats': formats,
-            'thumbnail': thumbnail,
-            'location': location,
-            'is_live': is_live,
-        }
-
-
-class TuneInClipIE(TuneInBaseIE):
-    IE_NAME = 'tunein:clip'
-    _VALID_URL = r'https?://(?:www\.)?tunein\.com/station/.*?audioClipId\=(?P<id>\d+)'
-    _API_URL_QUERY = '?tuneType=AudioClip&audioclipId=%s'
-
-    _TESTS = [{
-        'url': 'http://tunein.com/station/?stationId=246119&audioClipId=816',
-        'md5': '99f00d772db70efc804385c6b47f4e77',
-        'info_dict': {
-            'id': '816',
-            'title': '32m',
-            'ext': 'mp3',
-        },
-    }]
+        return formats, subtitles
 
 
 class TuneInStationIE(TuneInBaseIE):
-    IE_NAME = 'tunein:station'
-    _VALID_URL = r'https?://(?:www\.)?tunein\.com/(?:radio/.*?-s|station/.*?StationId=|embed/player/s)(?P<id>\d+)'
-    _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?://)?tunein\.com/embed/player/[pst]\d+)']
-    _API_URL_QUERY = '?tuneType=Station&stationId=%s'
-
-    @classmethod
-    def suitable(cls, url):
-        return False if TuneInClipIE.suitable(url) else super(TuneInStationIE, cls).suitable(url)
+    _VALID_URL = TuneInBaseIE._VALID_URL_BASE + r'(?:/radio/[^?#]+-|/embed/player/)(?P<id>s\d+)'
+    _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?://)?tunein\.com/embed/player/s\d+)']
 
     _TESTS = [{
-        'url': 'http://tunein.com/radio/Jazz24-885-s34682/',
+        'url': 'https://tunein.com/radio/Jazz24-885-s34682/',
         'info_dict': {
-            'id': '34682',
-            'title': 'Jazz 24 on 88.5 Jazz24 - KPLU-HD2',
+            'id': 's34682',
+            'title': 're:^Jazz24',
+            'description': 'md5:d6d0b89063fd68d529fa7058ee98619b',
+            'thumbnail': 're:^https?://[^?&]+/s34682',
+            'location': 'Seattle-Tacoma, US',
             'ext': 'mp3',
-            'location': 'Tacoma, WA',
+            'live_status': 'is_live',
         },
         'params': {
-            'skip_download': True,  # live stream
+            'skip_download': True,
         },
     }, {
-        'url': 'http://tunein.com/embed/player/s6404/',
+        'url': 'https://tunein.com/embed/player/s6404/',
         'only_matching': True,
-    }]
-
-
-class TuneInProgramIE(TuneInBaseIE):
-    IE_NAME = 'tunein:program'
-    _VALID_URL = r'https?://(?:www\.)?tunein\.com/(?:radio/.*?-p|program/.*?ProgramId=|embed/player/p)(?P<id>\d+)'
-    _API_URL_QUERY = '?tuneType=Program&programId=%s'
-
-    _TESTS = [{
-        'url': 'http://tunein.com/radio/Jazz-24-p2506/',
+    }, {
+        'url': 'https://tunein.com/radio/BBC-Radio-1-988-s24939/',
         'info_dict': {
-            'id': '2506',
-            'title': 'Jazz 24 on 91.3 WUKY-HD3',
+            'id': 's24939',
+            'title': 're:^BBC Radio 1',
+            'description': 'md5:f3f75f7423398d87119043c26e7bfb84',
+            'thumbnail': 're:^https?://[^?&]+/s24939',
+            'location': 'London, UK',
             'ext': 'mp3',
-            'location': 'Lexington, KY',
+            'live_status': 'is_live',
         },
         'params': {
-            'skip_download': True,  # live stream
+            'skip_download': True,
         },
-    }, {
-        'url': 'http://tunein.com/embed/player/p191660/',
-        'only_matching': True,
     }]
 
+    def _real_extract(self, url):
+        station_id = self._match_id(url)
 
-class TuneInTopicIE(TuneInBaseIE):
-    IE_NAME = 'tunein:topic'
-    _VALID_URL = r'https?://(?:www\.)?tunein\.com/(?:topic/.*?TopicId=|embed/player/t)(?P<id>\d+)'
-    _API_URL_QUERY = '?tuneType=Topic&topicId=%s'
+        webpage = self._download_webpage(url, station_id)
+        metadata = self._extract_metadata(webpage, station_id)
+
+        formats, subtitles = self._extract_formats_and_subtitles(station_id)
+        return {
+            'id': station_id,
+            'title': traverse_obj(metadata, ('profiles', station_id, 'title')),
+            'description': traverse_obj(metadata, ('profiles', station_id, 'description')),
+            'thumbnail': traverse_obj(metadata, ('profiles', station_id, 'image')),
+            'timestamp': parse_iso8601(
+                traverse_obj(metadata, ('profiles', station_id, 'actions', 'play', 'publishTime'))),
+            'location': traverse_obj(
+                metadata, ('profiles', station_id, 'metadata', 'properties', 'location', 'displayName'),
+                ('profiles', station_id, 'properties', 'location', 'displayName')),
+            'formats': formats,
+            'subtitles': subtitles,
+            'is_live': traverse_obj(metadata, ('profiles', station_id, 'actions', 'play', 'isLive')),
+        }
+
+
+class TuneInPodcastIE(TuneInBaseIE):
+    _VALID_URL = TuneInBaseIE._VALID_URL_BASE + r'/(?:podcasts/[^?#]+-|embed/player/)(?P<id>p\d+)/?(?:#|$)'
+    _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?://)?tunein\.com/embed/player/p\d+)']
 
     _TESTS = [{
-        'url': 'http://tunein.com/topic/?TopicId=101830576',
-        'md5': 'c31a39e6f988d188252eae7af0ef09c9',
+        'url': 'https://tunein.com/podcasts/Technology-Podcasts/Artificial-Intelligence-p1153019',
         'info_dict': {
-            'id': '101830576',
-            'title': 'Votez pour moi du 29 octobre 2015 (29/10/15)',
-            'ext': 'mp3',
-            'location': 'Belgium',
+            'id': 'p1153019',
+            'title': 'Lex Fridman Podcast',
+            'description': 'md5:bedc4e5f1c94f7dec6e4317b5654b00d',
         },
+        'playlist_mincount': 200,
     }, {
-        'url': 'http://tunein.com/embed/player/t101830576/',
-        'only_matching': True,
+        'url': 'https://tunein.com/embed/player/p191660/',
+        'only_matching': True
+    }, {
+        'url': 'https://tunein.com/podcasts/World-News/BBC-News-p14/',
+        'info_dict': {
+            'id': 'p14',
+            'title': 'BBC News',
+            'description': 'md5:1218e575eeaff75f48ed978261fa2068',
+        },
+        'playlist_mincount': 200,
     }]
+
+    _PAGE_SIZE = 30
+
+    def _real_extract(self, url):
+        podcast_id = self._match_id(url)
+        webpage = self._download_webpage(url, podcast_id, fatal=False)
+        metadata = self._extract_metadata(webpage, podcast_id)
+
+        def page_func(page_num):
+            api_response = self._download_json(
+                f'https://api.tunein.com/profiles/{podcast_id}/contents', podcast_id,
+                note=f'Downloading page {page_num + 1}', query={
+                    'filter': 't:free',
+                    'offset': page_num * self._PAGE_SIZE,
+                    'limit': self._PAGE_SIZE,
+                })
+
+            return [
+                self.url_result(
+                    f'https://tunein.com/podcasts/{podcast_id}?topicId={episode["GuideId"][1:]}',
+                    TuneInPodcastEpisodeIE, title=episode.get('Title'))
+                for episode in api_response['Items']]
+
+        entries = OnDemandPagedList(page_func, self._PAGE_SIZE)
+        return self.playlist_result(
+            entries, playlist_id=podcast_id, title=traverse_obj(metadata, ('profiles', podcast_id, 'title')),
+            description=traverse_obj(metadata, ('profiles', podcast_id, 'description')))
+
+
+class TuneInPodcastEpisodeIE(TuneInBaseIE):
+    _VALID_URL = TuneInBaseIE._VALID_URL_BASE + r'/podcasts/(?:[^?&]+-)?(?P<podcast_id>p\d+)/?\?topicId=(?P<id>\w\d+)'
+
+    _TESTS = [{
+        'url': 'https://tunein.com/podcasts/Technology-Podcasts/Artificial-Intelligence-p1153019/?topicId=236404354',
+        'info_dict': {
+            'id': 't236404354',
+            'title': '#351 \u2013 MrBeast: Future of YouTube, Twitter, TikTok, and Instagram',
+            'description': 'md5:e1734db6f525e472c0c290d124a2ad77',
+            'thumbnail': 're:^https?://[^?&]+/p1153019',
+            'timestamp': 1673458571,
+            'upload_date': '20230111',
+            'series_id': 'p1153019',
+            'series': 'Lex Fridman Podcast',
+            'ext': 'mp3',
+        },
+    }]
+
+    def _real_extract(self, url):
+        podcast_id, episode_id = self._match_valid_url(url).group('podcast_id', 'id')
+        episode_id = f't{episode_id}'
+
+        webpage = self._download_webpage(url, episode_id)
+        metadata = self._extract_metadata(webpage, episode_id)
+
+        formats, subtitles = self._extract_formats_and_subtitles(episode_id)
+        return {
+            'id': episode_id,
+            'title': traverse_obj(metadata, ('profiles', episode_id, 'title')),
+            'description': traverse_obj(metadata, ('profiles', episode_id, 'description')),
+            'thumbnail': traverse_obj(metadata, ('profiles', episode_id, 'image')),
+            'timestamp': parse_iso8601(
+                traverse_obj(metadata, ('profiles', episode_id, 'actions', 'play', 'publishTime'))),
+            'series_id': podcast_id,
+            'series': traverse_obj(metadata, ('profiles', podcast_id, 'title')),
+            'formats': formats,
+            'subtitles': subtitles,
+        }
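`OnDemandPagedList` lets the podcast playlist fetch pages lazily, so only the entries that are actually iterated trigger API requests. A simplified sketch of that pattern (not yt-dlp's class; the fake `page_func` mirrors the `_PAGE_SIZE = 30` paging used above):

```python
# Minimal sketch of on-demand paging: pages are fetched only while iterating,
# and a short page signals the end of the listing.
class LazyPages:
    def __init__(self, page_func, page_size):
        self._page_func = page_func
        self._page_size = page_size

    def __iter__(self):
        page_num = 0
        while True:
            page = self._page_func(page_num)
            yield from page
            if len(page) < self._page_size:  # short page => last page
                return
            page_num += 1


# Fake API: 70 episodes served 30 at a time
EPISODES = [f'episode-{i}' for i in range(70)]


def page_func(page_num):
    start = page_num * 30
    return EPISODES[start:start + 30]


entries = list(LazyPages(page_func, 30))
```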
 class TuneInShortenerIE(InfoExtractor):
     IE_NAME = 'tunein:shortener'
@@ -154,10 +206,13 @@ class TuneInShortenerIE(InfoExtractor):
         # test redirection
         'url': 'http://tun.in/ser7s',
         'info_dict': {
-            'id': '34682',
-            'title': 'Jazz 24 on 88.5 Jazz24 - KPLU-HD2',
+            'id': 's34682',
+            'title': 're:^Jazz24',
+            'description': 'md5:d6d0b89063fd68d529fa7058ee98619b',
+            'thumbnail': 're:^https?://[^?&]+/s34682',
+            'location': 'Seattle-Tacoma, US',
             'ext': 'mp3',
-            'location': 'Tacoma, WA',
+            'live_status': 'is_live',
         },
         'params': {
             'skip_download': True,  # live stream
@@ -169,6 +224,11 @@ class TuneInShortenerIE(InfoExtractor):
         # The server doesn't support HEAD requests
         urlh = self._request_webpage(
             url, redirect_id, note='Downloading redirect page')
+
         url = urlh.geturl()
+        url_parsed = urllib.parse.urlparse(url)
+        if url_parsed.port == 443:
+            url = url_parsed._replace(netloc=url_parsed.hostname).geturl()
+
         self.to_screen('Following redirect: %s' % url)
         return self.url_result(url)
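The added redirect handling strips an explicit `:443` port via `urlparse`/`_replace`, presumably so the resolved URL matches the other extractors' `_VALID_URL` patterns. The same trick as a standalone sketch:

```python
import urllib.parse


def strip_default_https_port(url):
    # Drop an explicit :443 from an https URL, leaving other URLs untouched
    parsed = urllib.parse.urlparse(url)
    if parsed.port == 443:
        parsed = parsed._replace(netloc=parsed.hostname)
    return parsed.geturl()


cleaned = strip_default_https_port('https://tunein.com:443/radio/Jazz24-885-s34682/')
```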

View File

@@ -48,12 +48,12 @@ class TwitchBaseIE(InfoExtractor):
         'CollectionSideBar': '27111f1b382effad0b6def325caef1909c733fe6a4fbabf54f8d491ef2cf2f14',
         'FilterableVideoTower_Videos': 'a937f1d22e269e39a03b509f65a7490f9fc247d7f83d6ac1421523e3b68042cb',
         'ClipsCards__User': 'b73ad2bfaecfd30a9e6c28fada15bd97032c83ec77a0440766a56fe0bd632777',
-        'ChannelCollectionsContent': '07e3691a1bad77a36aba590c351180439a40baefc1c275356f40fc7082419a84',
-        'StreamMetadata': '1c719a40e481453e5c48d9bb585d971b8b372f8ebb105b17076722264dfa5b3e',
+        'ChannelCollectionsContent': '447aec6a0cc1e8d0a8d7732d47eb0762c336a2294fdb009e9c9d854e49d484b9',
+        'StreamMetadata': 'a647c2a13599e5991e175155f798ca7f1ecddde73f7f341f39009c14dbf59962',
         'ComscoreStreamingQuery': 'e1edae8122517d013405f237ffcc124515dc6ded82480a88daef69c83b53ac01',
         'VideoAccessToken_Clip': '36b89d2507fce29e5ca551df756d27c1cfe079e2609642b4390aa4c35796eb11',
         'VideoPreviewOverlay': '3006e77e51b128d838fa4e835723ca4dc9a05c5efd4466c1085215c6e437e65c',
-        'VideoMetadata': '226edb3e692509f727fd56821f5653c05740242c82b0388883e0c0e75dcbf687',
+        'VideoMetadata': '49b5b8f268cdeb259d75b58dcb0c1a748e3b575003448a2333dc5cdafd49adad',
         'VideoPlayer_ChapterSelectButtonVideo': '8d2793384aac3773beab5e59bd5d6f585aedb923d292800119e03d40cd0f9b41',
         'VideoPlayer_VODSeekbarPreviewVideo': '07e99e4d56c5a7c67117a154777b0baf85a5ffefa393b213f4bc712ccaf85dd6',
     }
@@ -380,13 +380,14 @@
             }],
             'Downloading stream metadata GraphQL')
 
-        video = traverse_obj(data, (0, 'data', 'video'))
-        video['moments'] = traverse_obj(data, (1, 'data', 'video', 'moments', 'edges', ..., 'node'))
-        video['storyboard'] = traverse_obj(data, (2, 'data', 'video', 'seekPreviewsURL'), expected_type=url_or_none)
+        video = traverse_obj(data, (..., 'data', 'video'), get_all=False)
         if video is None:
-            raise ExtractorError(
-                'Video %s does not exist' % item_id, expected=True)
+            raise ExtractorError(f'Video {item_id} does not exist', expected=True)
+
+        video['moments'] = traverse_obj(data, (..., 'data', 'video', 'moments', 'edges', ..., 'node'))
+        video['storyboard'] = traverse_obj(
+            data, (..., 'data', 'video', 'seekPreviewsURL', {url_or_none}), get_all=False)
+
         return video
 
     def _extract_info(self, info):
@@ -854,6 +855,13 @@ class TwitchVideosCollectionsIE(TwitchPlaylistBaseIE):
             'title': 'spamfish - Collections',
         },
         'playlist_mincount': 3,
+    }, {
+        'url': 'https://www.twitch.tv/monstercat/videos?filter=collections',
+        'info_dict': {
+            'id': 'monstercat',
+            'title': 'monstercat - Collections',
+        },
+        'playlist_mincount': 13,
     }]
 
     _OPERATION_NAME = 'ChannelCollectionsContent'
@@ -922,6 +930,7 @@ class TwitchStreamIE(TwitchBaseIE):
             # m3u8 download
             'skip_download': True,
         },
+        'skip': 'User does not exist',
     }, {
         'url': 'http://www.twitch.tv/miracle_doto#profile-0',
         'only_matching': True,
@@ -934,6 +943,25 @@
     }, {
         'url': 'https://m.twitch.tv/food',
         'only_matching': True,
+    }, {
+        'url': 'https://www.twitch.tv/monstercat',
+        'info_dict': {
+            'id': '40500071752',
+            'display_id': 'monstercat',
+            'title': 're:Monstercat',
+            'description': 'md5:0945ad625e615bc8f0469396537d87d9',
+            'is_live': True,
+            'timestamp': 1677107190,
+            'upload_date': '20230222',
+            'uploader': 'Monstercat',
+            'uploader_id': 'monstercat',
+            'live_status': 'is_live',
+            'thumbnail': 're:https://.*.jpg',
+            'ext': 'mp4',
+        },
+        'params': {
+            'skip_download': 'Livestream',
+        },
     }]
 
     @classmethod
View File

@@ -838,6 +838,28 @@ class TwitterIE(TwitterBaseIE):
             'timestamp': 1670306984.0,
         },
         'params': {'extractor_args': {'twitter': {'force_graphql': ['']}}},
+    }, {
+        # url to retweet id
+        'url': 'https://twitter.com/liberdalau/status/1623739803874349067',
+        'info_dict': {
+            'id': '1623274794488659969',
+            'display_id': '1623739803874349067',
+            'ext': 'mp4',
+            'title': 'Johnny Bullets - Me after going viral to over 30million people: Whoopsie-daisy',
+            'description': 'md5:e873616a4a8fe0f93e71872678a672f3',
+            'uploader': 'Johnny Bullets',
+            'uploader_id': 'Johnnybull3ts',
+            'uploader_url': 'https://twitter.com/Johnnybull3ts',
+            'age_limit': 0,
+            'tags': [],
+            'duration': 8.033,
+            'timestamp': 1675853859.0,
+            'upload_date': '20230208',
+            'thumbnail': r're:https://pbs\.twimg\.com/ext_tw_video_thumb/.+',
+            'like_count': int,
+            'repost_count': int,
+            'comment_count': int,
+        },
     }, {
         # onion route
         'url': 'https://twitter3e4tixl4xyajtrzo62zg5vztmjuricljdp2c5kshju4avyoid.onion/TwitterBlue/status/1484226494708662273',
@@ -949,13 +971,13 @@
             status = self._graphql_to_legacy(result, twid)
         else:
-            status = self._call_api(f'statuses/show/{twid}.json', twid, {
+            status = traverse_obj(self._call_api(f'statuses/show/{twid}.json', twid, {
                 'cards_platform': 'Web-12',
                 'include_cards': 1,
                 'include_reply_count': 1,
                 'include_user_entities': 0,
                 'tweet_mode': 'extended',
-            })
+            }), 'retweeted_status', None)
 
         title = description = status['full_text'].replace('\n', ' ')
         # strip 'https -_t.co_BJYgOjSeGA' junk from filenames
View File

@@ -157,3 +157,24 @@ class XVideosIE(InfoExtractor):
         'thumbnails': thumbnails,
         'age_limit': 18,
     }
+
+
+class XVideosQuickiesIE(InfoExtractor):
+    IE_NAME = 'xvideos:quickies'
+    _VALID_URL = r'https?://(?P<domain>(?:[^/]+\.)?xvideos2?\.com)/amateur-channels/[^#]+#quickies/a/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://www.xvideos.com/amateur-channels/wifeluna#quickies/a/47258683',
+        'md5': '16e322a93282667f1963915568f782c1',
+        'info_dict': {
+            'id': '47258683',
+            'ext': 'mp4',
+            'title': 'Verification video',
+            'age_limit': 18,
+            'duration': 16,
+            'thumbnail': r're:^https://cdn.*-pic.xvideos-cdn.com/.+\.jpg',
+        }
+    }]
+
+    def _real_extract(self, url):
+        domain, id_ = self._match_valid_url(url).group('domain', 'id')
+        return self.url_result(f'https://{domain}/video{id_}/_', XVideosIE, id_)
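The new extractor never parses the quickies page itself: it lifts the video id out of the URL fragment and re-routes to the regular `/video<id>/` page handled by `XVideosIE`. The same URL rewrite as a standalone sketch:

```python
import re

# The video id lives in the URL fragment, matched by the _VALID_URL pattern
# above and rewritten to the plain video-page URL.
VALID_URL = r'https?://(?P<domain>(?:[^/]+\.)?xvideos2?\.com)/amateur-channels/[^#]+#quickies/a/(?P<id>\d+)'


def quickies_to_video_url(url):
    m = re.match(VALID_URL, url)
    domain, video_id = m.group('domain', 'id')
    return f'https://{domain}/video{video_id}/_'


result = quickies_to_video_url(
    'https://www.xvideos.com/amateur-channels/wifeluna#quickies/a/47258683')
```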

View File

@@ -3205,11 +3205,11 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
             'decoratedPlayerBarRenderer', 'playerBar', 'chapteredPlayerBarRenderer', 'chapters'
         ), expected_type=list)
 
-        return self._extract_chapters(
+        return self._extract_chapters_helper(
             chapter_list,
-            chapter_time=lambda chapter: float_or_none(
+            start_function=lambda chapter: float_or_none(
                 traverse_obj(chapter, ('chapterRenderer', 'timeRangeStartMillis')), scale=1000),
-            chapter_title=lambda chapter: traverse_obj(
+            title_function=lambda chapter: traverse_obj(
                 chapter, ('chapterRenderer', 'title', 'simpleText'), expected_type=str),
             duration=duration)
@@ -3222,42 +3222,10 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
         chapter_title = lambda chapter: self._get_text(chapter, 'title')
 
         return next(filter(None, (
-            self._extract_chapters(traverse_obj(contents, (..., 'macroMarkersListItemRenderer')),
-                                   chapter_time, chapter_title, duration)
+            self._extract_chapters_helper(traverse_obj(contents, (..., 'macroMarkersListItemRenderer')),
+                                          chapter_time, chapter_title, duration)
             for contents in content_list)), [])
 
-    def _extract_chapters_from_description(self, description, duration):
-        duration_re = r'(?:\d+:)?\d{1,2}:\d{2}'
-        sep_re = r'(?m)^\s*(%s)\b\W*\s(%s)\s*$'
-        return self._extract_chapters(
-            re.findall(sep_re % (duration_re, r'.+?'), description or ''),
-            chapter_time=lambda x: parse_duration(x[0]), chapter_title=lambda x: x[1],
-            duration=duration, strict=False) or self._extract_chapters(
-            re.findall(sep_re % (r'.+?', duration_re), description or ''),
-            chapter_time=lambda x: parse_duration(x[1]), chapter_title=lambda x: x[0],
-            duration=duration, strict=False)
-
-    def _extract_chapters(self, chapter_list, chapter_time, chapter_title, duration, strict=True):
-        if not duration:
-            return
-        chapter_list = [{
-            'start_time': chapter_time(chapter),
-            'title': chapter_title(chapter),
-        } for chapter in chapter_list or []]
-        if not strict:
-            chapter_list.sort(key=lambda c: c['start_time'] or 0)
-
-        chapters = [{'start_time': 0}]
-        for idx, chapter in enumerate(chapter_list):
-            if chapter['start_time'] is None:
-                self.report_warning(f'Incomplete chapter {idx}')
-            elif chapters[-1]['start_time'] <= chapter['start_time'] <= duration:
-                chapters.append(chapter)
-            elif chapter not in chapters:
-                self.report_warning(
-                    f'Invalid start time ({chapter["start_time"]} < {chapters[-1]["start_time"]}) for chapter "{chapter["title"]}"')
-        return chapters[1:]
-
     def _extract_comment(self, comment_renderer, parent=None):
         comment_id = comment_renderer.get('commentId')
         if not comment_id:
@@ -3749,10 +3717,10 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
                 'filesize': int_or_none(fmt.get('contentLength')),
                 'format_id': f'{itag}{"-drc" if fmt.get("isDrc") else ""}',
                 'format_note': join_nonempty(
-                    '%s%s' % (audio_track.get('displayName') or '',
-                              ' (default)' if language_preference > 0 else ''),
+                    join_nonempty(audio_track.get('displayName'),
+                                  language_preference > 0 and ' (default)', delim=''),
                     fmt.get('qualityLabel') or quality.replace('audio_quality_', ''),
-                    'DRC' if fmt.get('isDrc') else None,
+                    fmt.get('isDrc') and 'DRC',
                     try_get(fmt, lambda x: x['projectionType'].replace('RECTANGULAR', '').lower()),
                     try_get(fmt, lambda x: x['spatialAudioType'].replace('SPATIAL_AUDIO_TYPE_', '').lower()),
                     throttled and 'THROTTLED', is_damaged and 'DAMAGED', delim=', '),
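The removed `_extract_chapters_from_description` above (its logic now lives in the shared `_extract_chapters_helper`) pairs `H:MM:SS title` lines from a video description into chapters. A simplified standalone sketch of that parsing:

```python
import re

# "(?:\d+:)?\d{1,2}:\d{2}" matches MM:SS or H:MM:SS timestamps, as in the
# removed duration_re; chapters past the video duration are discarded.
TIMESTAMP = r'(?:\d+:)?\d{1,2}:\d{2}'


def parse_duration(ts):
    seconds = 0
    for part in ts.split(':'):
        seconds = seconds * 60 + int(part)
    return seconds


def chapters_from_description(description, duration):
    chapters = []
    for ts, title in re.findall(rf'(?m)^\s*({TIMESTAMP})\b\W*\s(.+?)\s*$', description):
        start = parse_duration(ts)
        if start <= duration:
            chapters.append({'start_time': start, 'title': title})
    return chapters


chapters = chapters_from_description('0:00 Intro\n1:30 Main topic\n12:45 Outro', 900)
```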

View File

@@ -29,13 +29,13 @@ UPDATE_SOURCES = {
     'stable': 'yt-dlp/yt-dlp',
     'nightly': 'yt-dlp/yt-dlp-nightly-builds',
 }
-REPOSITORY = UPDATE_SOURCES['stable']
 
 _VERSION_RE = re.compile(r'(\d+\.)*\d+')
 
 API_BASE_URL = 'https://api.github.com/repos'
+
+# Backwards compatibility variables for the current channel
+REPOSITORY = UPDATE_SOURCES[CHANNEL]
 API_URL = f'{API_BASE_URL}/{REPOSITORY}/releases'

View File

@@ -1,8 +1,8 @@
 # Autogenerated by devscripts/update-version.py
 
-__version__ = '2023.03.03'
+__version__ = '2023.03.04'
 
-RELEASE_GIT_HEAD = '93449642815a6973a4b09b289982ca7e1f961b5f'
+RELEASE_GIT_HEAD = '392389b7df7b818f794b231f14dc396d4875fbad'
 
 VARIANT = None