mirror of https://github.com/yt-dlp/yt-dlp.git synced 2026-01-18 12:51:27 +00:00

Compare commits


54 Commits

Author SHA1 Message Date
github-actions[bot]
487a90c8ef Release 2025.08.27
Created by: bashonly

:ci skip all
2025-08-27 23:56:39 +00:00
bashonly
8cd37b85d4 [ie/youtube] Use alternative tv user-agent when authenticated (#14169)
Authored by: bashonly
2025-08-27 23:00:03 +00:00
bashonly
5c7ad68ff1 [ie/youtube] Deprioritize web_safari m3u8 formats (#14168)
Authored by: bashonly
2025-08-27 22:31:51 +00:00
sepro
1ddbd033f0 [ie/generic] Simplify invalid URL error message (#14167)
Authored by: seproDev
2025-08-27 23:27:57 +02:00
sepro
fec30c56f0 [ie/generic] Use https as fallback protocol (#14160)
Authored by: seproDev
2025-08-27 22:25:35 +02:00
sepro
d6950c27af [ie/skeb] Support wav files (#14147)
Closes #14146
Authored by: seproDev
2025-08-27 15:34:44 +02:00
bashonly
3bd9154412 [ie/youtube] Player client maintenance (#14135)
- Prioritize `tv_simply` over `tv` in default logged-out clients
- Revert `tv` client user-agent to work around 403 errors

Authored by: bashonly
2025-08-23 23:45:29 +00:00
bashonly
8f4a908300 [ie/youtube] Add tcc player JS variant (#14134)
Authored by: bashonly
2025-08-23 23:43:50 +00:00
github-actions[bot]
f1ba9f4ddb Release 2025.08.22
Created by: bashonly

:ci skip all
2025-08-22 23:59:02 +00:00
bashonly
5c8bcfdbc6 [ie/youtube] Optimize playback wait times (#14124)
Authored by: bashonly
2025-08-22 23:53:28 +00:00
bashonly
895e762a83 [ie/youtube] Replace ios with tv_simply in default clients (#14123)
Also:
- Add `web_safari` to default logged-in clients
- Add `web_creator` to default premium clients
- Flag `ios` HLS formats as requiring PO token

Closes #13702
Authored by: bashonly, coletdjnz

Co-authored-by: coletdjnz <coletdjnz@protonmail.com>
2025-08-22 23:49:54 +00:00
bashonly
39b7b8ddc7 [ie/youtube] Improve tv client context (#14122)
Closes #12563
Authored by: bashonly
2025-08-22 23:44:32 +00:00
bashonly
526410b4af [cookies] Fix f29acc4a6e
Authored by: bashonly
2025-08-21 22:13:45 -05:00
bashonly
f29acc4a6e [cookies] Fix --cookies-from-browser with Firefox 142+ (#14114)
Ref: 5869af852c
Related: 28b68f6875

Closes #13559, Closes #14113
Authored by: bashonly, Grub4K

Co-authored-by: Simon Sawicki <contact@grub4k.dev>
2025-08-21 23:47:18 +00:00
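For reference, cookie extraction from a local browser profile is driven by the `cookiesfrombrowser` / `--cookies-from-browser` option that this fix targets. A minimal embedding sketch, assuming the documented 4-tuple form (browser, profile, keyring, container):

```python
# Hedged sketch: read cookies from a local Firefox profile, the API equivalent
# of the CLI flag `--cookies-from-browser firefox`
from yt_dlp import YoutubeDL

ydl_opts = {
    # (browser name, profile, keyring, container); only the browser name is set here
    'cookiesfrombrowser': ('firefox', None, None, None),
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```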
zhallgato
4dbe96459d [ie/mediaklikk] Fix extractor (#13975)
Closes #14091
Authored by: zhallgato
2025-08-21 21:28:12 +00:00
bashonly
a03c37b44e [ie/youtube] Update tv client config (#14101)
Authored by: seproDev
    
Co-authored-by: sepro <sepro@sepr0.com>
2025-08-20 19:46:10 +00:00
doe1080
fcea3edb5c [ie/steam] Fix extractors (#14093)
Authored by: doe1080
2025-08-20 15:44:45 +00:00
bashonly
415b6d9ca8 [build] Post-release workflow cleanup (#14090)
Authored by: bashonly
2025-08-20 06:17:45 +00:00
github-actions[bot]
575753b9f3 Release 2025.08.20
Created by: bashonly

:ci skip all
2025-08-20 02:51:33 +00:00
bashonly
c2fc4f3e7f [cleanup] Misc (#13991)
Authored by: bashonly
2025-08-20 02:42:34 +00:00
bashonly
07247d6c20 [build] Add Windows ARM64 builds (#14003)
Adds yt-dlp_arm64.exe and yt-dlp_win_arm64.zip to release assets

Closes #13849
Authored by: bashonly
2025-08-20 02:28:00 +00:00
bashonly
f63a7e41d1 [ie/youtube] Add playback_wait extractor-arg
Authored by: bashonly
2025-08-19 21:22:00 -05:00
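The new extractor-arg is documented later in this changeset (default `6` seconds). A hedged embedding sketch; the value `10` is arbitrary:

```python
# Sketch: wait 10 seconds between extraction and download,
# mirroring the CLI form --extractor-args "youtube:playback_wait=10"
from yt_dlp import YoutubeDL

ydl_opts = {
    'extractor_args': {'youtube': {'playback_wait': ['10']}},
}

with YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)
    print(info.get('title'))
```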
bashonly
7b8a8abb98 [ie/francetv:site] Fix extractor (#14082)
Closes #14072
Authored by: bashonly
2025-08-20 01:35:32 +00:00
bashonly
a97f4cb57e [ie/youtube] Handle required preroll waiting period (#14081)
Authored by: bashonly
2025-08-19 20:06:53 -05:00
bashonly
d154dc3dcf [ie/youtube] Remove default player params (#14081)
Closes #13930
Authored by: bashonly
2025-08-19 20:06:53 -05:00
bashonly
438d3f06b3 [fd] Support available_at format field (#13980)
Authored by: bashonly
2025-08-20 00:38:48 +00:00
CasperMcFadden95
74b4b3b005 [ie/faulio] Add extractor (#13907)
Authored by: CasperMcFadden95
2025-08-19 23:27:31 +00:00
e2dk4r
36e873822b [ie/puhutv] Fix playlists extraction (#11955)
Closes #5691
Authored by: e2dk4r
2025-08-19 23:20:07 +00:00
Arseniy D.
d3d1ac8eb2 [ie/steam] Fix extractor (#14008)
Closes #14000
Authored by: AzartX47
2025-08-19 23:14:20 +00:00
doe1080
86d74e5cf0 [ie/medialaan] Rework extractors (#14015)
Fixes MedialaanIE and VTMIE

Authored by: doe1080
2025-08-19 23:10:17 +00:00
Junyi Lou
6ca9165648 [ie/bilibili] Handle Bangumi redirection (#14038)
Closes #13924
Authored by: junyilou, grqz

Co-authored-by: _Grqz <173015200+grqz@users.noreply.github.com>
2025-08-19 23:06:49 +00:00
Pierre
82a1390204 [ie/svt] Extract forced subs under separate lang code (#14062)
Closes #14020
Authored by: PierreMesure
2025-08-19 22:57:46 +00:00
Runar Saur Modahl
7540aa1da1 [ie/NRKTVEpisode] Fix extractor (#14065)
Closes #14054
Authored by: runarmod
2025-08-19 22:45:55 +00:00
bashonly
35da8df4f8 [utils] Add improved jwt_encode function (#14071)
Also deprecates `jwt_encode_hs256`

Authored by: bashonly
2025-08-19 22:36:00 +00:00
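The new helper itself is not shown in this view. As a generic point of reference, HS256 JWT signing can be sketched with the standard library alone; this illustrates the token format only and is not the `yt_dlp.utils.jwt_encode` implementation or signature:

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWT segments are unpadded base64url
    return base64.urlsafe_b64encode(data).decode().rstrip('=')


def jwt_encode_hs256_demo(payload: dict, key: str, headers: dict | None = None) -> str:
    # Illustrative only; not the yt-dlp helper
    header = {'alg': 'HS256', 'typ': 'JWT', **(headers or {})}
    signing_input = '.'.join(
        _b64url(json.dumps(part, separators=(',', ':')).encode())
        for part in (header, payload))
    signature = hmac.new(key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f'{signing_input}.{_b64url(signature)}'


print(jwt_encode_hs256_demo({'sub': 'example'}, 'secret'))
```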
bashonly
8df121ba59 [ie/mtv] Overhaul extractors (#14052)
Adds SouthParkComBrIE and SouthParkCoUkIE

Removes these extractors:
 - CMTIE: migrated to Paramount+
 - ComedyCentralTVIE: migrated to Paramount+
 - MTVDEIE: migrated to Paramount+
 - MTVItaliaIE: migrated to Paramount+
 - MTVItaliaProgrammaIE: migrated to Paramount+
 - MTVJapanIE: migrated to JP Services
 - MTVServicesEmbeddedIE: dead domain
 - MTVVideoIE: migrated to Paramount+
 - NickBrIE: redirects to landing page w/o any videos
 - NickDeIE: redirects to landing page w/o any videos
 - NickRuIE: redirects to landing page w/o any videos
 - BellatorIE: migrated to PFL
 - ParamountNetworkIE: migrated to Paramount+
 - SouthParkNlIE: site no longer exists
 - TVLandIE: migrated to Paramount+

Closes #169, Closes #1711, Closes #1712, Closes #2621, Closes #3167, Closes #3893, Closes #4552, Closes #4702, Closes #4928, Closes #5249, Closes #6156, Closes #8722, Closes #9896, Closes #10168, Closes #12765, Closes #13446, Closes #14009

Authored by: bashonly, doe1080, Randalix, seproDev

Co-authored-by: doe1080 <98906116+doe1080@users.noreply.github.com>
Co-authored-by: Randalix <23729538+Randalix@users.noreply.github.com>
Co-authored-by: sepro <sepro@sepr0.com>
2025-08-19 20:46:11 +00:00
bashonly
471a2b60e0 [ie/tiktok:user] Improve infinite loop prevention (#14077)
Fix edf55e8184

Closes #14076
Authored by: bashonly
2025-08-19 20:39:33 +00:00
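The extractor code is not reproduced here, but the general guard implied by these two commits can be sketched as follows: stop paginating once the API hands back a cursor that has already been seen. Illustrative only, with a hypothetical `fetch_page` callable:

```python
def paginate_safely(fetch_page):
    """Yield items from a cursor-paginated API, bailing out if the cursor repeats."""
    seen_cursors = set()
    cursor = None
    while True:
        # fetch_page is assumed to return {'items': [...], 'cursor': ..., 'has_more': bool}
        page = fetch_page(cursor)
        yield from page.get('items', [])
        if not page.get('has_more'):
            break
        cursor = page.get('cursor')
        if cursor is None or cursor in seen_cursors:
            # The server keeps returning the same page; stop instead of looping forever
            break
        seen_cursors.add(cursor)
```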
bashonly
df0553153e [ie/youtube] Default to main player JS variant (#14079)
Authored by: bashonly
2025-08-19 19:28:15 +00:00
bashonly
7bc53ae799 [ie/youtube] Extract title and description from initial data (#14078)
Closes #13604
Authored by: bashonly
2025-08-19 19:27:17 +00:00
bashonly
d8200ff0a4 [ie/vimeo:album] Support embed-only and non-numeric albums (#14021)
Authored by: bashonly
2025-08-18 22:09:35 +00:00
bashonly
0f6b915822 [ie/vimeo:event] Fix extractor (#14064)
Closes #14059
Authored by: bashonly
2025-08-18 21:09:25 +00:00
doe1080
374ea049f5 [ie/niconico:live] Support age-restricted streams (#13549)
Authored by: doe1080
2025-08-18 17:43:40 +00:00
doe1080
6f4c1bb593 [cleanup] Remove dead extractors (#13996)
Removes ArkenaIE, PladformIE, VevoIE, VevoPlaylistIE

Authored by: doe1080
2025-08-18 16:34:32 +00:00
doe1080
c22660aed5 [ie/adobetv] Fix extractor (#13917)
Removes AdobeTVChannelIE, AdobeTVEmbedIE, AdobeTVIE, AdobeTVShowIE

Authored by: doe1080
2025-08-18 16:32:26 +00:00
bashonly
404bd889d0 [ie/weibo] Support more URLs and --no-playlist (#14035)
Authored by: bashonly
2025-08-16 23:02:04 +00:00
bashonly
edf55e8184 [ie/tiktok:user] Avoid infinite loop during extraction (#14032)
Closes #14031
Authored by: bashonly
2025-08-16 22:57:14 +00:00
bashonly
8a8861d538 [ie/youtube:tab] Fix playlists tab extraction (#14030)
Closes #14028
Authored by: bashonly
2025-08-16 22:55:21 +00:00
sepro
70f5669951 Warn against using -f mp4 (#13915)
Authored by: seproDev
2025-08-17 00:35:46 +02:00
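A commonly suggested alternative to `-f mp4` is an explicit format expression that prefers MP4/M4A streams but still falls back to the best available, remuxing into MP4 on merge. A hedged embedding sketch following the README's format-selection examples:

```python
# Sketch: prefer mp4 video + m4a audio, fall back to the best available,
# and merge into an mp4 container when separate streams are downloaded
from yt_dlp import YoutubeDL

ydl_opts = {
    'format': 'bv*[ext=mp4]+ba[ext=m4a]/b[ext=mp4]/bv*+ba/b',
    'merge_output_format': 'mp4',
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```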
doe1080
6ae3543d5a [ie] _rta_search: Do not assume age_limit is 0 (#13985)
Authored by: doe1080
2025-08-16 04:28:58 +00:00
doe1080
770119bdd1 [ie] Extract avif storyboard formats from MPD manifests (#14016)
Authored by: doe1080
2025-08-16 03:32:21 +00:00
Arseniy D.
8e3f8065af [ie/weibo] Fix extractors (#14012)
Closes #14012
Authored by: AzartX47, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2025-08-16 03:07:35 +00:00
bashonly
aea85d525e [build] Discontinue darwin_legacy_exe support (#13860)
* Removes "yt-dlp_macos_legacy" from release assets
* Discontinues executable support for macOS < 10.15

Closes #13856
Authored by: bashonly
2025-08-13 22:02:58 +00:00
bashonly
f2919bd28e [ie/youtube] Add es5 and es6 player JS variants (#14005)
Authored by: bashonly
2025-08-12 23:24:31 +00:00
bashonly
681ed2153d [build] Bump PyInstaller version to 6.15.0 for Windows (#14002)
Authored by: bashonly
2025-08-12 23:17:13 +00:00
bashonly
bdeb3eb3f2 [pp/XAttrMetadata] Only set "Where From" attribute on macOS (#13999)
Fix 3e918d825d

Closes #14004
Authored by: bashonly
2025-08-12 07:58:22 +00:00
62 changed files with 2194 additions and 3042 deletions

View File

@@ -21,15 +21,9 @@ on:
       macos:
         default: true
         type: boolean
-      macos_legacy:
-        default: true
-        type: boolean
       windows:
         default: true
         type: boolean
-      windows32:
-        default: true
-        type: boolean
       origin:
         required: false
         default: ''
@@ -67,16 +61,8 @@ on:
         description: yt-dlp_macos, yt-dlp_macos.zip
         default: true
         type: boolean
-      macos_legacy:
-        description: yt-dlp_macos_legacy
-        default: true
-        type: boolean
       windows:
-        description: yt-dlp.exe, yt-dlp_win.zip
+        description: yt-dlp.exe, yt-dlp_win.zip, yt-dlp_x86.exe, yt-dlp_win_x86.zip, yt-dlp_arm64.exe, yt-dlp_win_arm64.zip
         default: true
         type: boolean
-      windows32:
-        description: yt-dlp_x86.exe
-        default: true
-        type: boolean
       origin:
@@ -344,91 +330,76 @@ jobs:
             ~/yt-dlp-build-venv
           key: cache-reqs-${{ github.job }}-${{ github.ref }}
 
-  macos_legacy:
-    needs: process
-    if: inputs.macos_legacy
-    runs-on: macos-13
-    steps:
-      - uses: actions/checkout@v4
-      - name: Install Python
-        # We need the official Python, because the GA ones only support newer macOS versions
-        env:
-          PYTHON_VERSION: 3.10.5
-          MACOSX_DEPLOYMENT_TARGET: 10.9  # Used up by the Python build tools
-        run: |
-          # Hack to get the latest patch version. Uncomment if needed
-          #brew install python@3.10
-          #export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 )
-          curl "https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg" -o "python.pkg"
-          sudo installer -pkg python.pkg -target /
-          python3 --version
-      - name: Install Requirements
-        run: |
-          brew install coreutils
-          python3 devscripts/install_deps.py --user -o --include build
-          python3 devscripts/install_deps.py --user --include pyinstaller
-      - name: Prepare
-        run: |
-          python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
-          python3 devscripts/make_lazy_extractors.py
-      - name: Build
-        run: |
-          python3 -m bundle.pyinstaller
-          mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy
-      - name: Verify --update-to
-        if: vars.UPDATE_TO_VERIFICATION
-        run: |
-          chmod +x ./dist/yt-dlp_macos_legacy
-          cp ./dist/yt-dlp_macos_legacy ./dist/yt-dlp_macos_legacy_downgraded
-          version="$(./dist/yt-dlp_macos_legacy --version)"
-          ./dist/yt-dlp_macos_legacy_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
-          downgraded_version="$(./dist/yt-dlp_macos_legacy_downgraded --version)"
-          [[ "$version" != "$downgraded_version" ]]
-      - name: Upload artifacts
-        uses: actions/upload-artifact@v4
-        with:
-          name: build-bin-${{ github.job }}
-          path: |
-            dist/yt-dlp_macos_legacy
-          compression-level: 0
-
   windows:
     needs: process
     if: inputs.windows
-    runs-on: windows-latest
+    permissions:
+      contents: read
+      actions: write  # For cleaning up cache
+    runs-on: ${{ matrix.runner }}
+    strategy:
+      fail-fast: false
+      matrix:
+        include:
+          - arch: 'x64'
+            runner: windows-2025
+            python_version: '3.10'
+            suffix: ''
+          - arch: 'x86'
+            runner: windows-2025
+            python_version: '3.10'
+            suffix: '_x86'
+          - arch: 'arm64'
+            runner: windows-11-arm
+            python_version: '3.13'  # arm64 only has Python >= 3.11 available
+            suffix: '_arm64'
     steps:
       - uses: actions/checkout@v4
       - uses: actions/setup-python@v5
         with:
-          python-version: "3.10"
+          python-version: ${{ matrix.python_version }}
+          architecture: ${{ matrix.arch }}
+      - name: Restore cached requirements
+        id: restore-cache
+        if: matrix.arch == 'arm64'
+        uses: actions/cache/restore@v4
+        env:
+          SEGMENT_DOWNLOAD_TIMEOUT_MINS: 1
+        with:
+          path: |
+            /yt-dlp-build-venv
+          key: cache-reqs-${{ github.job }}_${{ matrix.arch }}-${{ matrix.python_version }}-${{ github.ref }}
       - name: Install Requirements
-        run: |  # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
+        run: |
+          python -m venv /yt-dlp-build-venv
+          /yt-dlp-build-venv/Scripts/Activate.ps1
           python devscripts/install_deps.py -o --include build
-          python devscripts/install_deps.py --include curl-cffi
-          python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.13.0-py3-none-any.whl"
+          python devscripts/install_deps.py ${{ (matrix.arch != 'x86' && '--include curl-cffi') || '' }}
+          # Use custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
+          python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/${{ matrix.arch }}/pyinstaller-6.15.0-py3-none-any.whl"
       - name: Prepare
         run: |
           python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
           python devscripts/make_lazy_extractors.py
       - name: Build
         run: |
+          /yt-dlp-build-venv/Scripts/Activate.ps1
           python -m bundle.pyinstaller
           python -m bundle.pyinstaller --onedir
-          Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip
+          Compress-Archive -Path ./dist/yt-dlp${{ matrix.suffix }}/* -DestinationPath ./dist/yt-dlp_win${{ matrix.suffix }}.zip
       - name: Verify --update-to
         if: vars.UPDATE_TO_VERIFICATION
         run: |
-          foreach ($name in @("yt-dlp")) {
+          foreach ($name in @("yt-dlp${{ matrix.suffix }}")) {
             Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
             $version = & "./dist/${name}.exe" --version
-            & "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
+            & "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2025.08.20
             $downgraded_version = & "./dist/${name}_downgraded.exe" --version
             if ($version -eq $downgraded_version) {
               exit 1
@@ -438,57 +409,28 @@ jobs:
       - name: Upload artifacts
         uses: actions/upload-artifact@v4
         with:
-          name: build-bin-${{ github.job }}
+          name: build-bin-${{ github.job }}-${{ matrix.arch }}
           path: |
-            dist/yt-dlp.exe
-            dist/yt-dlp_win.zip
+            dist/yt-dlp${{ matrix.suffix }}.exe
+            dist/yt-dlp_win${{ matrix.suffix }}.zip
           compression-level: 0
-
-  windows32:
-    needs: process
-    if: inputs.windows32
-    runs-on: windows-latest
-    steps:
-      - uses: actions/checkout@v4
-      - uses: actions/setup-python@v5
-        with:
-          python-version: "3.10"
-          architecture: "x86"
-      - name: Install Requirements
-        run: |
-          python devscripts/install_deps.py -o --include build
-          python devscripts/install_deps.py
-          python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.13.0-py3-none-any.whl"
-      - name: Prepare
-        run: |
-          python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
-          python devscripts/make_lazy_extractors.py
-      - name: Build
-        run: |
-          python -m bundle.pyinstaller
-      - name: Verify --update-to
-        if: vars.UPDATE_TO_VERIFICATION
-        run: |
-          foreach ($name in @("yt-dlp_x86")) {
-            Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
-            $version = & "./dist/${name}.exe" --version
-            & "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
-            $downgraded_version = & "./dist/${name}_downgraded.exe" --version
-            if ($version -eq $downgraded_version) {
-              exit 1
-            }
-          }
-      - name: Upload artifacts
-        uses: actions/upload-artifact@v4
-        with:
-          name: build-bin-${{ github.job }}
-          path: |
-            dist/yt-dlp_x86.exe
-          compression-level: 0
+      - name: Cleanup cache
+        if: |
+          matrix.arch == 'arm64' && steps.restore-cache.outputs.cache-hit == 'true'
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          cache_key: cache-reqs-${{ github.job }}_${{ matrix.arch }}-${{ matrix.python_version }}-${{ github.ref }}
+        run: |
+          gh cache delete "${cache_key}"
+      - name: Cache requirements
+        if: matrix.arch == 'arm64'
+        uses: actions/cache/save@v4
+        with:
+          path: |
+            /yt-dlp-build-venv
+          key: cache-reqs-${{ github.job }}_${{ matrix.arch }}-${{ matrix.python_version }}-${{ github.ref }}
 
   meta_files:
     if: always() && !cancelled()
@@ -498,9 +440,7 @@ jobs:
       - linux_static
       - linux_arm
       - macos
-      - macos_legacy
       - windows
-      - windows32
     runs-on: ubuntu-latest
     steps:
       - name: Download artifacts
@@ -530,27 +470,31 @@ jobs:
           lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
           lock 2024.10.22 py2exe .+
           lock 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
-          lock 2024.10.22 (?!\w+_exe).+ Python 3\.8
+          lock 2024.10.22 zip Python 3\.8
           lock 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
+          lock 2025.08.11 darwin_legacy_exe .+
           lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3\.6
           lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
           lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
           lockV2 yt-dlp/yt-dlp 2024.10.22 py2exe .+
           lockV2 yt-dlp/yt-dlp 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
-          lockV2 yt-dlp/yt-dlp 2024.10.22 (?!\w+_exe).+ Python 3\.8
+          lockV2 yt-dlp/yt-dlp 2024.10.22 zip Python 3\.8
           lockV2 yt-dlp/yt-dlp 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
+          lockV2 yt-dlp/yt-dlp 2025.08.11 darwin_legacy_exe .+
           lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
           lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
           lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 py2exe .+
           lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
-          lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 (?!\w+_exe).+ Python 3\.8
+          lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 zip Python 3\.8
           lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
+          lockV2 yt-dlp/yt-dlp-nightly-builds 2025.08.12.233030 darwin_legacy_exe .+
           lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
           lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
           lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.045052 py2exe .+
           lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
-          lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 (?!\w+_exe).+ Python 3\.8
+          lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 zip Python 3\.8
           lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
+          lockV2 yt-dlp/yt-dlp-master-builds 2025.08.12.232447 darwin_legacy_exe .+
           EOF
       - name: Sign checksum files

View File

@@ -800,3 +800,9 @@ iribeirocampos
 rolandcrosby
 Sojiroh
 tchebb
+AzartX47
+e2dk4r
+junyilou
+PierreMesure
+Randalix
+runarmod

View File

@@ -4,13 +4,94 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->
### 2025.08.27
#### Extractor changes
- **generic**
- [Simplify invalid URL error message](https://github.com/yt-dlp/yt-dlp/commit/1ddbd033f0fd65917526b1271cea66913ac8647f) ([#14167](https://github.com/yt-dlp/yt-dlp/issues/14167)) by [seproDev](https://github.com/seproDev)
- [Use https as fallback protocol](https://github.com/yt-dlp/yt-dlp/commit/fec30c56f0e97e573ace659104ff0d72c4cc9809) ([#14160](https://github.com/yt-dlp/yt-dlp/issues/14160)) by [seproDev](https://github.com/seproDev)
- **skeb**: [Support wav files](https://github.com/yt-dlp/yt-dlp/commit/d6950c27af31908363c5c815e3b7eb4f9ff41643) ([#14147](https://github.com/yt-dlp/yt-dlp/issues/14147)) by [seproDev](https://github.com/seproDev)
- **youtube**
- [Add `tcc` player JS variant](https://github.com/yt-dlp/yt-dlp/commit/8f4a908300f55054bc96814bceeaa1034fdf4110) ([#14134](https://github.com/yt-dlp/yt-dlp/issues/14134)) by [bashonly](https://github.com/bashonly)
- [Deprioritize `web_safari` m3u8 formats](https://github.com/yt-dlp/yt-dlp/commit/5c7ad68ff1643ad80d18cef8be9db8fcab05ee6c) ([#14168](https://github.com/yt-dlp/yt-dlp/issues/14168)) by [bashonly](https://github.com/bashonly)
- [Player client maintenance](https://github.com/yt-dlp/yt-dlp/commit/3bd91544122142a87863d79e54e995c26cfd7f92) ([#14135](https://github.com/yt-dlp/yt-dlp/issues/14135)) by [bashonly](https://github.com/bashonly)
- [Use alternative `tv` user-agent when authenticated](https://github.com/yt-dlp/yt-dlp/commit/8cd37b85d492edb56a4f7506ea05527b85a6b02b) ([#14169](https://github.com/yt-dlp/yt-dlp/issues/14169)) by [bashonly](https://github.com/bashonly)
### 2025.08.22
#### Core changes
- **cookies**: [Fix `--cookies-from-browser` with Firefox 142+](https://github.com/yt-dlp/yt-dlp/commit/f29acc4a6e73a9dc091686d40951288acae5a46d) ([#14114](https://github.com/yt-dlp/yt-dlp/issues/14114)) by [bashonly](https://github.com/bashonly), [Grub4K](https://github.com/Grub4K) (With fixes in [526410b](https://github.com/yt-dlp/yt-dlp/commit/526410b4af9c1ca73aa3503cdaf4d32e42308fd6) by [bashonly](https://github.com/bashonly))
#### Extractor changes
- **mediaklikk**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/4dbe96459d7e632d397826d0bb323f3f0ac8b057) ([#13975](https://github.com/yt-dlp/yt-dlp/issues/13975)) by [zhallgato](https://github.com/zhallgato)
- **steam**: [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/fcea3edb5c5648638357f27431500c0aaf08b147) ([#14093](https://github.com/yt-dlp/yt-dlp/issues/14093)) by [doe1080](https://github.com/doe1080)
- **youtube**
- [Improve `tv` client context](https://github.com/yt-dlp/yt-dlp/commit/39b7b8ddc7a4d0669e0cf39105c3bb84cb2736cc) ([#14122](https://github.com/yt-dlp/yt-dlp/issues/14122)) by [bashonly](https://github.com/bashonly)
- [Optimize playback wait times](https://github.com/yt-dlp/yt-dlp/commit/5c8bcfdbc638dfde13e93157637d8521413ed774) ([#14124](https://github.com/yt-dlp/yt-dlp/issues/14124)) by [bashonly](https://github.com/bashonly)
- [Replace `ios` with `tv_simply` in default clients](https://github.com/yt-dlp/yt-dlp/commit/895e762a834bbd729ab822c7d17329fdf815aaf2) ([#14123](https://github.com/yt-dlp/yt-dlp/issues/14123)) by [bashonly](https://github.com/bashonly), [coletdjnz](https://github.com/coletdjnz)
- [Update `tv` client config](https://github.com/yt-dlp/yt-dlp/commit/a03c37b44ec8f50fd472c409115096f92410346d) ([#14101](https://github.com/yt-dlp/yt-dlp/issues/14101)) by [seproDev](https://github.com/seproDev)
#### Misc. changes
- **build**: [Post-release workflow cleanup](https://github.com/yt-dlp/yt-dlp/commit/415b6d9ca868032a45b30b9139a50c5c06be2feb) ([#14090](https://github.com/yt-dlp/yt-dlp/issues/14090)) by [bashonly](https://github.com/bashonly)
### 2025.08.20
#### Core changes
- [Warn against using `-f mp4`](https://github.com/yt-dlp/yt-dlp/commit/70f56699515e0854a4853d214dce11b61d432387) ([#13915](https://github.com/yt-dlp/yt-dlp/issues/13915)) by [seproDev](https://github.com/seproDev)
- **utils**: [Add improved `jwt_encode` function](https://github.com/yt-dlp/yt-dlp/commit/35da8df4f843cb8f0656a301e5bebbf47d64d69a) ([#14071](https://github.com/yt-dlp/yt-dlp/issues/14071)) by [bashonly](https://github.com/bashonly)
#### Extractor changes
- [Extract avif storyboard formats from MPD manifests](https://github.com/yt-dlp/yt-dlp/commit/770119bdd15c525ba4338503f0eb68ea4baedf10) ([#14016](https://github.com/yt-dlp/yt-dlp/issues/14016)) by [doe1080](https://github.com/doe1080)
- `_rta_search`: [Do not assume `age_limit` is `0`](https://github.com/yt-dlp/yt-dlp/commit/6ae3543d5a1feea0c546571fd2782b024c108eac) ([#13985](https://github.com/yt-dlp/yt-dlp/issues/13985)) by [doe1080](https://github.com/doe1080)
- **adobetv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/c22660aed5fadb4ac29bdf25db4e8016414153cc) ([#13917](https://github.com/yt-dlp/yt-dlp/issues/13917)) by [doe1080](https://github.com/doe1080)
- **bilibili**: [Handle Bangumi redirection](https://github.com/yt-dlp/yt-dlp/commit/6ca9165648ac9a07c012de639faf50a97cbe0991) ([#14038](https://github.com/yt-dlp/yt-dlp/issues/14038)) by [grqz](https://github.com/grqz), [junyilou](https://github.com/junyilou)
- **faulio**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/74b4b3b00516e92a60250e0626272a6826459057) ([#13907](https://github.com/yt-dlp/yt-dlp/issues/13907)) by [CasperMcFadden95](https://github.com/CasperMcFadden95)
- **francetv**: site: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/7b8a8abb98165a53c026e2a3f52faee608df1f20) ([#14082](https://github.com/yt-dlp/yt-dlp/issues/14082)) by [bashonly](https://github.com/bashonly)
- **medialaan**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/86d74e5cf0e06c53c931ccdbdd497e3f2c4d2fe2) ([#14015](https://github.com/yt-dlp/yt-dlp/issues/14015)) by [doe1080](https://github.com/doe1080)
- **mtv**: [Overhaul extractors](https://github.com/yt-dlp/yt-dlp/commit/8df121ba59208979aa713822781891347abd03d1) ([#14052](https://github.com/yt-dlp/yt-dlp/issues/14052)) by [bashonly](https://github.com/bashonly), [doe1080](https://github.com/doe1080), [Randalix](https://github.com/Randalix), [seproDev](https://github.com/seproDev)
- **niconico**: live: [Support age-restricted streams](https://github.com/yt-dlp/yt-dlp/commit/374ea049f531959bcccf8a1e6bc5659d228a780e) ([#13549](https://github.com/yt-dlp/yt-dlp/issues/13549)) by [doe1080](https://github.com/doe1080)
- **nrktvepisode**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/7540aa1da1800769af40381f423825a1a8826377) ([#14065](https://github.com/yt-dlp/yt-dlp/issues/14065)) by [runarmod](https://github.com/runarmod)
- **puhutv**: [Fix playlists extraction](https://github.com/yt-dlp/yt-dlp/commit/36e873822bdb2c5aba3780dd3ae32cbae564c6cd) ([#11955](https://github.com/yt-dlp/yt-dlp/issues/11955)) by [e2dk4r](https://github.com/e2dk4r)
- **steam**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/d3d1ac8eb2f9e96f3d75292e0effe2b1bccece3b) ([#14008](https://github.com/yt-dlp/yt-dlp/issues/14008)) by [AzartX47](https://github.com/AzartX47)
- **svt**: [Extract forced subs under separate lang code](https://github.com/yt-dlp/yt-dlp/commit/82a139020417a501f261d9fe02cefca01b1e12e4) ([#14062](https://github.com/yt-dlp/yt-dlp/issues/14062)) by [PierreMesure](https://github.com/PierreMesure)
- **tiktok**: user: [Avoid infinite loop during extraction](https://github.com/yt-dlp/yt-dlp/commit/edf55e81842fcfa6c302528d7f33ccd5081b37ef) ([#14032](https://github.com/yt-dlp/yt-dlp/issues/14032)) by [bashonly](https://github.com/bashonly) (With fixes in [471a2b6](https://github.com/yt-dlp/yt-dlp/commit/471a2b60e0a3e056960d9ceb1ebf57908428f752))
- **vimeo**
- album: [Support embed-only and non-numeric albums](https://github.com/yt-dlp/yt-dlp/commit/d8200ff0a4699e06c9f7daca8f8531f8b98e68f2) ([#14021](https://github.com/yt-dlp/yt-dlp/issues/14021)) by [bashonly](https://github.com/bashonly)
- event: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/0f6b915822fb64bd944126fdacd401975c9f06ed) ([#14064](https://github.com/yt-dlp/yt-dlp/issues/14064)) by [bashonly](https://github.com/bashonly)
- **weibo**
- [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/8e3f8065af1415caeff788c5c430703dd0d8f576) ([#14012](https://github.com/yt-dlp/yt-dlp/issues/14012)) by [AzartX47](https://github.com/AzartX47), [bashonly](https://github.com/bashonly)
- [Support more URLs and --no-playlist](https://github.com/yt-dlp/yt-dlp/commit/404bd889d0e0b62ad72b7281e3fefdc0497080b3) ([#14035](https://github.com/yt-dlp/yt-dlp/issues/14035)) by [bashonly](https://github.com/bashonly)
- **youtube**
- [Add `es5` and `es6` player JS variants](https://github.com/yt-dlp/yt-dlp/commit/f2919bd28eac905f1267c62b83738a02bb5b4e04) ([#14005](https://github.com/yt-dlp/yt-dlp/issues/14005)) by [bashonly](https://github.com/bashonly)
- [Add `playback_wait` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/f63a7e41d120ef84f0f2274b0962438e3272d2fa) by [bashonly](https://github.com/bashonly)
- [Default to `main` player JS variant](https://github.com/yt-dlp/yt-dlp/commit/df0553153e41f81e3b30aa5bb1d119c61bd449ac) ([#14079](https://github.com/yt-dlp/yt-dlp/issues/14079)) by [bashonly](https://github.com/bashonly)
- [Extract title and description from initial data](https://github.com/yt-dlp/yt-dlp/commit/7bc53ae79930b36f4f947679545c75f36e9f0ddd) ([#14078](https://github.com/yt-dlp/yt-dlp/issues/14078)) by [bashonly](https://github.com/bashonly)
- [Handle required preroll waiting period](https://github.com/yt-dlp/yt-dlp/commit/a97f4cb57e61e19be61a7d5ac19665d4b567c960) ([#14081](https://github.com/yt-dlp/yt-dlp/issues/14081)) by [bashonly](https://github.com/bashonly)
- [Remove default player params](https://github.com/yt-dlp/yt-dlp/commit/d154dc3dcf0c7c75dbabb6cd1aca66fdd806f858) ([#14081](https://github.com/yt-dlp/yt-dlp/issues/14081)) by [bashonly](https://github.com/bashonly)
- tab: [Fix playlists tab extraction](https://github.com/yt-dlp/yt-dlp/commit/8a8861d53864c8a38e924bc0657ead5180f17268) ([#14030](https://github.com/yt-dlp/yt-dlp/issues/14030)) by [bashonly](https://github.com/bashonly)
#### Downloader changes
- [Support `available_at` format field](https://github.com/yt-dlp/yt-dlp/commit/438d3f06b3c41bdef8112d40b75d342186e91a16) ([#13980](https://github.com/yt-dlp/yt-dlp/issues/13980)) by [bashonly](https://github.com/bashonly)
#### Postprocessor changes
- **xattrmetadata**: [Only set "Where From" attribute on macOS](https://github.com/yt-dlp/yt-dlp/commit/bdeb3eb3f29eebbe8237fbc5186e51e7293eea4a) ([#13999](https://github.com/yt-dlp/yt-dlp/issues/13999)) by [bashonly](https://github.com/bashonly)
#### Misc. changes
- **build**
- [Add Windows ARM64 builds](https://github.com/yt-dlp/yt-dlp/commit/07247d6c20fef1ad13b6f71f6355a44d308cf010) ([#14003](https://github.com/yt-dlp/yt-dlp/issues/14003)) by [bashonly](https://github.com/bashonly)
- [Bump PyInstaller version to 6.15.0 for Windows](https://github.com/yt-dlp/yt-dlp/commit/681ed2153de754c2c885fdad09ab71fffa8114f9) ([#14002](https://github.com/yt-dlp/yt-dlp/issues/14002)) by [bashonly](https://github.com/bashonly)
- [Discontinue `darwin_legacy_exe` support](https://github.com/yt-dlp/yt-dlp/commit/aea85d525e1007bb64baec0e170c054292d0858a) ([#13860](https://github.com/yt-dlp/yt-dlp/issues/13860)) by [bashonly](https://github.com/bashonly)
- **cleanup**
- [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/6f4c1bb593da92f0ce68229d0c813cdbaf1314da) ([#13996](https://github.com/yt-dlp/yt-dlp/issues/13996)) by [doe1080](https://github.com/doe1080)
- Miscellaneous: [c2fc4f3](https://github.com/yt-dlp/yt-dlp/commit/c2fc4f3e7f6d757250183b177130c64beee50520) by [bashonly](https://github.com/bashonly)
### 2025.08.11
#### Important changes
- **The minimum *recommended* Python version has been raised to 3.10**
Since Python 3.9 will reach end-of-life in October 2025, support for it will be dropped soon. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13858)
- **darwin_legacy_exe builds are being discontinued**
-This release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13857)
+This release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13856)
- **linux_armv7l_exe builds are being discontinued**
This release's `yt-dlp_linux_armv7l` binary could be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13976)

View File

@@ -106,12 +106,14 @@ File|Description
 File|Description
 :---|:---
 [yt-dlp_x86.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_x86.exe)|Windows (Win8+) standalone x86 (32-bit) binary
+[yt-dlp_arm64.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_arm64.exe)|Windows (Win10+) standalone arm64 (64-bit) binary
 [yt-dlp_linux](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux)|Linux standalone x64 binary
 [yt-dlp_linux_armv7l](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_armv7l)|Linux standalone armv7l (32-bit) binary
 [yt-dlp_linux_aarch64](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_aarch64)|Linux standalone aarch64 (64-bit) binary
-[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows executable (no auto-update)
+[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows (Win8+) x64 executable (no auto-update)
+[yt-dlp_win_x86.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win_x86.zip)|Unpackaged Windows (Win8+) x86 executable (no auto-update)
+[yt-dlp_win_arm64.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win_arm64.zip)|Unpackaged Windows (Win10+) arm64 executable (no auto-update)
 [yt-dlp_macos.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos.zip)|Unpackaged MacOS (10.15+) executable (no auto-update)
-[yt-dlp_macos_legacy](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos_legacy)|MacOS (10.9+) standalone x64 executable
 #### Misc
@@ -1800,11 +1802,11 @@ The following extractors use this feature:
 #### youtube
 * `lang`: Prefer translated metadata (`title`, `description` etc) of this language code (case-sensitive). By default, the video primary language metadata is preferred, with a fallback to `en` translated. See [youtube/_base.py](https://github.com/yt-dlp/yt-dlp/blob/415b4c9f955b1a0391204bd24a7132590e7b3bdb/yt_dlp/extractor/youtube/_base.py#L402-L409) for the list of supported content language codes
 * `skip`: One or more of `hls`, `dash` or `translated_subs` to skip extraction of the m3u8 manifests, dash manifests and [auto-translated subtitles](https://github.com/yt-dlp/yt-dlp/issues/4090#issuecomment-1158102032) respectively
-* `player_client`: Clients to extract video data from. The currently available clients are `web`, `web_safari`, `web_embedded`, `web_music`, `web_creator`, `mweb`, `ios`, `android`, `android_vr`, `tv`, `tv_simply` and `tv_embedded`. By default, `tv,ios,web` is used, or `tv,web` is used when authenticating with cookies. The `web_music` client is added for `music.youtube.com` URLs when logged-in cookies are used. The `web_embedded` client is added for age-restricted videos but only works if the video is embeddable. The `tv_embedded` and `web_creator` clients are added for age-restricted videos if account age-verification is required. Some clients, such as `web` and `web_music`, require a `po_token` for their formats to be downloadable. Some clients, such as `web_creator`, will only work with authentication. Not all clients support authentication via cookies. You can use `default` for the default clients, or you can use `all` for all clients (not recommended). You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=default,-ios`
+* `player_client`: Clients to extract video data from. The currently available clients are `web`, `web_safari`, `web_embedded`, `web_music`, `web_creator`, `mweb`, `ios`, `android`, `android_vr`, `tv`, `tv_simply` and `tv_embedded`. By default, `tv_simply,tv,web` is used, but `tv,web_safari,web` is used when authenticating with cookies and `tv,web_creator,web` is used with premium accounts. The `web_music` client is added for `music.youtube.com` URLs when logged-in cookies are used. The `web_embedded` client is added for age-restricted videos but only works if the video is embeddable. The `tv_embedded` and `web_creator` clients are added for age-restricted videos if account age-verification is required. Some clients, such as `web` and `web_music`, require a `po_token` for their formats to be downloadable. Some clients, such as `web_creator`, will only work with authentication. Not all clients support authentication via cookies. You can use `default` for the default clients, or you can use `all` for all clients (not recommended). You can prefix a client with `-` to exclude it, e.g. `youtube:player_client=default,-ios`
 * `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player), `initial_data` (skip initial data/next ep request). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause issues such as missing formats or metadata. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) and [#12826](https://github.com/yt-dlp/yt-dlp/issues/12826) for more details
 * `webpage_skip`: Skip extraction of embedded webpage data. One or both of `player_response`, `initial_data`. These options are for testing purposes and don't skip any network requests
 * `player_params`: YouTube player parameters to use for player requests. Will overwrite any default ones set by yt-dlp.
-* `player_js_variant`: The player javascript variant to use for signature and nsig deciphering. The known variants are: `main`, `tce`, `tv`, `tv_es6`, `phone`, `tablet`. Only `main` is recommended as a possible workaround; the others are for debugging purposes. The default is to use what is prescribed by the site, and can be selected with `actual`
+* `player_js_variant`: The player javascript variant to use for signature and nsig deciphering. The known variants are: `main`, `tce`, `tv`, `tv_es6`, `phone`, `tablet`. The default is `main`, and the others are for debugging purposes. You can use `actual` to go with what is prescribed by the site
 * `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
 * `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
 * E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
@@ -1817,6 +1819,7 @@ The following extractors use this feature:
 * `po_token`: Proof of Origin (PO) Token(s) to use. Comma seperated list of PO Tokens in the format `CLIENT.CONTEXT+PO_TOKEN`, e.g. `youtube:po_token=web.gvs+XXX,web.player=XXX,web_safari.gvs+YYY`. Context can be any of `gvs` (Google Video Server URLs), `player` (Innertube player request) or `subs` (Subtitles)
 * `pot_trace`: Enable debug logging for PO Token fetching. Either `true` or `false` (default)
 * `fetch_pot`: Policy to use for fetching a PO Token from providers. One of `always` (always try fetch a PO Token regardless if the client requires one for the given context), `never` (never fetch a PO Token), or `auto` (default; only fetch a PO Token if the client requires one for the given context)
+* `playback_wait`: Duration (in seconds) to wait inbetween the extraction and download stages in order to ensure the formats are available. The default is `6` seconds
 #### youtubepot-webpo
 * `bind_to_visitor_id`: Whether to use the Visitor ID instead of Visitor Data for caching WebPO tokens. Either `true` (default) or `false`
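To make the new defaults above concrete, a hedged embedding sketch that overrides `player_client` (equivalent to `--extractor-args "youtube:player_client=default,-tv_simply"`); client names are taken from the list documented above:

```python
# Sketch: start from the default clients but exclude tv_simply, and skip HLS manifests
from yt_dlp import YoutubeDL

ydl_opts = {
    'extractor_args': {
        'youtube': {
            'player_client': ['default', '-tv_simply'],
            'skip': ['hls'],
        },
    },
}

with YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)
    print(len(info.get('formats') or []))
```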

View File

@@ -287,7 +287,7 @@
     {
         "action": "add",
         "when": "cc5a5caac5fbc0d605b52bde0778d6fd5f97b5ab",
-        "short": "[priority] **darwin_legacy_exe builds are being discontinued**\nThis release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13857)"
+        "short": "[priority] **darwin_legacy_exe builds are being discontinued**\nThis release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13856)"
     },
     {
         "action": "add",

View File

@@ -315,6 +315,7 @@ banned-from = [
"yt_dlp.utils.error_to_compat_str".msg = "Use `str` instead." "yt_dlp.utils.error_to_compat_str".msg = "Use `str` instead."
"yt_dlp.utils.bytes_to_intlist".msg = "Use `list` instead." "yt_dlp.utils.bytes_to_intlist".msg = "Use `list` instead."
"yt_dlp.utils.intlist_to_bytes".msg = "Use `bytes` instead." "yt_dlp.utils.intlist_to_bytes".msg = "Use `bytes` instead."
"yt_dlp.utils.jwt_encode_hs256".msg = "Use `yt_dlp.utils.jwt_encode` instead."
"yt_dlp.utils.decodeArgument".msg = "Do not use" "yt_dlp.utils.decodeArgument".msg = "Do not use"
"yt_dlp.utils.decodeFilename".msg = "Do not use" "yt_dlp.utils.decodeFilename".msg = "Do not use"
"yt_dlp.utils.encodeFilename".msg = "Do not use" "yt_dlp.utils.encodeFilename".msg = "Do not use"

View File

@@ -44,11 +44,7 @@ The only reliable way to check if a site is supported is to try it.
 - **ADN**: [*animationdigitalnetwork*](## "netrc machine") Animation Digital Network
 - **ADNSeason**: [*animationdigitalnetwork*](## "netrc machine") Animation Digital Network
 - **AdobeConnect**
-- **adobetv**: (**Currently broken**)
-- **adobetv:channel**: (**Currently broken**)
-- **adobetv:embed**: (**Currently broken**)
-- **adobetv:show**: (**Currently broken**)
-- **adobetv:video**
+- **adobetv**
 - **AdultSwim**
 - **aenetworks**: A+E Networks: A&E, Lifetime, History.com, FYI Network and History Vault
 - **aenetworks:collection**
@@ -100,7 +96,6 @@ The only reliable way to check if a site is supported is to try it.
 - **ARD**
 - **ARDMediathek**
 - **ARDMediathekCollection**
-- **Arkena**
 - **Art19**
 - **Art19Show**
 - **arte.sky.it**
@@ -155,9 +150,8 @@ The only reliable way to check if a site is supported is to try it.
 - **Beatport**
 - **Beeg**
 - **BehindKink**: (**Currently broken**)
-- **Bellator**
 - **BerufeTV**
-- **Bet**: (**Currently broken**)
+- **Bet**
 - **bfi:player**: (**Currently broken**)
 - **bfmtv**
 - **bfmtv:article**
@@ -290,12 +284,10 @@ The only reliable way to check if a site is supported is to try it.
 - **CloudyCDN**
 - **Clubic**: (**Currently broken**)
 - **Clyp**
-- **cmt.com**: (**Currently broken**)
 - **CNBCVideo**
 - **CNN**
 - **CNNIndonesia**
 - **ComedyCentral**
-- **ComedyCentralTV**
 - **ConanClassic**: (**Currently broken**)
 - **CondeNast**: Condé Nast media group: Allure, Architectural Digest, Ars Technica, Bon Appétit, Brides, Condé Nast, Condé Nast Traveler, Details, Epicurious, GQ, Glamour, Golf Digest, SELF, Teen Vogue, The New Yorker, Vanity Fair, Vogue, W Magazine, WIRED
 - **CONtv**
@@ -445,6 +437,7 @@ The only reliable way to check if a site is supported is to try it.
 - **fancode:live**: [*fancode*](## "netrc machine") (**Currently broken**)
 - **fancode:vod**: [*fancode*](## "netrc machine") (**Currently broken**)
 - **Fathom**
+- **Faulio**
 - **FaulioLive**
 - **faz.net**
 - **fc2**: [*fc2*](## "netrc machine")
@@ -700,8 +693,8 @@ The only reliable way to check if a site is supported is to try it.
 - **lbry:channel**: odysee.com channels
 - **lbry:playlist**: odysee.com playlists
 - **LCI**
-- **Lcp**
-- **LcpPlay**
+- **Lcp**: (**Currently broken**)
+- **LcpPlay**: (**Currently broken**)
 - **Le**: 乐视网
 - **LearningOnScreen**
 - **Lecture2Go**: (**Currently broken**)
@@ -840,12 +833,6 @@ The only reliable way to check if a site is supported is to try it.
 - **MSN**
 - **mtg**: MTG services
 - **mtv**
-- **mtv.de**: (**Currently broken**)
-- **mtv.it**
-- **mtv.it:programma**
-- **mtv:video**
-- **mtvjapan**
-- **mtvservices:embedded**
 - **MTVUutisetArticle**: (**Currently broken**)
 - **MuenchenTV**: münchen.tv (**Currently broken**)
 - **MujRozhlas**
@@ -945,9 +932,6 @@ The only reliable way to check if a site is supported is to try it.
 - **NhkVodProgram**
 - **nhl.com**
 - **nick.com**
-- **nick.de**
-- **nickelodeon:br**
-- **nickelodeonru**
 - **niconico**: [*niconico*](## "netrc machine") ニコニコ動画
 - **niconico:history**: NicoNico user history or likes. Requires cookies.
 - **niconico:live**: [*niconico*](## "netrc machine") ニコニコ生放送
@@ -1049,7 +1033,6 @@ The only reliable way to check if a site is supported is to try it.
 - **Panopto**
 - **PanoptoList**
 - **PanoptoPlaylist**
-- **ParamountNetwork**
 - **ParamountPlus**
 - **ParamountPlusSeries**
 - **ParamountPressExpress**
@@ -1088,7 +1071,6 @@ The only reliable way to check if a site is supported is to try it.
 - **PiramideTVChannel**
 - **pixiv:sketch**
 - **pixiv:sketch:user**
-- **Pladform**
 - **PlanetMarathi**
 - **Platzi**: [*platzi*](## "netrc machine")
 - **PlatziCourse**: [*platzi*](## "netrc machine")
@@ -1377,8 +1359,9 @@ The only reliable way to check if a site is supported is to try it.
 - **southpark.cc.com:español**
 - **southpark.de**
 - **southpark.lat**
-- **southpark.nl**
-- **southparkstudios.dk**
+- **southparkstudios.co.uk**
+- **southparkstudios.com.br**
+- **southparkstudios.nu**
 - **SovietsCloset**
 - **SovietsClosetPlaylist**
 - **SpankBang**
@@ -1403,6 +1386,7 @@ The only reliable way to check if a site is supported is to try it.
 - **startrek**: STAR TREK
 - **startv**
 - **Steam**
+- **SteamCommunity**
 - **SteamCommunityBroadcast**
 - **Stitcher**
 - **StitcherShow**
@@ -1557,7 +1541,6 @@ The only reliable way to check if a site is supported is to try it.
 - **TVer**
 - **tvigle**: Интернет-телевидение Tvigle.ru
 - **TVIPlayer**
-- **tvland.com**
 - **TVN24**: (**Currently broken**)
 - **TVNoe**: (**Currently broken**)
 - **tvopengr:embed**: tvopen.gr embedded videos
@@ -1618,8 +1601,6 @@ The only reliable way to check if a site is supported is to try it.
 - **Vbox7**
 - **Veo**
 - **Vesti**: Вести.Ru (**Currently broken**)
-- **Vevo**
-- **VevoPlaylist**
 - **VGTV**: VGTV, BTTV, FTV, Aftenposten and Aftonbladet
 - **vh1.com**
 - **vhx:embed**: [*vimeo*](## "netrc machine")
@@ -1698,7 +1679,7 @@ The only reliable way to check if a site is supported is to try it.
 - **vrsquare:section**
 - **VRT**: VRT NWS, Flanders News, Flandern Info and Sporza
 - **vrtmax**: [*vrtnu*](## "netrc machine") VRT MAX (formerly VRT NU)
-- **VTM**: (**Currently broken**)
+- **VTM**
 - **VTV**
 - **VTVGo**
 - **VTXTV**: [*vtxtv*](## "netrc machine")

View File

@@ -14,7 +14,6 @@ from yt_dlp.extractor import (
     NRKTVIE,
     PBSIE,
     CeskaTelevizeIE,
-    ComedyCentralIE,
     DailymotionIE,
     DemocracynowIE,
     LyndaIE,
@@ -279,23 +278,6 @@ class TestNPOSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['nl']), 'fc6435027572b63fb4ab143abd5ad3f4') self.assertEqual(md5(subtitles['nl']), 'fc6435027572b63fb4ab143abd5ad3f4')
@is_download_test
@unittest.skip('IE broken')
class TestMTVSubtitles(BaseTestSubtitles):
url = 'http://www.cc.com/video-clips/p63lk0/adam-devine-s-house-party-chasing-white-swans'
IE = ComedyCentralIE
def getInfoDict(self):
return super().getInfoDict()['entries'][0]
def test_allsubtitles(self):
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), {'en'})
self.assertEqual(md5(subtitles['en']), '78206b8d8a0cfa9da64dc026eea48961')
@is_download_test
class TestNRKSubtitles(BaseTestSubtitles):
url = 'http://tv.nrk.no/serie/ikke-gjoer-dette-hjemme/DMPV73000411/sesong-2/episode-1'


@@ -84,8 +84,9 @@ lock 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lock 2024.10.22 py2exe .+
lock 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lock 2024.10.22 (?!\w+_exe).+ Python 3\.8
lock 2024.10.22 zip Python 3\.8
lock 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lock 2025.08.11 darwin_legacy_exe .+
'''
TEST_LOCKFILE_V2_TMPL = r'''%s
@@ -94,20 +95,23 @@ lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp 2024.10.22 py2exe .+
lockV2 yt-dlp/yt-dlp 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp 2024.10.22 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp 2024.10.22 zip Python 3\.8
lockV2 yt-dlp/yt-dlp 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp 2025.08.11 darwin_legacy_exe .+
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 py2exe .+
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 zip Python 3\.8
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp-nightly-builds 2025.08.12.233030 darwin_legacy_exe .+
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.045052 py2exe .+
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 zip Python 3\.8
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp-master-builds 2025.08.12.232447 darwin_legacy_exe .+
'''
TEST_LOCKFILE_V2 = TEST_LOCKFILE_V2_TMPL % TEST_LOCKFILE_COMMENT
@@ -217,6 +221,10 @@ class TestUpdate(unittest.TestCase):
test( # linux_aarch64_exe w/glibc2.3 should only update to glibc<2.31 lock
lockfile, 'linux_aarch64_exe Python 3.8.0 (CPython aarch64 64bit) - Linux-6.5.0-1025-azure-aarch64-with-glibc2.3 (OpenSSL',
'2025.01.01', '2024.10.22')
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.11', '2025.08.11')
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.11', '2025.08.11', exact=True)
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.12', '2025.08.11')
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.12', None, exact=True)
# Forks can block updates to non-numeric tags rather than lock
test(TEST_LOCKFILE_FORK, 'zip Python 3.6.3', 'pr0000', None, repo='fork/yt-dlp')
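Note: the new darwin_legacy_exe entries pin legacy macOS builds to the 2025.08.11 release (and the matching nightly/master tags). A rough, self-contained sketch of what a lock line expresses; the matching code below is illustrative and not the real update.py implementation:

import re

# One of the lock lines added above: builds whose variant/system string
# matches the pattern cannot update past the locked version.
lock_version, pattern = '2025.08.11', r'darwin_legacy_exe .+'
variant = 'darwin_legacy_exe Python 3.10.5'  # same example string as in the tests above

if re.match(pattern, variant):
    print(f'updates for this build are capped at {lock_version}')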


@@ -71,6 +71,8 @@ from yt_dlp.utils import (
iri_to_uri,
is_html,
js_to_json,
jwt_decode_hs256,
jwt_encode,
limit_length,
locked_file,
lowercase_escape,
@@ -2180,6 +2182,41 @@ Line 1
assert int_or_none(v=10) == 10, 'keyword passed positional should call function'
assert int_or_none(scale=0.1)(10) == 100, 'call after partial application should call the function'
_JWT_KEY = '12345678'
_JWT_HEADERS_1 = {'a': 'b'}
_JWT_HEADERS_2 = {'typ': 'JWT', 'alg': 'HS256'}
_JWT_HEADERS_3 = {'typ': 'JWT', 'alg': 'RS256'}
_JWT_HEADERS_4 = {'c': 'd', 'alg': 'ES256'}
_JWT_DECODED = {
'foo': 'bar',
'qux': 'baz',
}
_JWT_SIMPLE = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.fKojvTWqnjNTbsdoDTmYNc4tgYAG3h_SWRzM77iLH0U'
_JWT_WITH_EXTRA_HEADERS = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImEiOiJiIn0.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.Ia91-B77yasfYM7jsB6iVKLew-3rO6ITjNmjWUVXCvQ'
_JWT_WITH_REORDERED_HEADERS = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.slg-7COta5VOfB36p3tqV4MGPV6TTA_ouGnD48UEVq4'
_JWT_WITH_REORDERED_HEADERS_AND_RS256_ALG = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.XWp496oVgQnoits0OOocutdjxoaQwn4GUWWxUsKENPM'
_JWT_WITH_EXTRA_HEADERS_AND_ES256_ALG = 'eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCIsImMiOiJkIn0.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.oM_tc7IkfrwkoRh43rFFE1wOi3J3mQGwx7_lMyKQqDg'
def test_jwt_encode(self):
def test(expected, headers={}):
self.assertEqual(jwt_encode(self._JWT_DECODED, self._JWT_KEY, headers=headers), expected)
test(self._JWT_SIMPLE)
test(self._JWT_WITH_EXTRA_HEADERS, headers=self._JWT_HEADERS_1)
test(self._JWT_WITH_REORDERED_HEADERS, headers=self._JWT_HEADERS_2)
test(self._JWT_WITH_REORDERED_HEADERS_AND_RS256_ALG, headers=self._JWT_HEADERS_3)
test(self._JWT_WITH_EXTRA_HEADERS_AND_ES256_ALG, headers=self._JWT_HEADERS_4)
def test_jwt_decode_hs256(self):
def test(inp):
self.assertEqual(jwt_decode_hs256(inp), self._JWT_DECODED)
test(self._JWT_SIMPLE)
test(self._JWT_WITH_EXTRA_HEADERS)
test(self._JWT_WITH_REORDERED_HEADERS)
test(self._JWT_WITH_REORDERED_HEADERS_AND_RS256_ALG)
test(self._JWT_WITH_EXTRA_HEADERS_AND_ES256_ALG)
if __name__ == '__main__':
unittest.main()
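For context, a minimal round-trip using the two helpers exercised above; the payload and key here are arbitrary example values, not the test fixtures:

from yt_dlp.utils import jwt_decode_hs256, jwt_encode

# jwt_encode returns the compact token as a string; extra headers are merged
# into the default {'typ': 'JWT', 'alg': 'HS256'} header, as the tests show.
token = jwt_encode({'foo': 'bar'}, 'example-key', headers={'kid': 'example'})
# jwt_decode_hs256 only decodes the payload segment and does not appear to
# verify the signature (the tests above decode RS256/ES256-labelled tokens too).
assert jwt_decode_hs256(token) == {'foo': 'bar'}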


@@ -138,6 +138,21 @@ _SIG_TESTS = [
'gN7a-hudCuAuPH6fByOk1_GNXN0yNMHShjZXS2VOgsEItAJz0tipeavEOmNdYN-wUtcEqD3bCXjc0iyKfAyZxCBGgIARwsSdQfJ2CJtt',
'JC2JfQdSswRAIgGBCxZyAfKyi0cjXCb3DqEctUw-NYdNmOEvaepit0zJAtIEsgOV2SXZjhSHMNy0NXNG_1kOyBf6HPuAuCduh-a',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es5.vflset/en_US/base.js',
'gN7a-hudCuAuPH6fByOk1_GNXN0yNMHShjZXS2VOgsEItAJz0tipeavEOmNdYN-wUtcEqD3bCXjc0iyKfAyZxCBGgIARwsSdQfJ2CJtt',
'ttJC2JfQdSswRAIgGBCxZyAfKyi0cjXCb3DqEctUw-NYdNmOEvaepit2zJAsIEggOVaSXZjhSHMNy0NXNG_1kOyBf6HPuAuCduh-',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es6.vflset/en_US/base.js',
'gN7a-hudCuAuPH6fByOk1_GNXN0yNMHShjZXS2VOgsEItAJz0tipeavEOmNdYN-wUtcEqD3bCXjc0iyKfAyZxCBGgIARwsSdQfJ2CJtt',
'ttJC2JfQdSswRAIgGBCxZyAfKyi0cjXCb3DqEctUw-NYdNmOEvaepit2zJAsIEggOVaSXZjhSHMNy0NXNG_1kOyBf6HPuAuCduh-',
),
(
'https://www.youtube.com/s/player/5ec65609/player_ias_tcc.vflset/en_US/base.js',
'AAJAJfQdSswRAIgNSN0GDUcHnCIXkKcF61yLBgDHiX1sUhOJdY4_GxunRYCIDeYNYP_16mQTPm5f1OVq3oV1ijUNYPjP4iUSMAjO9bZ',
'AJfQdSswRAIgNSN0GDUcHnCIXkKcF61ZLBgDHiX1sUhOJdY4_GxunRYCIDyYNYP_16mQTPm5f1OVq3oV1ijUNYPjP4iUSMAjO9be',
),
]
_NSIG_TESTS = [
@@ -377,6 +392,18 @@ _NSIG_TESTS = [
'https://www.youtube.com/s/player/ef259203/player_ias_tce.vflset/en_US/base.js',
'rPqBC01nJpqhhi2iA2U', 'hY7dbiKFT51UIA',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es5.vflset/en_US/base.js',
'0hlOAlqjFszVvF4Z', 'R-H23bZGAsRFTg',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es6.vflset/en_US/base.js',
'0hlOAlqjFszVvF4Z', 'R-H23bZGAsRFTg',
),
(
'https://www.youtube.com/s/player/5ec65609/player_ias_tcc.vflset/en_US/base.js',
'6l5CTNx4AzIqH4MXM', 'NupToduxHBew1g',
),
]


@@ -599,7 +599,7 @@ class YoutubeDL:
_NUMERIC_FIELDS = {
'width', 'height', 'asr', 'audio_channels', 'fps',
'tbr', 'abr', 'vbr', 'filesize', 'filesize_approx',
'timestamp', 'release_timestamp',
'timestamp', 'release_timestamp', 'available_at',
'duration', 'view_count', 'like_count', 'dislike_count', 'repost_count',
'average_rating', 'comment_count', 'age_limit',
'start_time', 'end_time',
@@ -609,7 +609,7 @@ class YoutubeDL:
_format_fields = {
# NB: Keep in sync with the docstring of extractor/common.py
'url', 'manifest_url', 'manifest_stream_number', 'ext', 'format', 'format_id', 'format_note',
'url', 'manifest_url', 'manifest_stream_number', 'ext', 'format', 'format_id', 'format_note', 'available_at',
'width', 'height', 'aspect_ratio', 'resolution', 'dynamic_range', 'tbr', 'abr', 'acodec', 'asr', 'audio_channels',
'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx', 'rows', 'columns', 'hls_media_playlist_data',
'player_url', 'protocol', 'fragment_base_url', 'fragments', 'is_from_start', 'is_dash_periods', 'request_data',


@@ -500,6 +500,14 @@ def validate_options(opts):
'To let yt-dlp download and merge the best available formats, simply do not pass any format selection',
'If you know what you are doing and want only the best pre-merged format, use "-f b" instead to suppress this warning')))
# Common mistake: -f mp4
if opts.format == 'mp4':
warnings.append('.\n '.join((
'"-f mp4" selects the best pre-merged mp4 format which is often not what\'s intended',
'Pre-merged mp4 formats are not available from all sites, or may only be available in lower quality',
'To prioritize the best h264 video and aac audio in an mp4 container, use "-t mp4" instead',
'If you know what you are doing and want a pre-merged mp4 format, use "-f b[ext=mp4]" instead to suppress this warning')))
# --(postprocessor/downloader)-args without name
def report_args_compat(name, value, key1, key2=None, where=None):
if key1 in value and key2 not in value:


@@ -125,6 +125,8 @@ def extract_cookies_from_browser(browser_name, profile=None, logger=YDLLogger(),
def _extract_firefox_cookies(profile, container, logger):
MAX_SUPPORTED_DB_SCHEMA_VERSION = 16
logger.info('Extracting cookies from firefox')
if not sqlite3:
logger.warning('Cannot extract cookies from firefox without sqlite3 support. '
@@ -159,9 +161,11 @@ def _extract_firefox_cookies(profile, container, logger):
raise ValueError(f'could not find firefox container "{container}" in containers.json')
with tempfile.TemporaryDirectory(prefix='yt_dlp') as tmpdir:
cursor = None
cursor = _open_database_copy(cookie_database_path, tmpdir)
try:
with contextlib.closing(cursor.connection):
cursor = _open_database_copy(cookie_database_path, tmpdir)
db_schema_version = cursor.execute('PRAGMA user_version;').fetchone()[0]
if db_schema_version > MAX_SUPPORTED_DB_SCHEMA_VERSION:
logger.warning(f'Possibly unsupported firefox cookies database version: {db_schema_version}')
if isinstance(container_id, int):
logger.debug(
f'Only loading cookies from firefox container "{container}", ID {container_id}')
@@ -180,6 +184,10 @@ def _extract_firefox_cookies(profile, container, logger):
total_cookie_count = len(table)
for i, (host, name, value, path, expiry, is_secure) in enumerate(table):
progress_bar.print(f'Loading cookie {i: 6d}/{total_cookie_count: 6d}')
# FF142 upgraded cookies DB to schema version 16 and started using milliseconds for cookie expiry
# Ref: https://github.com/mozilla-firefox/firefox/commit/5869af852cd20425165837f6c2d9971f3efba83d
if db_schema_version >= 16 and expiry is not None:
expiry /= 1000
cookie = http.cookiejar.Cookie(
version=0, name=name, value=value, port=None, port_specified=False,
domain=host, domain_specified=bool(host), domain_initial_dot=host.startswith('.'),
@@ -188,9 +196,6 @@ def _extract_firefox_cookies(profile, container, logger):
jar.set_cookie(cookie)
logger.info(f'Extracted {len(jar)} cookies from firefox')
return jar
finally:
if cursor is not None:
cursor.connection.close()
def _firefox_browser_dirs():
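A standalone sketch of the schema-gated conversion added above, with made-up expiry values: Firefox 142+ bumps the cookies DB to schema version 16 and stores expiry in milliseconds, so it is scaled back to seconds before the cookiejar entry is built.

def normalize_expiry(expiry, db_schema_version):
    # Mirrors the new check in _extract_firefox_cookies: schema >= 16 means
    # the expiry column is in milliseconds rather than seconds.
    if db_schema_version >= 16 and expiry is not None:
        return expiry / 1000
    return expiry

assert normalize_expiry(1_767_225_600_000, 16) == 1_767_225_600
assert normalize_expiry(1_767_225_600, 15) == 1_767_225_600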


@@ -455,14 +455,26 @@ class FileDownloader:
self._finish_multiline_status()
return True, False
sleep_note = ''
if subtitle:
sleep_interval = self.params.get('sleep_interval_subtitles') or 0
else:
min_sleep_interval = self.params.get('sleep_interval') or 0
max_sleep_interval = self.params.get('max_sleep_interval') or 0
if available_at := info_dict.get('available_at'):
forced_sleep_interval = available_at - int(time.time())
if forced_sleep_interval > min_sleep_interval:
sleep_note = 'as required by the site'
min_sleep_interval = forced_sleep_interval
if forced_sleep_interval > max_sleep_interval:
max_sleep_interval = forced_sleep_interval
sleep_interval = random.uniform(
min_sleep_interval, self.params.get('max_sleep_interval') or min_sleep_interval)
min_sleep_interval, max_sleep_interval or min_sleep_interval)
if sleep_interval > 0:
self.to_screen(f'[download] Sleeping {sleep_interval:.2f} seconds ...')
self.to_screen(f'[download] Sleeping {sleep_interval:.2f} seconds {sleep_note}...')
time.sleep(sleep_interval)
ret = self.real_download(filename, info_dict)
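The effect of the new 'available_at' handling, pulled out into a small self-contained sketch (the helper name and example values are made up; the real logic lives inline above): a format that is not yet available stretches the configured sleep window so the download does not start before the site allows it.

import random
import time

def pick_sleep_interval(info_dict, min_sleep=0, max_sleep=0):
    # If the site says the format only becomes available in the future,
    # force the sleep to cover at least that waiting period.
    if available_at := info_dict.get('available_at'):
        forced = available_at - int(time.time())
        if forced > min_sleep:
            min_sleep = forced
            max_sleep = max(max_sleep, forced)
    return random.uniform(min_sleep, max_sleep or min_sleep)

print(pick_sleep_interval({'available_at': int(time.time()) + 30}, min_sleep=1, max_sleep=5))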


@@ -58,13 +58,7 @@ from .adn import (
ADNSeasonIE, ADNSeasonIE,
) )
from .adobeconnect import AdobeConnectIE from .adobeconnect import AdobeConnectIE
from .adobetv import ( from .adobetv import AdobeTVVideoIE
AdobeTVChannelIE,
AdobeTVEmbedIE,
AdobeTVIE,
AdobeTVShowIE,
AdobeTVVideoIE,
)
from .adultswim import AdultSwimIE from .adultswim import AdultSwimIE
from .aenetworks import ( from .aenetworks import (
AENetworksCollectionIE, AENetworksCollectionIE,
@@ -152,7 +146,6 @@ from .ard import (
ARDBetaMediathekIE, ARDBetaMediathekIE,
ARDMediathekCollectionIE, ARDMediathekCollectionIE,
) )
from .arkena import ArkenaIE
from .arnes import ArnesIE from .arnes import ArnesIE
from .art19 import ( from .art19 import (
Art19IE, Art19IE,
@@ -405,16 +398,12 @@ from .cloudflarestream import CloudflareStreamIE
from .cloudycdn import CloudyCDNIE from .cloudycdn import CloudyCDNIE
from .clubic import ClubicIE from .clubic import ClubicIE
from .clyp import ClypIE from .clyp import ClypIE
from .cmt import CMTIE
from .cnbc import CNBCVideoIE from .cnbc import CNBCVideoIE
from .cnn import ( from .cnn import (
CNNIE, CNNIE,
CNNIndonesiaIE, CNNIndonesiaIE,
) )
from .comedycentral import ( from .comedycentral import ComedyCentralIE
ComedyCentralIE,
ComedyCentralTVIE,
)
from .commonmistakes import ( from .commonmistakes import (
BlobIE, BlobIE,
CommonMistakesIE, CommonMistakesIE,
@@ -636,7 +625,10 @@ from .fancode import (
FancodeVodIE, FancodeVodIE,
) )
from .fathom import FathomIE from .fathom import FathomIE
from .faulio import FaulioLiveIE from .faulio import (
FaulioIE,
FaulioLiveIE,
)
from .faz import FazIE from .faz import FazIE
from .fc2 import ( from .fc2 import (
FC2IE, FC2IE,
@@ -1187,15 +1179,7 @@ from .moview import MoviewPlayIE
from .moviezine import MoviezineIE from .moviezine import MoviezineIE
from .movingimage import MovingImageIE from .movingimage import MovingImageIE
from .msn import MSNIE from .msn import MSNIE
from .mtv import ( from .mtv import MTVIE
MTVDEIE,
MTVIE,
MTVItaliaIE,
MTVItaliaProgrammaIE,
MTVJapanIE,
MTVServicesEmbeddedIE,
MTVVideoIE,
)
from .muenchentv import MuenchenTVIE from .muenchentv import MuenchenTVIE
from .murrtube import ( from .murrtube import (
MurrtubeIE, MurrtubeIE,
@@ -1337,12 +1321,7 @@ from .nhk import (
NhkVodProgramIE, NhkVodProgramIE,
) )
from .nhl import NHLIE from .nhl import NHLIE
from .nick import ( from .nick import NickIE
NickBrIE,
NickDeIE,
NickIE,
NickRuIE,
)
from .niconico import ( from .niconico import (
NiconicoHistoryIE, NiconicoHistoryIE,
NiconicoIE, NiconicoIE,
@@ -1548,7 +1527,6 @@ from .pixivsketch import (
PixivSketchIE, PixivSketchIE,
PixivSketchUserIE, PixivSketchUserIE,
) )
from .pladform import PladformIE
from .planetmarathi import PlanetMarathiIE from .planetmarathi import PlanetMarathiIE
from .platzi import ( from .platzi import (
PlatziCourseIE, PlatziCourseIE,
@@ -1931,12 +1909,13 @@ from .soundgasm import (
SoundgasmProfileIE, SoundgasmProfileIE,
) )
from .southpark import ( from .southpark import (
SouthParkComBrIE,
SouthParkCoUkIE,
SouthParkDeIE, SouthParkDeIE,
SouthParkDkIE, SouthParkDkIE,
SouthParkEsIE, SouthParkEsIE,
SouthParkIE, SouthParkIE,
SouthParkLatIE, SouthParkLatIE,
SouthParkNlIE,
) )
from .sovietscloset import ( from .sovietscloset import (
SovietsClosetIE, SovietsClosetIE,
@@ -1947,10 +1926,6 @@ from .spankbang import (
SpankBangPlaylistIE, SpankBangPlaylistIE,
) )
from .spiegel import SpiegelIE from .spiegel import SpiegelIE
from .spike import (
BellatorIE,
ParamountNetworkIE,
)
from .sport5 import Sport5IE from .sport5 import Sport5IE
from .sportbox import SportBoxIE from .sportbox import SportBoxIE
from .sportdeutschland import SportDeutschlandIE from .sportdeutschland import SportDeutschlandIE
@@ -1984,6 +1959,7 @@ from .startrek import StarTrekIE
from .startv import StarTVIE from .startv import StarTVIE
from .steam import ( from .steam import (
SteamCommunityBroadcastIE, SteamCommunityBroadcastIE,
SteamCommunityIE,
SteamIE, SteamIE,
) )
from .stitcher import ( from .stitcher import (
@@ -2215,7 +2191,6 @@ from .tvc import (
from .tver import TVerIE from .tver import TVerIE
from .tvigle import TvigleIE from .tvigle import TvigleIE
from .tviplayer import TVIPlayerIE from .tviplayer import TVIPlayerIE
from .tvland import TVLandIE
from .tvn24 import TVN24IE from .tvn24 import TVN24IE
from .tvnoe import TVNoeIE from .tvnoe import TVNoeIE
from .tvopengr import ( from .tvopengr import (
@@ -2313,10 +2288,6 @@ from .varzesh3 import Varzesh3IE
from .vbox7 import Vbox7IE from .vbox7 import Vbox7IE
from .veo import VeoIE from .veo import VeoIE
from .vesti import VestiIE from .vesti import VestiIE
from .vevo import (
VevoIE,
VevoPlaylistIE,
)
from .vgtv import ( from .vgtv import (
VGTVIE, VGTVIE,
BTArticleIE, BTArticleIE,


@@ -1,297 +1,100 @@
import functools
import re
from .common import InfoExtractor from .common import InfoExtractor
from ..utils import ( from ..utils import (
ISO639Utils, ISO639Utils,
OnDemandPagedList, clean_html,
determine_ext,
float_or_none, float_or_none,
int_or_none, int_or_none,
join_nonempty, join_nonempty,
parse_duration, url_or_none,
str_or_none,
str_to_int,
unified_strdate,
) )
from ..utils.traversal import traverse_obj
class AdobeTVBaseIE(InfoExtractor): class AdobeTVVideoIE(InfoExtractor):
def _call_api(self, path, video_id, query, note=None):
return self._download_json(
'http://tv.adobe.com/api/v4/' + path,
video_id, note, query=query)['data']
def _parse_subtitles(self, video_data, url_key):
subtitles = {}
for translation in video_data.get('translations', []):
vtt_path = translation.get(url_key)
if not vtt_path:
continue
lang = translation.get('language_w3c') or ISO639Utils.long2short(translation['language_medium'])
subtitles.setdefault(lang, []).append({
'ext': 'vtt',
'url': vtt_path,
})
return subtitles
def _parse_video_data(self, video_data):
video_id = str(video_data['id'])
title = video_data['title']
s3_extracted = False
formats = []
for source in video_data.get('videos', []):
source_url = source.get('url')
if not source_url:
continue
f = {
'format_id': source.get('quality_level'),
'fps': int_or_none(source.get('frame_rate')),
'height': int_or_none(source.get('height')),
'tbr': int_or_none(source.get('video_data_rate')),
'width': int_or_none(source.get('width')),
'url': source_url,
}
original_filename = source.get('original_filename')
if original_filename:
if not (f.get('height') and f.get('width')):
mobj = re.search(r'_(\d+)x(\d+)', original_filename)
if mobj:
f.update({
'height': int(mobj.group(2)),
'width': int(mobj.group(1)),
})
if original_filename.startswith('s3://') and not s3_extracted:
formats.append({
'format_id': 'original',
'quality': 1,
'url': original_filename.replace('s3://', 'https://s3.amazonaws.com/'),
})
s3_extracted = True
formats.append(f)
return {
'id': video_id,
'title': title,
'description': video_data.get('description'),
'thumbnail': video_data.get('thumbnail'),
'upload_date': unified_strdate(video_data.get('start_date')),
'duration': parse_duration(video_data.get('duration')),
'view_count': str_to_int(video_data.get('playcount')),
'formats': formats,
'subtitles': self._parse_subtitles(video_data, 'vtt'),
}
class AdobeTVEmbedIE(AdobeTVBaseIE):
_WORKING = False
IE_NAME = 'adobetv:embed'
_VALID_URL = r'https?://tv\.adobe\.com/embed/\d+/(?P<id>\d+)'
_TESTS = [{
'url': 'https://tv.adobe.com/embed/22/4153',
'md5': 'c8c0461bf04d54574fc2b4d07ac6783a',
'info_dict': {
'id': '4153',
'ext': 'flv',
'title': 'Creating Graphics Optimized for BlackBerry',
'description': 'md5:eac6e8dced38bdaae51cd94447927459',
'thumbnail': r're:https?://.+\.jpg',
'upload_date': '20091109',
'duration': 377,
'view_count': int,
},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
video_data = self._call_api(
'episode/' + video_id, video_id, {'disclosure': 'standard'})[0]
return self._parse_video_data(video_data)
class AdobeTVIE(AdobeTVBaseIE):
_WORKING = False
IE_NAME = 'adobetv' IE_NAME = 'adobetv'
_VALID_URL = r'https?://tv\.adobe\.com/(?:(?P<language>fr|de|es|jp)/)?watch/(?P<show_urlname>[^/]+)/(?P<id>[^/]+)'
_TESTS = [{
'url': 'http://tv.adobe.com/watch/the-complete-picture-with-julieanne-kost/quick-tip-how-to-draw-a-circle-around-an-object-in-photoshop/',
'md5': '9bc5727bcdd55251f35ad311ca74fa1e',
'info_dict': {
'id': '10981',
'ext': 'mp4',
'title': 'Quick Tip - How to Draw a Circle Around an Object in Photoshop',
'description': 'md5:99ec318dc909d7ba2a1f2b038f7d2311',
'thumbnail': r're:https?://.+\.jpg',
'upload_date': '20110914',
'duration': 60,
'view_count': int,
},
}]
def _real_extract(self, url):
language, show_urlname, urlname = self._match_valid_url(url).groups()
if not language:
language = 'en'
video_data = self._call_api(
'episode/get', urlname, {
'disclosure': 'standard',
'language': language,
'show_urlname': show_urlname,
'urlname': urlname,
})[0]
return self._parse_video_data(video_data)
class AdobeTVPlaylistBaseIE(AdobeTVBaseIE):
_PAGE_SIZE = 25
def _fetch_page(self, display_id, query, page):
page += 1
query['page'] = page
for element_data in self._call_api(
self._RESOURCE, display_id, query, f'Download Page {page}'):
yield self._process_data(element_data)
def _extract_playlist_entries(self, display_id, query):
return OnDemandPagedList(functools.partial(
self._fetch_page, display_id, query), self._PAGE_SIZE)
class AdobeTVShowIE(AdobeTVPlaylistBaseIE):
_WORKING = False
IE_NAME = 'adobetv:show'
_VALID_URL = r'https?://tv\.adobe\.com/(?:(?P<language>fr|de|es|jp)/)?show/(?P<id>[^/]+)'
_TESTS = [{
'url': 'http://tv.adobe.com/show/the-complete-picture-with-julieanne-kost',
'info_dict': {
'id': '36',
'title': 'The Complete Picture with Julieanne Kost',
'description': 'md5:fa50867102dcd1aa0ddf2ab039311b27',
},
'playlist_mincount': 136,
}]
_RESOURCE = 'episode'
_process_data = AdobeTVBaseIE._parse_video_data
def _real_extract(self, url):
language, show_urlname = self._match_valid_url(url).groups()
if not language:
language = 'en'
query = {
'disclosure': 'standard',
'language': language,
'show_urlname': show_urlname,
}
show_data = self._call_api(
'show/get', show_urlname, query)[0]
return self.playlist_result(
self._extract_playlist_entries(show_urlname, query),
str_or_none(show_data.get('id')),
show_data.get('show_name'),
show_data.get('show_description'))
class AdobeTVChannelIE(AdobeTVPlaylistBaseIE):
_WORKING = False
IE_NAME = 'adobetv:channel'
_VALID_URL = r'https?://tv\.adobe\.com/(?:(?P<language>fr|de|es|jp)/)?channel/(?P<id>[^/]+)(?:/(?P<category_urlname>[^/]+))?'
_TESTS = [{
'url': 'http://tv.adobe.com/channel/development',
'info_dict': {
'id': 'development',
},
'playlist_mincount': 96,
}]
_RESOURCE = 'show'
def _process_data(self, show_data):
return self.url_result(
show_data['url'], 'AdobeTVShow', str_or_none(show_data.get('id')))
def _real_extract(self, url):
language, channel_urlname, category_urlname = self._match_valid_url(url).groups()
if not language:
language = 'en'
query = {
'channel_urlname': channel_urlname,
'language': language,
}
if category_urlname:
query['category_urlname'] = category_urlname
return self.playlist_result(
self._extract_playlist_entries(channel_urlname, query),
channel_urlname)
class AdobeTVVideoIE(AdobeTVBaseIE):
IE_NAME = 'adobetv:video'
_VALID_URL = r'https?://video\.tv\.adobe\.com/v/(?P<id>\d+)' _VALID_URL = r'https?://video\.tv\.adobe\.com/v/(?P<id>\d+)'
_EMBED_REGEX = [r'<iframe[^>]+src=[\'"](?P<url>(?:https?:)?//video\.tv\.adobe\.com/v/\d+[^"]+)[\'"]'] _EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?:)?//video\.tv\.adobe\.com/v/\d+)']
_TESTS = [{ _TESTS = [{
# From https://helpx.adobe.com/acrobat/how-to/new-experience-acrobat-dc.html?set=acrobat--get-started--essential-beginners 'url': 'https://video.tv.adobe.com/v/2456',
'url': 'https://video.tv.adobe.com/v/2456/',
'md5': '43662b577c018ad707a63766462b1e87', 'md5': '43662b577c018ad707a63766462b1e87',
'info_dict': { 'info_dict': {
'id': '2456', 'id': '2456',
'ext': 'mp4', 'ext': 'mp4',
'title': 'New experience with Acrobat DC', 'title': 'New experience with Acrobat DC',
'description': 'New experience with Acrobat DC', 'description': 'New experience with Acrobat DC',
'duration': 248.667, 'duration': 248.522,
'thumbnail': r're:https?://images-tv\.adobe\.com/.+\.jpg',
},
}, {
'url': 'https://video.tv.adobe.com/v/3463980/adobe-acrobat',
'info_dict': {
'id': '3463980',
'ext': 'mp4',
'title': 'Adobe Acrobat: How to Customize the Toolbar for Faster PDF Editing',
'description': 'md5:94368ab95ae24f9c1bee0cb346e03dc3',
'duration': 97.514,
'thumbnail': r're:https?://images-tv\.adobe\.com/.+\.jpg', 'thumbnail': r're:https?://images-tv\.adobe\.com/.+\.jpg',
}, },
}] }]
_WEBPAGE_TESTS = [{ _WEBPAGE_TESTS = [{
# FIXME: Invalid extension # https://video.tv.adobe.com/v/3442499
'url': 'https://www.adobe.com/learn/acrobat/web/customize-toolbar', 'url': 'https://business.adobe.com/dx-fragments/summit/2025/marquees/S335/ondemand.live.html',
'info_dict': { 'info_dict': {
'id': '3463980', 'id': '3442499',
'ext': 'm3u8', 'ext': 'mp4',
'title': 'Adobe Acrobat: How to Customize the Toolbar for Faster PDF Editing', 'title': 'S335 - Beyond Personalization: Creating Intent-Based Experiences at Scale',
'description': 'md5:94368ab95ae24f9c1bee0cb346e03dc3', 'description': 'Beyond Personalization: Creating Intent-Based Experiences at Scale',
'duration': 97.557, 'duration': 2906.8,
'thumbnail': r're:https?://images-tv\.adobe\.com/.+\.jpg',
}, },
}] }]
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url) video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id) webpage = self._download_webpage(url, video_id)
video_data = self._search_json(
video_data = self._parse_json(self._search_regex( r'var\s+bridge\s*=', webpage, 'bridged data', video_id)
r'var\s+bridge\s*=\s*([^;]+);', webpage, 'bridged data'), video_id)
title = video_data['title']
formats = [] formats = []
sources = video_data.get('sources') or [] for source in traverse_obj(video_data, (
for source in sources: 'sources', lambda _, v: v['format'] != 'playlist' and url_or_none(v['src']),
source_src = source.get('src') )):
if not source_src: source_url = self._proto_relative_url(source['src'])
continue if determine_ext(source_url) == 'm3u8':
formats.append({ fmts = self._extract_m3u8_formats(
'filesize': int_or_none(source.get('kilobytes') or None, invscale=1000), source_url, video_id, 'mp4', m3u8_id='hls', fatal=False)
'format_id': join_nonempty(source.get('format'), source.get('label')), else:
'height': int_or_none(source.get('height') or None), fmts = [{'url': source_url}]
'tbr': int_or_none(source.get('bitrate') or None),
'width': int_or_none(source.get('width') or None),
'url': source_src,
})
# For both metadata and downloaded files the duration varies among for fmt in fmts:
# formats. I just pick the max one fmt.update(traverse_obj(source, {
duration = max(filter(None, [ 'duration': ('duration', {float_or_none(scale=1000)}),
float_or_none(source.get('duration'), scale=1000) 'filesize': ('kilobytes', {float_or_none(invscale=1000)}),
for source in sources])) 'format_id': (('format', 'label'), {str}, all, {lambda x: join_nonempty(*x)}),
'height': ('height', {int_or_none}),
'tbr': ('bitrate', {int_or_none}),
'width': ('width', {int_or_none}),
}))
formats.extend(fmts)
subtitles = {}
for translation in traverse_obj(video_data, (
'translations', lambda _, v: url_or_none(v['vttPath']),
)):
lang = translation.get('language_w3c') or ISO639Utils.long2short(translation.get('language_medium')) or 'und'
subtitles.setdefault(lang, []).append({
'ext': 'vtt',
'url': self._proto_relative_url(translation['vttPath']),
})
return { return {
'id': video_id, 'id': video_id,
'formats': formats, 'formats': formats,
'title': title, 'subtitles': subtitles,
'description': video_data.get('description'), **traverse_obj(video_data, {
'thumbnail': video_data.get('video', {}).get('poster'), 'title': ('title', {clean_html}),
'duration': duration, 'description': ('description', {clean_html}, filter),
'subtitles': self._parse_subtitles(video_data, 'vttPath'), 'thumbnail': ('video', 'poster', {self._proto_relative_url}, {url_or_none}),
}),
} }
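The rewritten extractor leans on traverse_obj filters to keep only usable sources. A small illustration of that filtering pattern with made-up data (not taken from the extractor's tests):

from yt_dlp.utils import url_or_none
from yt_dlp.utils.traversal import traverse_obj

video_data = {'sources': [
    {'format': 'playlist', 'src': 'https://example.com/master.m3u8'},
    {'format': 'video', 'src': 'https://example.com/720.mp4'},
    {'format': 'video', 'src': None},
]}
# Only sources with a usable 'src' and a format other than 'playlist' survive
usable = traverse_obj(video_data, (
    'sources', lambda _, v: v['format'] != 'playlist' and url_or_none(v['src'])))
assert [s['src'] for s in usable] == ['https://example.com/720.mp4']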


@@ -1,150 +0,0 @@
from .common import InfoExtractor
from ..utils import (
ExtractorError,
float_or_none,
int_or_none,
parse_iso8601,
parse_qs,
try_get,
)
class ArkenaIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://
(?:
video\.(?:arkena|qbrick)\.com/play2/embed/player\?|
play\.arkena\.com/(?:config|embed)/avp/v\d/player/media/(?P<id>[^/]+)/[^/]+/(?P<account_id>\d+)
)
'''
# See https://support.arkena.com/display/PLAY/Ways+to+embed+your+video
_EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//play\.arkena\.com/embed/avp/.+?)\1']
_TESTS = [{
'url': 'https://video.qbrick.com/play2/embed/player?accountId=1034090&mediaId=d8ab4607-00090107-aab86310',
'md5': '97f117754e5f3c020f5f26da4a44ebaf',
'info_dict': {
'id': 'd8ab4607-00090107-aab86310',
'ext': 'mp4',
'title': 'EM_HT20_117_roslund_v2.mp4',
'timestamp': 1608285912,
'upload_date': '20201218',
'duration': 1429.162667,
'subtitles': {
'sv': 'count:3',
},
},
}, {
'url': 'https://play.arkena.com/embed/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411',
'only_matching': True,
}, {
'url': 'https://play.arkena.com/config/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411/?callbackMethod=jQuery1111023664739129262213_1469227693893',
'only_matching': True,
}, {
'url': 'http://play.arkena.com/config/avp/v1/player/media/327336/darkmatter/131064/?callbackMethod=jQuery1111002221189684892677_1469227595972',
'only_matching': True,
}, {
'url': 'http://play.arkena.com/embed/avp/v1/player/media/327336/darkmatter/131064/',
'only_matching': True,
}, {
'url': 'http://video.arkena.com/play2/embed/player?accountId=472718&mediaId=35763b3b-00090078-bf604299&pageStyling=styled',
'only_matching': True,
}]
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('id')
account_id = mobj.group('account_id')
# Handle http://video.arkena.com/play2/embed/player URL
if not video_id:
qs = parse_qs(url)
video_id = qs.get('mediaId', [None])[0]
account_id = qs.get('accountId', [None])[0]
if not video_id or not account_id:
raise ExtractorError('Invalid URL', expected=True)
media = self._download_json(
f'https://video.qbrick.com/api/v1/public/accounts/{account_id}/medias/{video_id}',
video_id, query={
# https://video.qbrick.com/docs/api/examples/library-api.html
'fields': 'asset/resources/*/renditions/*(height,id,language,links/*(href,mimeType),type,size,videos/*(audios/*(codec,sampleRate),bitrate,codec,duration,height,width),width),created,metadata/*(title,description),tags',
})
metadata = media.get('metadata') or {}
title = metadata['title']
duration = None
formats = []
thumbnails = []
subtitles = {}
for resource in media['asset']['resources']:
for rendition in (resource.get('renditions') or []):
rendition_type = rendition.get('type')
for i, link in enumerate(rendition.get('links') or []):
href = link.get('href')
if not href:
continue
if rendition_type == 'image':
thumbnails.append({
'filesize': int_or_none(rendition.get('size')),
'height': int_or_none(rendition.get('height')),
'id': rendition.get('id'),
'url': href,
'width': int_or_none(rendition.get('width')),
})
elif rendition_type == 'subtitle':
subtitles.setdefault(rendition.get('language') or 'en', []).append({
'url': href,
})
elif rendition_type == 'video':
f = {
'filesize': int_or_none(rendition.get('size')),
'format_id': rendition.get('id'),
'url': href,
}
video = try_get(rendition, lambda x: x['videos'][i], dict)
if video:
if not duration:
duration = float_or_none(video.get('duration'))
f.update({
'height': int_or_none(video.get('height')),
'tbr': int_or_none(video.get('bitrate'), 1000),
'vcodec': video.get('codec'),
'width': int_or_none(video.get('width')),
})
audio = try_get(video, lambda x: x['audios'][0], dict)
if audio:
f.update({
'acodec': audio.get('codec'),
'asr': int_or_none(audio.get('sampleRate')),
})
formats.append(f)
elif rendition_type == 'index':
mime_type = link.get('mimeType')
if mime_type == 'application/smil+xml':
formats.extend(self._extract_smil_formats(
href, video_id, fatal=False))
elif mime_type == 'application/x-mpegURL':
formats.extend(self._extract_m3u8_formats(
href, video_id, 'mp4', 'm3u8_native',
m3u8_id='hls', fatal=False))
elif mime_type == 'application/hds+xml':
formats.extend(self._extract_f4m_formats(
href, video_id, f4m_id='hds', fatal=False))
elif mime_type == 'application/dash+xml':
formats.extend(self._extract_mpd_formats(
href, video_id, mpd_id='dash', fatal=False))
elif mime_type == 'application/vnd.ms-sstr+xml':
formats.extend(self._extract_ism_formats(
href, video_id, ism_id='mss', fatal=False))
return {
'id': video_id,
'title': title,
'description': metadata.get('description'),
'timestamp': parse_iso8601(media.get('created')),
'thumbnails': thumbnails,
'subtitles': subtitles,
'duration': duration,
'tags': media.get('tags'),
'formats': formats,
}


@@ -4,7 +4,7 @@ from .common import InfoExtractor
from ..utils import (
ExtractorError,
float_or_none,
jwt_encode_hs256,
jwt_encode,
try_get,
)
@@ -83,11 +83,10 @@ class ATVAtIE(InfoExtractor):
'nbf': int(not_before.timestamp()),
'exp': int(expire.timestamp()),
}
jwt_token = jwt_encode_hs256(payload, self._ENCRYPTION_KEY, headers={'kid': self._ACCESS_ID})
videos = self._download_json(
'https://vas-v4.p7s1video.net/4.0/getsources',
content_id, 'Downloading videos JSON', query={
'token': jwt_token.decode('utf-8'),
'token': jwt_encode(payload, self._ENCRYPTION_KEY, headers={'kid': self._ACCESS_ID}),
})
video_id, videos_data = next(iter(videos['data'].items()))


@@ -1,79 +1,47 @@
from .mtv import MTVServicesInfoExtractor from .mtv import MTVServicesBaseIE
from ..utils import unified_strdate
class BetIE(MTVServicesInfoExtractor): class BetIE(MTVServicesBaseIE):
_WORKING = False _VALID_URL = r'https?://(?:www\.)?bet\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
_VALID_URL = r'https?://(?:www\.)?bet\.com/(?:[^/]+/)+(?P<id>.+?)\.html' _TESTS = [{
_TESTS = [ 'url': 'https://www.bet.com/video-clips/w9mk7v',
{ 'info_dict': {
'url': 'http://www.bet.com/news/politics/2014/12/08/in-bet-exclusive-obama-talks-race-and-racism.html', 'id': '3022d121-d191-43fd-b5fb-b2c26f335497',
'info_dict': { 'ext': 'mp4',
'id': '07e96bd3-8850-3051-b856-271b457f0ab8', 'display_id': 'w9mk7v',
'display_id': 'in-bet-exclusive-obama-talks-race-and-racism', 'title': 'New Normal',
'ext': 'flv', 'description': 'md5:d7898c124713b4646cecad9d16ff01f3',
'title': 'A Conversation With President Obama', 'duration': 30.08,
'description': 'President Obama urges persistence in confronting racism and bias.', 'series': 'Tyler Perry\'s Sistas',
'duration': 1534, 'season': 'Season 0',
'upload_date': '20141208', 'season_number': 0,
'thumbnail': r're:(?i)^https?://.*\.jpg$', 'episode': 'Episode 0',
'subtitles': { 'episode_number': 0,
'en': 'mincount:2', 'timestamp': 1755269073,
}, 'upload_date': '20250815',
},
'params': {
# rtmp download
'skip_download': True,
},
}, },
{ 'params': {'skip_download': 'm3u8'},
'url': 'http://www.bet.com/video/news/national/2014/justice-for-ferguson-a-community-reacts.html', }, {
'info_dict': { 'url': 'https://www.bet.com/episodes/nmce72/tyler-perry-s-sistas-heavy-is-the-crown-season-9-ep-5',
'id': '9f516bf1-7543-39c4-8076-dd441b459ba9', 'info_dict': {
'display_id': 'justice-for-ferguson-a-community-reacts', 'id': '6427562b-3029-11f0-b405-16fff45bc035',
'ext': 'flv', 'ext': 'mp4',
'title': 'Justice for Ferguson: A Community Reacts', 'display_id': 'nmce72',
'description': 'A BET News special.', 'title': 'Heavy Is the Crown',
'duration': 1696, 'description': 'md5:1ed345d3157a50572d2464afcc7a652a',
'upload_date': '20141125', 'channel': 'BET',
'thumbnail': r're:(?i)^https?://.*\.jpg$', 'duration': 2550.0,
'subtitles': { 'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref',
'en': 'mincount:2', 'series': 'Tyler Perry\'s Sistas',
}, 'season': 'Season 9',
}, 'season_number': 9,
'params': { 'episode': 'Episode 5',
# rtmp download 'episode_number': 5,
'skip_download': True, 'timestamp': 1755165600,
}, 'upload_date': '20250814',
'release_timestamp': 1755129600,
'release_date': '20250814',
}, },
] 'params': {'skip_download': 'm3u8'},
'skip': 'Requires provider sign-in',
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/bet-mrss-player' }]
def _get_feed_query(self, uri):
return {
'uuid': uri,
}
def _extract_mgid(self, webpage):
return self._search_regex(r'data-uri="([^"]+)', webpage, 'mgid')
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
mgid = self._extract_mgid(webpage)
videos_info = self._get_videos_info(mgid)
info_dict = videos_info['entries'][0]
upload_date = unified_strdate(self._html_search_meta('date', webpage))
description = self._html_search_meta('description', webpage)
info_dict.update({
'display_id': display_id,
'description': description,
'upload_date': upload_date,
})
return info_dict


@@ -304,7 +304,7 @@ class BilibiliBaseIE(InfoExtractor):
class BiliBiliIE(BilibiliBaseIE): class BiliBiliIE(BilibiliBaseIE):
_VALID_URL = r'https?://(?:www\.)?bilibili\.com/(?:video/|festival/[^/?#]+\?(?:[^#]*&)?bvid=)[aAbB][vV](?P<id>[^/?#&]+)' _VALID_URL = r'https?://(?:www\.)?bilibili\.com/(?:video/|festival/[^/?#]+\?(?:[^#]*&)?bvid=)(?P<prefix>[aAbB][vV])(?P<id>[^/?#&]+)'
_TESTS = [{ _TESTS = [{
'url': 'https://www.bilibili.com/video/BV13x41117TL', 'url': 'https://www.bilibili.com/video/BV13x41117TL',
@@ -563,7 +563,7 @@ class BiliBiliIE(BilibiliBaseIE):
}, },
}], }],
}, { }, {
'note': '301 redirect to bangumi link', 'note': 'redirect from bvid to bangumi link via redirect_url',
'url': 'https://www.bilibili.com/video/BV1TE411f7f1', 'url': 'https://www.bilibili.com/video/BV1TE411f7f1',
'info_dict': { 'info_dict': {
'id': '288525', 'id': '288525',
@@ -580,7 +580,27 @@ class BiliBiliIE(BilibiliBaseIE):
'duration': 1183.957, 'duration': 1183.957,
'timestamp': 1571648124, 'timestamp': 1571648124,
'upload_date': '20191021', 'upload_date': '20191021',
'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)$', 'thumbnail': r're:https?://.*\.(jpg|jpeg|png)$',
},
}, {
'note': 'redirect from aid to bangumi link via redirect_url',
'url': 'https://www.bilibili.com/video/av114868162141203',
'info_dict': {
'id': '1933368',
'title': 'PV 引爆变革的起点',
'ext': 'mp4',
'duration': 63.139,
'series': '时光代理人',
'series_id': '5183',
'season': '第三季',
'season_number': 4,
'season_id': '105212',
'episode': '引爆变革的起点',
'episode_number': 1,
'episode_id': '1933368',
'timestamp': 1752849001,
'upload_date': '20250718',
'thumbnail': r're:https?://.*\.(jpg|jpeg|png)$',
}, },
}, { }, {
'note': 'video has subtitles, which requires login', 'note': 'video has subtitles, which requires login',
@@ -636,7 +656,7 @@ class BiliBiliIE(BilibiliBaseIE):
}] }]
def _real_extract(self, url): def _real_extract(self, url):
video_id = self._match_id(url) video_id, prefix = self._match_valid_url(url).group('id', 'prefix')
headers = self.geo_verification_headers() headers = self.geo_verification_headers()
webpage, urlh = self._download_webpage_handle(url, video_id, headers=headers) webpage, urlh = self._download_webpage_handle(url, video_id, headers=headers)
if not self._match_valid_url(urlh.url): if not self._match_valid_url(urlh.url):
@@ -644,7 +664,24 @@ class BiliBiliIE(BilibiliBaseIE):
headers['Referer'] = url headers['Referer'] = url
initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id) initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id, default=None)
if not initial_state:
if self._search_json(r'\bwindow\._riskdata_\s*=', webpage, 'risk', video_id, default={}).get('v_voucher'):
raise ExtractorError('You have exceeded the rate limit. Try again later', expected=True)
query = {'platform': 'web'}
prefix = prefix.upper()
if prefix == 'BV':
query['bvid'] = prefix + video_id
elif prefix == 'AV':
query['aid'] = video_id
detail = self._download_json(
'https://api.bilibili.com/x/web-interface/wbi/view/detail', video_id,
note='Downloading redirection URL', errnote='Failed to download redirection URL',
query=self._sign_wbi(query, video_id), headers=headers)
new_url = traverse_obj(detail, ('data', 'View', 'redirect_url', {url_or_none}))
if new_url and BiliBiliBangumiIE.suitable(new_url):
return self.url_result(new_url, BiliBiliBangumiIE)
raise ExtractorError('Unable to extract initial state')
if traverse_obj(initial_state, ('error', 'trueCode')) == -403: if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
self.raise_login_required() self.raise_login_required()
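For clarity, the bvid/aid branch of the new fallback request, isolated with example IDs (the helper name is illustrative; in the extractor the resulting query is passed through self._sign_wbi before the API call):

def build_view_detail_query(prefix, video_id):
    # Mirrors the branch above: BV URLs send 'bvid', AV URLs send the numeric id
    query = {'platform': 'web'}
    if prefix.upper() == 'BV':
        query['bvid'] = 'BV' + video_id
    else:  # 'AV'
        query['aid'] = video_id
    return query

assert build_view_detail_query('BV', '1TE411f7f1') == {'platform': 'web', 'bvid': 'BV1TE411f7f1'}
assert build_view_detail_query('av', '114868162141203') == {'platform': 'web', 'aid': '114868162141203'}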


@@ -1,55 +0,0 @@
from .mtv import MTVIE
# TODO: Remove - Reason: Outdated Site
class CMTIE(MTVIE): # XXX: Do not subclass from concrete IE
_WORKING = False
IE_NAME = 'cmt.com'
_VALID_URL = r'https?://(?:www\.)?cmt\.com/(?:videos|shows|(?:full-)?episodes|video-clips)/(?P<id>[^/]+)'
_TESTS = [{
'url': 'http://www.cmt.com/videos/garth-brooks/989124/the-call-featuring-trisha-yearwood.jhtml#artist=30061',
'md5': 'e6b7ef3c4c45bbfae88061799bbba6c2',
'info_dict': {
'id': '989124',
'ext': 'mp4',
'title': 'Garth Brooks - "The Call (featuring Trisha Yearwood)"',
'description': 'Blame It All On My Roots',
},
'skip': 'Video not available',
}, {
'url': 'http://www.cmt.com/videos/misc/1504699/still-the-king-ep-109-in-3-minutes.jhtml#id=1739908',
'md5': 'e61a801ca4a183a466c08bd98dccbb1c',
'info_dict': {
'id': '1504699',
'ext': 'mp4',
'title': 'Still The King Ep. 109 in 3 Minutes',
'description': 'Relive or catch up with Still The King by watching this recap of season 1, episode 9.',
'timestamp': 1469421000.0,
'upload_date': '20160725',
},
}, {
'url': 'http://www.cmt.com/shows/party-down-south/party-down-south-ep-407-gone-girl/1738172/playlist/#id=1738172',
'only_matching': True,
}, {
'url': 'http://www.cmt.com/full-episodes/537qb3/nashville-the-wayfaring-stranger-season-5-ep-501',
'only_matching': True,
}, {
'url': 'http://www.cmt.com/video-clips/t9e4ci/nashville-juliette-in-2-minutes',
'only_matching': True,
}]
def _extract_mgid(self, webpage, url):
mgid = self._search_regex(
r'MTVN\.VIDEO\.contentUri\s*=\s*([\'"])(?P<mgid>.+?)\1',
webpage, 'mgid', group='mgid', default=None)
if not mgid:
mgid = self._extract_triforce_mgid(webpage)
return mgid
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
mgid = self._extract_mgid(webpage, url)
return self.url_result(f'http://media.mtvnservices.com/embed/{mgid}')


@@ -1,55 +1,27 @@
from .mtv import MTVServicesInfoExtractor from .mtv import MTVServicesBaseIE
class ComedyCentralIE(MTVServicesInfoExtractor): class ComedyCentralIE(MTVServicesBaseIE):
_VALID_URL = r'https?://(?:www\.)?cc\.com/(?:episodes|video(?:-clips)?|collection-playlist|movies)/(?P<id>[0-9a-z]{6})' _VALID_URL = r'https?://(?:www\.)?cc\.com/video-clips/(?P<id>[\da-z]{6})'
_FEED_URL = 'http://comedycentral.com/feeds/mrss/'
_TESTS = [{ _TESTS = [{
'url': 'http://www.cc.com/video-clips/5ke9v2/the-daily-show-with-trevor-noah-doc-rivers-and-steve-ballmer---the-nba-player-strike', 'url': 'https://www.cc.com/video-clips/wl12cx',
'md5': 'b8acb347177c680ff18a292aa2166f80',
'info_dict': { 'info_dict': {
'id': '89ccc86e-1b02-4f83-b0c9-1d9592ecd025', 'id': 'dec6953e-80c8-43b3-96cd-05e9230e704d',
'ext': 'mp4', 'ext': 'mp4',
'title': 'The Daily Show with Trevor Noah|August 28, 2020|25|25149|Doc Rivers and Steve Ballmer - The NBA Player Strike', 'display_id': 'wl12cx',
'description': 'md5:5334307c433892b85f4f5e5ac9ef7498', 'title': 'Alison Brie and Dave Franco -"Together"- Extended Interview',
'timestamp': 1598670000, 'description': 'md5:ec68e38d3282f863de9cde0ce5cd231c',
'upload_date': '20200829', 'duration': 516.76,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'The Daily Show',
'season': 'Season 30',
'season_number': 30,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1753973314,
'upload_date': '20250731',
'release_timestamp': 1753977914,
'release_date': '20250731',
}, },
}, { 'params': {'skip_download': 'm3u8'},
'url': 'http://www.cc.com/episodes/pnzzci/drawn-together--american-idol--parody-clip-show-season-3-ep-314',
'only_matching': True,
}, {
'url': 'https://www.cc.com/video/k3sdvm/the-daily-show-with-jon-stewart-exclusive-the-fourth-estate',
'only_matching': True,
}, {
'url': 'https://www.cc.com/collection-playlist/cosnej/stand-up-specials/t6vtjb',
'only_matching': True,
}, {
'url': 'https://www.cc.com/movies/tkp406/a-cluesterfuenke-christmas',
'only_matching': True,
}] }]
class ComedyCentralTVIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?comedycentral\.tv/folgen/(?P<id>[0-9a-z]{6})'
_TESTS = [{
'url': 'https://www.comedycentral.tv/folgen/pxdpec/josh-investigates-klimawandel-staffel-1-ep-1',
'info_dict': {
'id': '15907dc3-ec3c-11e8-a442-0e40cf2fc285',
'ext': 'mp4',
'title': 'Josh Investigates',
'description': 'Steht uns das Ende der Welt bevor?',
},
}]
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
_GEO_COUNTRIES = ['DE']
def _get_feed_query(self, uri):
return {
'accountOverride': 'intl.mtvi.com',
'arcEp': 'web.cc.tv',
'ep': 'b9032c3a',
'imageEp': 'web.cc.tv',
'mgid': uri,
}


@@ -263,6 +263,7 @@ class InfoExtractor:
* a string in the format of CLIENT[:OS]
* a list or a tuple of CLIENT[:OS] strings or ImpersonateTarget instances
* a boolean value; True means any impersonate target is sufficient
* available_at Unix timestamp of when a format will be available to download
* downloader_options A dictionary of downloader options
(For internal use only)
* http_chunk_size Chunk size for HTTP downloads
@@ -1527,11 +1528,11 @@ class InfoExtractor:
r'>\s*(?:18\s+U(?:\.S\.C\.|SC)\s+)?(?:§+\s*)?2257\b',
]
age_limit = 0
age_limit = None
for marker in AGE_LIMIT_MARKERS:
mobj = re.search(marker, html)
if mobj:
age_limit = max(age_limit, int(traverse_obj(mobj, 1, default=18)))
age_limit = max(age_limit or 0, int(traverse_obj(mobj, 1, default=18)))
return age_limit
def _media_rating_search(self, html):
@@ -2968,7 +2969,7 @@ class InfoExtractor:
else:
codecs = parse_codecs(codec_str)
if content_type not in ('video', 'audio', 'text'):
if mime_type == 'image/jpeg':
if mime_type in ('image/avif', 'image/jpeg'):
content_type = mime_type
elif codecs.get('vcodec', 'none') != 'none':
content_type = 'video'
@@ -3028,14 +3029,14 @@ class InfoExtractor:
'manifest_url': mpd_url,
'filesize': filesize,
}
elif content_type == 'image/jpeg':
elif content_type in ('image/avif', 'image/jpeg'):
# See test case in VikiIE
# https://www.viki.com/videos/1175236v-choosing-spouse-by-lottery-episode-1
f = {
'format_id': format_id,
'ext': 'mhtml',
'manifest_url': mpd_url,
'format_note': 'DASH storyboards (jpeg)',
'format_note': f'DASH storyboards ({mimetype2ext(mime_type)})',
'acodec': 'none',
'vcodec': 'none',
}
@@ -3107,7 +3108,6 @@ class InfoExtractor:
else:
# $Number*$ or $Time$ in media template with S list available
# Example $Number*$: http://www.svtplay.se/klipp/9023742/stopptid-om-bjorn-borg
# Example $Time$: https://play.arkena.com/embed/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411
representation_ms_info['fragments'] = []
segment_time = 0
segment_d = None
@@ -3177,7 +3177,7 @@ class InfoExtractor:
'url': mpd_url or base_url,
'fragment_base_url': base_url,
'fragments': [],
'protocol': 'http_dash_segments' if mime_type != 'image/jpeg' else 'mhtml',
'protocol': 'mhtml' if mime_type in ('image/avif', 'image/jpeg') else 'http_dash_segments',
})
if 'initialization_url' in representation_ms_info:
initialization_url = representation_ms_info['initialization_url']
@@ -3192,7 +3192,7 @@ class InfoExtractor:
else:
# Assuming direct URL to unfragmented media.
f['url'] = base_url
if content_type in ('video', 'audio', 'image/jpeg'):
if content_type in ('video', 'audio', 'image/avif', 'image/jpeg'):
f['manifest_stream_number'] = stream_numbers[f['url']]
stream_numbers[f['url']] += 1
period_entry['formats'].append(f)
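A tiny check mirroring the widened MIME-type gate in this file (illustrative helper; the real logic is inline above): AVIF thumbnail tracks are now treated like the JPEG ones and routed to the mhtml storyboard protocol.

STORYBOARD_MIME_TYPES = ('image/avif', 'image/jpeg')

def storyboard_protocol(mime_type):
    # Storyboard image tracks go through the mhtml pseudo-protocol;
    # everything else keeps using regular DASH segments.
    return 'mhtml' if mime_type in STORYBOARD_MIME_TYPES else 'http_dash_segments'

assert storyboard_protocol('image/avif') == 'mhtml'
assert storyboard_protocol('video/mp4') == 'http_dash_segments'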


@@ -171,17 +171,6 @@ class DailymotionIE(DailymotionBaseInfoExtractor):
'view_count': int,
},
'skip': 'video gone',
}, {
# Vevo video
'url': 'http://www.dailymotion.com/video/x149uew_katy-perry-roar-official_musi',
'info_dict': {
'title': 'Roar (Official)',
'id': 'USUV71301934',
'ext': 'mp4',
'uploader': 'Katy Perry',
'upload_date': '20130905',
},
'skip': 'Invalid URL',
}, {
# age-restricted video
'url': 'http://www.dailymotion.com/video/xyh2zz_leanna-decker-cyber-girl-of-the-year-desires-nude-playboy-plus_redband',


@@ -90,10 +90,6 @@ class DisneyIE(InfoExtractor):
webpage, 'embed data'), video_id)
video_data = page_data['video']
for external in video_data.get('externals', []):
if external.get('source') == 'vevo':
return self.url_result('vevo:' + external['data_id'], 'Vevo')
video_id = video_data['id']
title = video_data['title']


@@ -2,18 +2,152 @@ import re
import urllib.parse
from .common import InfoExtractor
-from ..utils import js_to_json, url_or_none
+from ..utils import int_or_none, js_to_json, url_or_none
from ..utils.traversal import traverse_obj
-class FaulioLiveIE(InfoExtractor):
+class FaulioBaseIE(InfoExtractor):
_DOMAINS = (
'aloula.sba.sa',
'bahry.com',
'maraya.sba.net.ae',
'sat7plus.org',
)
-_VALID_URL = fr'https?://(?:{"|".join(map(re.escape, _DOMAINS))})/(?:(?:en|ar|fa)/)?live/(?P<id>[a-zA-Z0-9-]+)'
+_LANGUAGES = ('ar', 'en', 'fa')
+_BASE_URL_RE = fr'https?://(?:{"|".join(map(re.escape, _DOMAINS))})/(?:(?:{"|".join(_LANGUAGES)})/)?'
def _get_headers(self, url):
parsed_url = urllib.parse.urlparse(url)
return {
'Referer': url,
'Origin': f'{parsed_url.scheme}://{parsed_url.hostname}',
}
def _get_api_base(self, url, video_id):
webpage = self._download_webpage(url, video_id)
config_data = self._search_json(
r'window\.__NUXT__\.config=', webpage, 'config', video_id, transform_source=js_to_json)
return config_data['public']['TRANSLATIONS_API_URL']
class FaulioIE(FaulioBaseIE):
_VALID_URL = fr'{FaulioBaseIE._BASE_URL_RE}(?:episode|media)/(?P<id>[a-zA-Z0-9-]+)'
_TESTS = [{
'url': 'https://aloula.sba.sa/en/episode/29102',
'info_dict': {
'id': 'aloula.faulio.com_29102',
'ext': 'mp4',
'display_id': 'هذا-مكانك-03-004-v-29102',
'title': 'الحلقة 4',
'episode': 'الحلقة 4',
'description': '',
'series': 'هذا مكانك',
'season': 'Season 3',
'season_number': 3,
'episode_number': 4,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 4855,
'age_limit': 3,
},
}, {
'url': 'https://bahry.com/en/media/1191',
'info_dict': {
'id': 'bahry.faulio.com_1191',
'ext': 'mp4',
'display_id': 'Episode-4-1191',
'title': 'Episode 4',
'episode': 'Episode 4',
'description': '',
'series': 'Wild Water',
'season': 'Season 1',
'season_number': 1,
'episode_number': 4,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 1653,
'age_limit': 0,
},
}, {
'url': 'https://maraya.sba.net.ae/episode/127735',
'info_dict': {
'id': 'maraya.faulio.com_127735',
'ext': 'mp4',
'display_id': 'عبدالله-الهاجري---عبدالرحمن-المطروشي-127735',
'title': 'عبدالله الهاجري - عبدالرحمن المطروشي',
'episode': 'عبدالله الهاجري - عبدالرحمن المطروشي',
'description': 'md5:53de01face66d3d6303221e5a49388a0',
'series': 'أبناؤنا في الخارج',
'season': 'Season 3',
'season_number': 3,
'episode_number': 7,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 1316,
'age_limit': 0,
},
}, {
'url': 'https://sat7plus.org/episode/18165',
'info_dict': {
'id': 'sat7.faulio.com_18165',
'ext': 'mp4',
'display_id': 'ep-13-ADHD-18165',
'title': 'ADHD and creativity',
'episode': 'ADHD and creativity',
'description': '',
'series': 'ADHD Podcast',
'season': 'Season 1',
'season_number': 1,
'episode_number': 13,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 2492,
'age_limit': 0,
},
}, {
'url': 'https://aloula.sba.sa/en/episode/0',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
api_base = self._get_api_base(url, video_id)
video_info = self._download_json(f'{api_base}/video/{video_id}', video_id, fatal=False)
player_info = self._download_json(f'{api_base}/video/{video_id}/player', video_id)
headers = self._get_headers(url)
formats = []
subtitles = {}
if hls_url := traverse_obj(player_info, ('settings', 'protocols', 'hls', {url_or_none})):
fmts, subs = self._extract_m3u8_formats_and_subtitles(
hls_url, video_id, 'mp4', m3u8_id='hls', fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
if mpd_url := traverse_obj(player_info, ('settings', 'protocols', 'dash', {url_or_none})):
fmts, subs = self._extract_mpd_formats_and_subtitles(
mpd_url, video_id, mpd_id='dash', fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
return {
'id': f'{urllib.parse.urlparse(api_base).hostname}_{video_id}',
**traverse_obj(traverse_obj(video_info, ('blocks', 0)), {
'display_id': ('slug', {str}),
'title': ('title', {str}),
'episode': ('title', {str}),
'description': ('description', {str}),
'series': ('program_title', {str}),
'season_number': ('season_number', {int_or_none}),
'episode_number': ('episode', {int_or_none}),
'thumbnail': ('image', {url_or_none}),
'duration': ('duration', 'total', {int_or_none}),
'age_limit': ('age_rating', {int_or_none}),
}),
'formats': formats,
'subtitles': subtitles,
'http_headers': headers,
}
class FaulioLiveIE(FaulioBaseIE):
_VALID_URL = fr'{FaulioBaseIE._BASE_URL_RE}live/(?P<id>[a-zA-Z0-9-]+)'
_TESTS = [{
'url': 'https://aloula.sba.sa/live/saudiatv',
'info_dict': {
@@ -69,27 +203,24 @@ class FaulioLiveIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
-webpage = self._download_webpage(url, video_id)
+api_base = self._get_api_base(url, video_id)
-config_data = self._search_json(
-r'window\.__NUXT__\.config=', webpage, 'config', video_id, transform_source=js_to_json)
-api_base = config_data['public']['TRANSLATIONS_API_URL']
channel = traverse_obj(
self._download_json(f'{api_base}/channels', video_id),
(lambda k, v: v['url'] == video_id, any))
+headers = self._get_headers(url)
formats = []
subtitles = {}
if hls_url := traverse_obj(channel, ('streams', 'hls', {url_or_none})):
fmts, subs = self._extract_m3u8_formats_and_subtitles(
-hls_url, video_id, 'mp4', m3u8_id='hls', live=True, fatal=False)
+hls_url, video_id, 'mp4', m3u8_id='hls', live=True, fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
if mpd_url := traverse_obj(channel, ('streams', 'mpd', {url_or_none})):
fmts, subs = self._extract_mpd_formats_and_subtitles(
-mpd_url, video_id, mpd_id='dash', fatal=False)
+mpd_url, video_id, mpd_id='dash', fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
@@ -101,5 +232,6 @@ class FaulioLiveIE(InfoExtractor):
}),
'formats': formats,
'subtitles': subtitles,
+'http_headers': headers,
'is_live': True,
}

yt_dlp/extractor/francetv.py

@@ -363,13 +363,7 @@ class FranceTVSiteIE(FranceTVBaseInfoExtractor):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
nextjs_data = self._search_nextjs_v13_data(webpage, display_id)
+video_id = get_first(nextjs_data, ('options', 'id', {str}))
-if get_first(nextjs_data, ('isLive', {bool})):
-# For livestreams we need the id of the stream instead of the currently airing episode id
-video_id = get_first(nextjs_data, ('options', 'id', {str}))
-else:
-video_id = get_first(nextjs_data, ('video', ('playerReplayId', 'siId'), {str}))
if not video_id:
raise ExtractorError('Unable to extract video ID')

yt_dlp/extractor/generic.py

@@ -121,7 +121,6 @@ class GenericIE(InfoExtractor):
'id': 'cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867',
'ext': 'mp4',
'title': 'čauky lidi 70 finall',
-'age_limit': 0,
'description': 'md5:47b2673a5b76780d9d329783e1fbf5aa',
'direct': True,
'duration': 318.0,
@@ -244,7 +243,6 @@ class GenericIE(InfoExtractor):
'id': 'paris-d-moll',
'ext': 'mp4',
'title': 'Paris d-moll',
-'age_limit': 0,
'description': 'md5:319e37ea5542293db37e1e13072fe330',
'thumbnail': r're:https?://www\.filmarkivet\.se/wp-content/uploads/.+\.jpg',
},
@@ -255,7 +253,6 @@ class GenericIE(InfoExtractor):
'info_dict': {
'id': '60413035',
'title': 'Etter ett års planlegging, klaffet endelig alt: - Jeg måtte ta en liten dans',
-'age_limit': 0,
'description': 'md5:bbb4e12e42e78609a74fd421b93b1239',
'thumbnail': r're:https?://www\.dagbladet\.no/images/.+',
},
@@ -267,7 +264,6 @@ class GenericIE(InfoExtractor):
'info_dict': {
'id': 'single_clip',
'title': 'Single Clip player examples',
-'age_limit': 0,
},
'playlist_count': 3,
}, {
@@ -324,7 +320,6 @@ class GenericIE(InfoExtractor):
'id': 'videos-1',
'ext': 'mp4',
'title': 'Videos & Audio - King Machine (1)',
-'age_limit': 0,
'description': 'Browse King Machine videos & audio for sweet media. Your eyes will thank you.',
'thumbnail': r're:https?://media\.indiedb\.com/cache/images/.+\.jpg',
'_old_archive_ids': ['generic videos'],
@@ -363,7 +358,6 @@ class GenericIE(InfoExtractor):
'id': '21217',
'ext': 'mp4',
'title': '40 ночей (2016) - BogMedia.org',
-'age_limit': 0,
'description': 'md5:4e6d7d622636eb7948275432eb256dc3',
'display_id': '40-nochey-2016',
'thumbnail': r're:https?://bogmedia\.org/contents/videos_screenshots/.+\.jpg',
@@ -378,7 +372,6 @@ class GenericIE(InfoExtractor):
'id': '18485',
'ext': 'mp4',
'title': 'Клип: Ленинград - ЗОЖ скачать, смотреть онлайн | Youix.com',
-'age_limit': 0,
'display_id': 'leningrad-zoj',
'thumbnail': r're:https?://youix\.com/contents/videos_screenshots/.+\.jpg',
},
@@ -419,7 +412,6 @@ class GenericIE(InfoExtractor):
'id': '105',
'ext': 'mp4',
'title': 'Kelis - 4th Of July / Embed Player',
-'age_limit': 0,
'display_id': 'kelis-4th-of-july',
'thumbnail': r're:https?://www\.kvs-demo\.com/contents/videos_screenshots/.+\.jpg',
},
@@ -430,9 +422,8 @@ class GenericIE(InfoExtractor):
'info_dict': {
'id': 'beltzlaw-1',
'ext': 'mp4',
-'title': 'Beltz Law Group | Dallas Traffic Ticket, Accident & Criminal Attorney (1)',
+'title': str,
-'age_limit': 0,
+'description': str,
-'description': 'md5:5bdf23fcb76801dc3b31e74cabf82147',
'thumbnail': r're:https?://beltzlaw\.com/wp-content/uploads/.+\.jpg',
'timestamp': int, # varies
'upload_date': str,
@@ -447,7 +438,6 @@ class GenericIE(InfoExtractor):
'id': 'cine-1',
'ext': 'webm',
'title': 'CINE.AR (1)',
-'age_limit': 0,
'description': 'md5:a4e58f9e2291c940e485f34251898c4a',
'thumbnail': r're:https?://cine\.ar/img/.+\.png',
'_old_archive_ids': ['generic cine'],
@@ -461,7 +451,6 @@ class GenericIE(InfoExtractor):
'id': 'ipy2AcGL',
'ext': 'mp4',
'title': 'Hoe een bladvlo dit verwoestende Japanse onkruid moet vernietigen',
-'age_limit': 0,
'description': 'md5:6a9d644bab0dc2dc06849c2505d8383d',
'duration': 111.0,
'thumbnail': r're:https?://images\.nu\.nl/.+\.jpg',
@@ -477,7 +466,6 @@ class GenericIE(InfoExtractor):
'id': 'porsche-911-gt3-rs-rij-impressie-2',
'ext': 'mp4',
'title': 'Test: Porsche 911 GT3 RS - AutoWeek',
-'age_limit': 0,
'description': 'md5:a17b5bd84288448d8f11b838505718fc',
'direct': True,
'thumbnail': r're:https?://images\.autoweek\.nl/.+',
@@ -493,7 +481,6 @@ class GenericIE(InfoExtractor):
'id': 'k6gl2kt2eq',
'ext': 'mp4',
'title': 'Breezy HR\'s ATS helps you find & hire employees sooner',
-'age_limit': 0,
'average_rating': 4.5,
'description': 'md5:eee75fdd3044c538003f3be327ba01e1',
'duration': 60.1,
@@ -509,7 +496,6 @@ class GenericIE(InfoExtractor):
'id': 'videojs_hls_test',
'ext': 'mp4',
'title': 'video',
-'age_limit': 0,
'duration': 1800,
},
'params': {'skip_download': 'm3u8'},
@@ -786,8 +772,8 @@ class GenericIE(InfoExtractor):
if default_search in ('auto', 'auto_warning', 'fixup_error'):
if re.match(r'[^\s/]+\.[^\s/]+/', url):
-self.report_warning('The url doesn\'t specify the protocol, trying with http')
+self.report_warning('The url doesn\'t specify the protocol, trying with https')
-return self.url_result('http://' + url)
+return self.url_result('https://' + url)
elif default_search != 'fixup_error':
if default_search == 'auto_warning':
if re.match(r'^(?:url|URL)$', url):
@@ -800,9 +786,7 @@ class GenericIE(InfoExtractor):
return self.url_result('ytsearch:' + url)
if default_search in ('error', 'fixup_error'):
-raise ExtractorError(
+raise ExtractorError(f'{url!r} is not a valid URL', expected=True)
-f'{url!r} is not a valid URL. '
-f'Set --default-search "ytsearch" (or run yt-dlp "ytsearch:{url}" ) to search YouTube', expected=True)
else:
if ':' not in default_search:
default_search += ':'

yt_dlp/extractor/lcp.py

@@ -1,8 +1,8 @@
-from .arkena import ArkenaIE
from .common import InfoExtractor
-class LcpPlayIE(ArkenaIE): # XXX: Do not subclass from concrete IE
+class LcpPlayIE(InfoExtractor):
+_WORKING = False
_VALID_URL = r'https?://play\.lcp\.fr/embed/(?P<id>[^/]+)/(?P<account_id>[^/]+)/[^/]+/[^/]+'
_TESTS = [{
'url': 'http://play.lcp.fr/embed/327336/131064/darkmatter/0',
@@ -21,24 +21,9 @@ class LcpPlayIE(ArkenaIE): # XXX: Do not subclass from concrete IE
class LcpIE(InfoExtractor):
+_WORKING = False
_VALID_URL = r'https?://(?:www\.)?lcp\.fr/(?:[^/]+/)*(?P<id>[^/]+)'
_TESTS = [{
-# arkena embed
-'url': 'http://www.lcp.fr/la-politique-en-video/schwartzenberg-prg-preconise-francois-hollande-de-participer-une-primaire',
-'md5': 'b8bd9298542929c06c1c15788b1f277a',
-'info_dict': {
-'id': 'd56d03e9',
-'ext': 'mp4',
-'title': 'Schwartzenberg (PRG) préconise à François Hollande de participer à une primaire à gauche',
-'description': 'md5:96ad55009548da9dea19f4120c6c16a8',
-'timestamp': 1456488895,
-'upload_date': '20160226',
-},
-'params': {
-'skip_download': True,
-},
-}, {
# dailymotion live stream
'url': 'http://www.lcp.fr/le-direct',
'info_dict': {

yt_dlp/extractor/mediaklikk.py

@@ -11,121 +11,65 @@ from ..utils import (
class MediaKlikkIE(InfoExtractor):
_VALID_URL = r'''(?x)https?://(?:www\.)?
-(?:mediaklikk|m4sport|hirado|petofilive)\.hu/.*?(?:videok?|cikk)/
+(?:mediaklikk|m4sport|hirado)\.hu/.*?(?:videok?|cikk)/
(?:(?P<year>[0-9]{4})/(?P<month>[0-9]{1,2})/(?P<day>[0-9]{1,2})/)?
(?P<id>[^/#?_]+)'''
_TESTS = [{
'url': 'https://mediaklikk.hu/filmajanlo/cikk/az-ajto/', # mediaklikk
'url': 'https://mediaklikk.hu/ajanlo/video/2025/08/04/heviz-dzsungel-a-viz-alatt-ajanlo-08-10/',
'info_dict': { 'info_dict': {
'id': '668177', 'id': '8573769',
'title': 'Az ajtó', 'title': 'Hévíz - dzsungel a víz alatt Ajánló (08.10.)',
'display_id': 'az-ajto', 'display_id': 'heviz-dzsungel-a-viz-alatt-ajanlo-08-10',
'ext': 'mp4', 'ext': 'mp4',
'thumbnail': 'https://cdn.cms.mtv.hu/wp-content/uploads/sites/4/2016/01/vlcsnap-2023-07-31-14h18m52s111.jpg', 'upload_date': '20250804',
'thumbnail': 'https://cdn.cms.mtv.hu/wp-content/uploads/sites/4/2025/08/vlcsnap-2025-08-04-13h48m24s336.jpg',
}, },
}, { }, {
# (old) mediaklikk. date in html. # mediaklikk - date in html
'url': 'https://mediaklikk.hu/video/hazajaro-delnyugat-bacska-a-duna-menten-palankatol-doroszloig/', 'url': 'https://mediaklikk.hu/video/hazajaro-bilo-hegyseg-verocei-barangolas-a-drava-menten/',
'info_dict': { 'info_dict': {
'id': '4754129', 'id': '8482167',
'title': 'Hazajáró, DÉLNYUGAT-BÁCSKA A Duna mentén Palánkától Doroszlóig', 'title': 'Hazajáró, Bilo-hegység - Verőcei barangolás a Dráva mentén',
'display_id': 'hazajaro-bilo-hegyseg-verocei-barangolas-a-drava-menten',
'ext': 'mp4', 'ext': 'mp4',
'upload_date': '20210901', 'upload_date': '20250703',
'thumbnail': 'http://mediaklikk.hu/wp-content/uploads/sites/4/2014/02/hazajarouj_JO.jpg', 'thumbnail': 'https://cdn.cms.mtv.hu/wp-content/uploads/sites/4/2025/07/2024-000307-M0010-01_3700_cover_01.jpg',
}, },
'skip': 'Webpage redirects to 404 page',
}, {
# mediaklikk. date in html.
'url': 'https://mediaklikk.hu/video/hazajaro-fabova-hegyseg-kishont-koronaja/',
'info_dict': {
'id': '6696133',
'title': 'Hazajáró, Fabova-hegység - Kishont koronája',
'display_id': 'hazajaro-fabova-hegyseg-kishont-koronaja',
'ext': 'mp4',
'upload_date': '20230903',
'thumbnail': 'https://mediaklikk.hu/wp-content/uploads/sites/4/2014/02/hazajarouj_JO.jpg',
},
'skip': 'Webpage redirects to 404 page',
}, {
# (old) m4sport
'url': 'https://m4sport.hu/video/2021/08/30/gyemant-liga-parizs/',
'info_dict': {
'id': '4754999',
'title': 'Gyémánt Liga, Párizs',
'ext': 'mp4',
'upload_date': '20210830',
'thumbnail': 'http://m4sport.hu/wp-content/uploads/sites/4/2021/08/vlcsnap-2021-08-30-18h21m20s10-1024x576.jpg',
},
'skip': 'Webpage redirects to 404 page',
}, { }, {
# m4sport # m4sport
'url': 'https://m4sport.hu/sportkozvetitesek/video/2023/09/08/atletika-gyemant-liga-brusszel/', 'url': 'https://m4sport.hu/video/2025/08/07/holnap-kezdodik-a-12-vilagjatekok/',
'info_dict': { 'info_dict': {
'id': '6711136', 'id': '8581887',
'title': 'Atlétika Gyémánt Liga, Brüsszel', 'title': 'Holnap kezdődik a 12. Világjátékok',
'display_id': 'atletika-gyemant-liga-brusszel', 'display_id': 'holnap-kezdodik-a-12-vilagjatekok',
'ext': 'mp4', 'ext': 'mp4',
'upload_date': '20230908', 'upload_date': '20250807',
'thumbnail': 'https://m4sport.hu/wp-content/uploads/sites/4/2023/09/vlcsnap-2023-09-08-22h43m18s691.jpg', 'thumbnail': 'https://cdn.cms.mtv.hu/wp-content/uploads/sites/4/2025/08/vlcsnap-2025-08-06-20h30m48s817.jpg',
}, },
'skip': 'Webpage redirects to 404 page',
}, {
# m4sport with *video/ url and no date
'url': 'https://m4sport.hu/bl-video/real-madrid-chelsea-1-1/',
'info_dict': {
'id': '4492099',
'title': 'Real Madrid - Chelsea 1-1',
'display_id': 'real-madrid-chelsea-1-1',
'ext': 'mp4',
'thumbnail': 'https://m4sport.hu/wp-content/uploads/sites/4/2021/04/Sequence-01.Still001-1024x576.png',
},
'skip': 'Webpage redirects to 404 page',
}, {
# (old) hirado
'url': 'https://hirado.hu/videok/felteteleket-szabott-a-fovaros/',
'info_dict': {
'id': '4760120',
'title': 'Feltételeket szabott a főváros',
'ext': 'mp4',
'thumbnail': 'http://hirado.hu/wp-content/uploads/sites/4/2021/09/vlcsnap-2021-09-01-20h20m37s165.jpg',
},
'skip': 'Webpage redirects to video list page',
}, { }, {
# hirado # hirado
'url': 'https://hirado.hu/belfold/video/2023/09/11/marad-az-eves-elszamolas-a-napelemekre-beruhazo-csaladoknal', 'url': 'https://hirado.hu/video/2025/08/09/idojaras-jelentes-2025-augusztus-9-2230',
'info_dict': { 'info_dict': {
'id': '6716068', 'id': '8592033',
'title': 'Marad az éves elszámolás a napelemekre beruházó családoknál', 'title': 'Időjárás-jelentés, 2025. augusztus 9. 22:30',
'display_id': 'marad-az-eves-elszamolas-a-napelemekre-beruhazo-csaladoknal', 'display_id': 'idojaras-jelentes-2025-augusztus-9-2230',
'ext': 'mp4', 'ext': 'mp4',
'upload_date': '20230911', 'upload_date': '20250809',
'thumbnail': 'https://hirado.hu/wp-content/uploads/sites/4/2023/09/vlcsnap-2023-09-11-09h16m09s882.jpg', 'thumbnail': 'https://cdn.cms.mtv.hu/wp-content/uploads/sites/4/2025/08/Idojaras-jelentes-35-1.jpg',
}, },
'skip': 'Webpage redirects to video list page',
}, { }, {
# (old) petofilive # hirado - subcategory
'url': 'https://petofilive.hu/video/2021/06/07/tha-shudras-az-akusztikban/', 'url': 'https://hirado.hu/belfold/video/2025/08/09/nyitott-porta-napok-2025/',
'info_dict': { 'info_dict': {
'id': '4571948', 'id': '8590581',
'title': 'Tha Shudras az Akusztikban', 'title': 'Nyitott Porta Napok 2025',
'display_id': 'nyitott-porta-napok-2025',
'ext': 'mp4', 'ext': 'mp4',
'upload_date': '20210607', 'upload_date': '20250809',
'thumbnail': 'http://petofilive.hu/wp-content/uploads/sites/4/2021/06/vlcsnap-2021-06-07-22h14m23s915-1024x576.jpg', 'thumbnail': 'https://cdn.cms.mtv.hu/wp-content/uploads/sites/4/2025/08/vlcsnap-2025-08-09-10h35m01s887.jpg',
}, },
'skip': 'Webpage redirects to empty page',
}, {
# petofilive
'url': 'https://petofilive.hu/video/2023/09/09/futball-fesztival-a-margitszigeten/',
'info_dict': {
'id': '6713233',
'title': 'Futball Fesztivál a Margitszigeten',
'display_id': 'futball-fesztival-a-margitszigeten',
'ext': 'mp4',
'upload_date': '20230909',
'thumbnail': 'https://petofilive.hu/wp-content/uploads/sites/4/2023/09/Clipboard11-2.jpg',
},
'skip': 'Webpage redirects to video list page',
}] }]
def _real_extract(self, url): def _real_extract(self, url):
@@ -133,9 +77,8 @@ class MediaKlikkIE(InfoExtractor):
display_id = mobj.group('id')
webpage = self._download_webpage(url, display_id)
-player_data_str = self._html_search_regex(
+player_data = self._search_json(
-r'mtva_player_manager\.player\(document.getElementById\(.*\),\s?(\{.*\}).*\);', webpage, 'player data')
+r'loadPlayer\((?:\s*["\'][^"\']+["\']\s*,)?', webpage, 'player data', mobj)
-player_data = self._parse_json(player_data_str, display_id, urllib.parse.unquote)
video_id = str(player_data['contentId'])
title = player_data.get('title') or self._og_search_title(webpage, fatal=False) or \
self._html_search_regex(r'<h\d+\b[^>]+\bclass="article_title">([^<]+)<', webpage, 'title')
@@ -146,7 +89,7 @@ class MediaKlikkIE(InfoExtractor):
upload_date = unified_strdate(self._html_search_regex(
r'<p+\b[^>]+\bclass="article_date">([^<]+)<', webpage, 'upload date', default=None))
-player_data['video'] = player_data.pop('token')
+player_data['video'] = urllib.parse.unquote(player_data.pop('token'))
player_page = self._download_webpage(
'https://player.mediaklikk.hu/playernew/player.php', video_id,
query=player_data, headers={'Referer': url})

yt_dlp/extractor/medialaan.py

@@ -1,15 +1,73 @@
import re
from .common import InfoExtractor
from ..utils import (
+clean_html,
+determine_ext,
extract_attributes,
int_or_none,
-mimetype2ext,
+parse_resolution,
-parse_iso8601,
+str_or_none,
+url_or_none,
)
+from ..utils.traversal import find_elements, traverse_obj
-class MedialaanIE(InfoExtractor):
+class MedialaanBaseIE(InfoExtractor):
def _extract_from_mychannels_api(self, mychannels_id):
webpage = self._download_webpage(
f'https://mychannels.video/embed/{mychannels_id}', mychannels_id)
brand_config = self._search_json(
r'window\.mychannels\.brand_config\s*=', webpage, 'brand config', mychannels_id)
response = self._download_json(
f'https://api.mychannels.world/v1/embed/video/{mychannels_id}',
mychannels_id, headers={'X-Mychannels-Brand': brand_config['brand']})
formats = []
for stream in traverse_obj(response, (
'streams', lambda _, v: url_or_none(v['url']),
)):
source_url = stream['url']
ext = determine_ext(source_url)
if ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
source_url, mychannels_id, 'mp4', m3u8_id='hls', fatal=False))
else:
format_id = traverse_obj(stream, ('quality', {str}))
formats.append({
'ext': ext,
'format_id': format_id,
'url': source_url,
**parse_resolution(format_id),
})
return {
'id': mychannels_id,
'formats': formats,
**traverse_obj(response, {
'title': ('title', {clean_html}),
'description': ('description', {clean_html}, filter),
'duration': ('durationMs', {int_or_none(scale=1000)}, {lambda x: x if x >= 0 else None}),
'genres': ('genre', 'title', {str}, filter, all, filter),
'is_live': ('live', {bool}),
'release_timestamp': ('publicationTimestampMs', {int_or_none(scale=1000)}),
'tags': ('tags', ..., 'title', {str}, filter, all, filter),
'thumbnail': ('image', 'baseUrl', {url_or_none}),
}),
**traverse_obj(response, ('channel', {
'channel': ('title', {clean_html}),
'channel_id': ('id', {str_or_none}),
})),
**traverse_obj(response, ('organisation', {
'uploader': ('title', {clean_html}),
'uploader_id': ('id', {str_or_none}),
})),
**traverse_obj(response, ('show', {
'series': ('title', {clean_html}),
'series_id': ('id', {str_or_none}),
})),
}
class MedialaanIE(MedialaanBaseIE):
_VALID_URL = r'''(?x) _VALID_URL = r'''(?x)
https?:// https?://
(?: (?:
@@ -32,7 +90,7 @@ class MedialaanIE(InfoExtractor):
tubantia|
volkskrant
)\.nl
-)/video/(?:[^/]+/)*[^/?&#]+~p
+)/videos?/(?:[^/?#]+/)*[^/?&#]+(?:-|~p)
)
(?P<id>\d+)
'''
@@ -42,19 +100,83 @@ class MedialaanIE(InfoExtractor):
'id': '193993', 'id': '193993',
'ext': 'mp4', 'ext': 'mp4',
'title': 'De terugkeer van Ally de Aap en wie vertrekt er nog bij NAC?', 'title': 'De terugkeer van Ally de Aap en wie vertrekt er nog bij NAC?',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+', 'description': 'In een nieuwe Gegenpressing video bespreken Yadran Blanco en Dennis Kas het nieuws omrent NAC.',
'timestamp': 1611663540,
'upload_date': '20210126',
'duration': 238, 'duration': 238,
}, 'channel': 'BN DeStem',
'params': { 'channel_id': '418',
'skip_download': True, 'genres': ['Sports'],
'release_date': '20210126',
'release_timestamp': 1611663540,
'series': 'Korte Reportage',
'series_id': '972',
'tags': 'count:2',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'BN De Stem',
'uploader_id': '26',
}, },
}, { }, {
'url': 'https://www.gelderlander.nl/video/kanalen/degelderlander~c320/series/snel-nieuws~s984/noodbevel-in-doetinchem-politie-stuurt-mensen-centrum-uit~p194093', 'url': 'https://www.gelderlander.nl/video/kanalen/degelderlander~c320/series/snel-nieuws~s984/noodbevel-in-doetinchem-politie-stuurt-mensen-centrum-uit~p194093',
'only_matching': True, 'info_dict': {
'id': '194093',
'ext': 'mp4',
'title': 'Noodbevel in Doetinchem: politie stuurt mensen centrum uit',
'description': 'md5:77e85b2cb26cfff9dc1fe2b1db524001',
'duration': 44,
'channel': 'De Gelderlander',
'channel_id': '320',
'genres': ['News'],
'release_date': '20210126',
'release_timestamp': 1611690600,
'series': 'Snel Nieuws',
'series_id': '984',
'tags': 'count:1',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'De Gelderlander',
'uploader_id': '25',
},
}, { }, {
'url': 'https://embed.mychannels.video/sdk/production/193993?options=TFTFF_default', 'url': 'https://www.7sur7.be/videos/production/lla-tendance-tiktok-qui-enflamme-lespagne-707650',
'info_dict': {
'id': '707650',
'ext': 'mp4',
'title': 'La tendance TikTok qui enflamme l’Espagne',
'description': 'md5:c7ec4cb733190f227fc8935899f533b5',
'duration': 70,
'channel': 'Lifestyle',
'channel_id': '770',
'genres': ['Beauty & Lifestyle'],
'release_date': '20240906',
'release_timestamp': 1725617330,
'series': 'Lifestyle',
'series_id': '1848',
'tags': 'count:1',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': '7sur7',
'uploader_id': '67',
},
}, {
'url': 'https://mychannels.video/embed/313117',
'info_dict': {
'id': '313117',
'ext': 'mp4',
'title': str,
'description': 'md5:255e2e52f6fe8a57103d06def438f016',
'channel': 'AD',
'channel_id': '238',
'genres': ['News'],
'live_status': 'is_live',
'release_date': '20241225',
'release_timestamp': 1735169425,
'series': 'Nieuws Update',
'series_id': '3337',
'tags': 'count:1',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'AD',
'uploader_id': '1',
},
'params': {'skip_download': 'Livestream'},
}, {
'url': 'https://embed.mychannels.video/sdk/production/193993',
'only_matching': True, 'only_matching': True,
}, { }, {
'url': 'https://embed.mychannels.video/script/production/193993', 'url': 'https://embed.mychannels.video/script/production/193993',
@@ -62,9 +184,6 @@ class MedialaanIE(InfoExtractor):
}, { }, {
'url': 'https://embed.mychannels.video/production/193993', 'url': 'https://embed.mychannels.video/production/193993',
'only_matching': True, 'only_matching': True,
}, {
'url': 'https://mychannels.video/embed/193993',
'only_matching': True,
}, { }, {
'url': 'https://embed.mychannels.video/embed/193993', 'url': 'https://embed.mychannels.video/embed/193993',
'only_matching': True, 'only_matching': True,
@@ -75,51 +194,31 @@ class MedialaanIE(InfoExtractor):
'id': '1576607', 'id': '1576607',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Tom Waes blaastest', 'title': 'Tom Waes blaastest',
'channel': 'De Morgen',
'channel_id': '352',
'description': 'Tom Waes werkt mee aan een alcoholcampagne op Werchter',
'duration': 62, 'duration': 62,
'genres': ['News'],
'release_date': '20250705',
'release_timestamp': 1751730795,
'series': 'Nieuwsvideo\'s',
'series_id': '1683',
'tags': 'count:1',
'thumbnail': r're:https?://video-images\.persgroep\.be/aws_generated.+\.jpg', 'thumbnail': r're:https?://video-images\.persgroep\.be/aws_generated.+\.jpg',
'timestamp': 1751730795, 'uploader': 'De Morgen',
'upload_date': '20250705', 'uploader_id': '17',
}, },
'params': {'extractor_args': {'generic': {'impersonate': ['chrome']}}}, 'params': {'extractor_args': {'generic': {'impersonate': ['chrome']}}},
}] }]
@classmethod @classmethod
def _extract_embed_urls(cls, url, webpage): def _extract_embed_urls(cls, url, webpage):
entries = [] yield from traverse_obj(webpage, (
for element in re.findall(r'(<div[^>]+data-mychannels-type="video"[^>]*>)', webpage): {find_elements(tag='div', attr='data-mychannels-type', value='video', html=True)},
mychannels_id = extract_attributes(element).get('data-mychannels-id') ..., {extract_attributes}, 'data-mychannels-id', {str}, filter,
if mychannels_id: {lambda x: f'https://mychannels.video/embed/{x}'}))
entries.append('https://mychannels.video/embed/' + mychannels_id)
return entries
def _real_extract(self, url): def _real_extract(self, url):
production_id = self._match_id(url) mychannels_id = self._match_id(url)
production = self._download_json(
'https://embed.mychannels.video/sdk/production/' + production_id,
production_id, query={'options': 'UUUU_default'})['productions'][0]
title = production['title']
formats = [] return self._extract_from_mychannels_api(mychannels_id)
for source in (production.get('sources') or []):
src = source.get('src')
if not src:
continue
ext = mimetype2ext(source.get('type'))
if ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
src, production_id, 'mp4', 'm3u8_native',
m3u8_id='hls', fatal=False))
else:
formats.append({
'ext': ext,
'url': src,
})
return {
'id': production_id,
'title': title,
'formats': formats,
'thumbnail': production.get('posterUrl'),
'timestamp': parse_iso8601(production.get('publicationDate'), ' '),
'duration': int_or_none(production.get('duration')) or None,
}

yt_dlp/extractor/mtv.py

@@ -1,652 +1,268 @@
import re import base64
import xml.etree.ElementTree import json
import time
import urllib.parse
from .common import InfoExtractor from .common import InfoExtractor
from ..networking import HEADRequest, Request from ..networking.exceptions import HTTPError
from ..utils import ( from ..utils import (
ExtractorError, ExtractorError,
RegexNotFoundError,
find_xpath_attr,
fix_xml_ampersands,
float_or_none, float_or_none,
int_or_none, int_or_none,
join_nonempty, js_to_json,
strip_or_none, jwt_decode_hs256,
timeconvert, parse_iso8601,
try_get, parse_qs,
unescapeHTML, update_url,
update_url_query, update_url_query,
url_basename, url_or_none,
xpath_text,
) )
from ..utils.traversal import require, traverse_obj
def _media_xml_tag(tag): class MTVServicesBaseIE(InfoExtractor):
return f'{{http://search.yahoo.com/mrss/}}{tag}' _GEO_BYPASS = False
_GEO_COUNTRIES = ['US']
_CACHE_SECTION = 'mtvservices'
class MTVServicesInfoExtractor(InfoExtractor): _ACCESS_TOKEN_KEY = 'access'
_MOBILE_TEMPLATE = None _REFRESH_TOKEN_KEY = 'refresh'
_LANG = None _MEDIA_TOKEN_KEY = 'media'
_token_cache = {}
@staticmethod @staticmethod
def _id_from_uri(uri): def _jwt_is_expired(token):
return uri.split(':')[-1] return jwt_decode_hs256(token)['exp'] - time.time() < 120
@staticmethod @staticmethod
def _remove_template_parameter(url): def _get_auth_suite_data(config):
# Remove the templates, like &device={device} return traverse_obj(config, {
return re.sub(r'&[^=]*?={.*?}(?=(&|$))', '', url) 'clientId': ('clientId', {str}),
'countryCode': ('countryCode', {str}),
})
def _get_feed_url(self, uri, url=None): def _call_auth_api(self, path, config, display_id=None, note=None, data=None, headers=None, query=None):
return self._FEED_URL headers = {
'Accept': 'application/json',
def _get_thumbnail_url(self, uri, itemdoc): 'Client-Description': 'deviceName=Chrome Windows;deviceType=desktop;system=Windows NT 10.0',
search_path = '{}/{}'.format(_media_xml_tag('group'), _media_xml_tag('thumbnail')) 'Api-Version': '2025-07-09',
thumb_node = itemdoc.find(search_path) **(headers or {}),
if thumb_node is None:
return None
return thumb_node.get('url') or thumb_node.text or None
def _extract_mobile_video_formats(self, mtvn_id):
webpage_url = self._MOBILE_TEMPLATE % mtvn_id
req = Request(webpage_url)
# Otherwise we get a webpage that would execute some javascript
req.headers['User-Agent'] = 'curl/7'
webpage = self._download_webpage(req, mtvn_id,
'Downloading mobile page')
metrics_url = unescapeHTML(self._search_regex(r'<a href="(http://metrics.+?)"', webpage, 'url'))
req = HEADRequest(metrics_url)
response = self._request_webpage(req, mtvn_id, 'Resolving url')
url = response.url
# Transform the url to get the best quality:
url = re.sub(r'.+pxE=mp4', 'http://mtvnmobile.vo.llnwd.net/kip0/_pxn=0+_pxK=18639+_pxE=mp4', url, count=1)
return [{'url': url, 'ext': 'mp4'}]
def _extract_video_formats(self, mdoc, mtvn_id, video_id):
if re.match(r'.*/(error_country_block\.swf|geoblock\.mp4|copyright_error\.flv(?:\?geo\b.+?)?)$', mdoc.find('.//src').text) is not None:
if mtvn_id is not None and self._MOBILE_TEMPLATE is not None:
self.to_screen('The normal version is not available from your '
'country, trying with the mobile version')
return self._extract_mobile_video_formats(mtvn_id)
raise ExtractorError('This video is not available from your country.',
expected=True)
formats = []
for rendition in mdoc.findall('.//rendition'):
if rendition.get('method') == 'hls':
hls_url = rendition.find('./src').text
formats.extend(self._extract_m3u8_formats(
hls_url, video_id, ext='mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
else:
# fms
try:
_, _, ext = rendition.attrib['type'].partition('/')
rtmp_video_url = rendition.find('./src').text
if 'error_not_available.swf' in rtmp_video_url:
raise ExtractorError(
f'{self.IE_NAME} said: video is not available',
expected=True)
if rtmp_video_url.endswith('siteunavail.png'):
continue
formats.extend([{
'ext': 'flv' if rtmp_video_url.startswith('rtmp') else ext,
'url': rtmp_video_url,
'format_id': join_nonempty(
'rtmp' if rtmp_video_url.startswith('rtmp') else None,
rendition.get('bitrate')),
'width': int(rendition.get('width')),
'height': int(rendition.get('height')),
}])
except (KeyError, TypeError):
raise ExtractorError('Invalid rendition field.')
return formats
def _extract_subtitles(self, mdoc, mtvn_id):
subtitles = {}
for transcript in mdoc.findall('.//transcript'):
if transcript.get('kind') != 'captions':
continue
lang = transcript.get('srclang')
for typographic in transcript.findall('./typographic'):
sub_src = typographic.get('src')
if not sub_src:
continue
ext = typographic.get('format')
if ext == 'cea-608':
ext = 'scc'
subtitles.setdefault(lang, []).append({
'url': str(sub_src),
'ext': ext,
})
return subtitles
def _get_video_info(self, itemdoc, use_hls=True):
uri = itemdoc.find('guid').text
video_id = self._id_from_uri(uri)
self.report_extraction(video_id)
content_el = itemdoc.find('{}/{}'.format(_media_xml_tag('group'), _media_xml_tag('content')))
mediagen_url = self._remove_template_parameter(content_el.attrib['url'])
mediagen_url = mediagen_url.replace('device={device}', '')
if 'acceptMethods' not in mediagen_url:
mediagen_url += '&' if '?' in mediagen_url else '?'
mediagen_url += 'acceptMethods='
mediagen_url += 'hls' if use_hls else 'fms'
mediagen_doc = self._download_xml(
mediagen_url, video_id, 'Downloading video urls', fatal=False)
if not isinstance(mediagen_doc, xml.etree.ElementTree.Element):
return None
item = mediagen_doc.find('./video/item')
if item is not None and item.get('type') == 'text':
message = f'{self.IE_NAME} returned error: '
if item.get('code') is not None:
message += '{} - '.format(item.get('code'))
message += item.text
raise ExtractorError(message, expected=True)
description = strip_or_none(xpath_text(itemdoc, 'description'))
timestamp = timeconvert(xpath_text(itemdoc, 'pubDate'))
title_el = None
if title_el is None:
title_el = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:video_title')
if title_el is None:
title_el = itemdoc.find('.//{http://search.yahoo.com/mrss/}title')
if title_el is None:
title_el = itemdoc.find('.//title')
if title_el.text is None:
title_el = None
title = title_el.text
if title is None:
raise ExtractorError('Could not find video title')
title = title.strip()
series = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:franchise')
season = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:seasonN')
episode = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:episodeN')
series = series.text if series is not None else None
season = season.text if season is not None else None
episode = episode.text if episode is not None else None
if season and episode:
# episode number includes season, so remove it
episode = re.sub(rf'^{season}', '', episode)
# This a short id that's used in the webpage urls
mtvn_id = None
mtvn_id_node = find_xpath_attr(itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:id')
if mtvn_id_node is not None:
mtvn_id = mtvn_id_node.text
formats = self._extract_video_formats(mediagen_doc, mtvn_id, video_id)
# Some parts of complete video may be missing (e.g. missing Act 3 in
# http://www.southpark.de/alle-episoden/s14e01-sexual-healing)
if not formats:
return None
return {
'title': title,
'formats': formats,
'subtitles': self._extract_subtitles(mediagen_doc, mtvn_id),
'id': video_id,
'thumbnail': self._get_thumbnail_url(uri, itemdoc),
'description': description,
'duration': float_or_none(content_el.attrib.get('duration')),
'timestamp': timestamp,
'series': series,
'season_number': int_or_none(season),
'episode_number': int_or_none(episode),
} }
if data is not None:
headers['Content-Type'] = 'application/json'
if isinstance(data, dict):
data = json.dumps(data, separators=(',', ':')).encode()
def _get_feed_query(self, uri): return self._download_json(
data = {'uri': uri} f'https://auth.mtvnservices.com/{path}', display_id,
if self._LANG: note=note or 'Calling authentication API', data=data,
data['lang'] = self._LANG headers=headers, query={**self._get_auth_suite_data(config), **(query or {})})
return data
def _get_videos_info(self, uri, use_hls=True, url=None): def _get_fresh_access_token(self, config, display_id=None, force_refresh=False):
video_id = self._id_from_uri(uri) resource_id = config['resourceId']
feed_url = self._get_feed_url(uri, url) # resource_id should already be in _token_cache since _get_media_token is the caller
info_url = update_url_query(feed_url, self._get_feed_query(uri)) tokens = self._token_cache[resource_id]
return self._get_videos_info_from_url(info_url, video_id, use_hls)
def _get_videos_info_from_url(self, url, video_id, use_hls=True): access_token = tokens.get(self._ACCESS_TOKEN_KEY)
idoc = self._download_xml( if not force_refresh and access_token and not self._jwt_is_expired(access_token):
url, video_id, return access_token
'Downloading info', transform_source=fix_xml_ampersands)
title = xpath_text(idoc, './channel/title') if self._REFRESH_TOKEN_KEY not in tokens:
description = xpath_text(idoc, './channel/description') response = self._call_auth_api(
'accessToken', config, display_id, 'Retrieving auth tokens', data=b'')
else:
response = self._call_auth_api(
'accessToken/refresh', config, display_id, 'Refreshing auth tokens',
data={'refreshToken': tokens[self._REFRESH_TOKEN_KEY]},
headers={'Authorization': f'Bearer {access_token}'})
entries = [] tokens[self._ACCESS_TOKEN_KEY] = response['applicationAccessToken']
for item in idoc.findall('.//item'): tokens[self._REFRESH_TOKEN_KEY] = response['deviceRefreshToken']
info = self._get_video_info(item, use_hls) self.cache.store(self._CACHE_SECTION, resource_id, tokens)
if info:
entries.append(info)
# TODO: should be multi-video return tokens[self._ACCESS_TOKEN_KEY]
return self.playlist_result(
entries, playlist_title=title, playlist_description=description)
def _extract_triforce_mgid(self, webpage, data_zone=None, video_id=None): def _get_media_token(self, video_config, config, display_id=None):
triforce_feed = self._parse_json(self._search_regex( resource_id = config['resourceId']
r'triforceManifestFeed\s*=\s*({.+?})\s*;\s*\n', webpage, if resource_id in self._token_cache:
'triforce feed', default='{}'), video_id, fatal=False) tokens = self._token_cache[resource_id]
else:
tokens = self._token_cache[resource_id] = self.cache.load(self._CACHE_SECTION, resource_id) or {}
data_zone = self._search_regex( media_token = tokens.get(self._MEDIA_TOKEN_KEY)
r'data-zone=(["\'])(?P<zone>.+?_lc_promo.*?)\1', webpage, if media_token and not self._jwt_is_expired(media_token):
'data zone', default=data_zone, group='zone') return media_token
feed_url = try_get( access_token = self._get_fresh_access_token(config, display_id)
triforce_feed, lambda x: x['manifest']['zones'][data_zone]['feed'], if not jwt_decode_hs256(access_token).get('accessMethods'):
str) # MTVServices uses a custom AdobePass oauth flow which is incompatible with AdobePassIE
if not feed_url: mso_id = self.get_param('ap_mso')
return if not mso_id:
raise ExtractorError(
'This video is only available for users of participating TV providers. '
'Use --ap-mso to specify Adobe Pass Multiple-system operator Identifier and pass '
'cookies from a browser session where you are signed-in to your provider.', expected=True)
feed = self._download_json(feed_url, video_id, fatal=False) auth_suite_data = json.dumps(
if not feed: self._get_auth_suite_data(config), separators=(',', ':')).encode()
return callback_url = update_url_query(config['callbackURL'], {
'authSuiteData': urllib.parse.quote(base64.b64encode(auth_suite_data).decode()),
'mvpdCode': mso_id,
})
auth_url = self._call_auth_api(
f'mvpd/{mso_id}/login', config, display_id,
'Retrieving provider authentication URL',
query={'callbackUrl': callback_url},
headers={'Authorization': f'Bearer {access_token}'})['authenticationUrl']
res = self._download_webpage_handle(auth_url, display_id, 'Downloading provider auth page')
# XXX: The following "provider-specific code" likely only works if mso_id == Comcast_SSO
# BEGIN provider-specific code
redirect_url = self._search_json(
r'initInterstitialRedirect\(', res[0], 'redirect JSON',
display_id, transform_source=js_to_json)['continue']
urlh = self._request_webpage(redirect_url, display_id, 'Requesting provider redirect page')
authorization_code = parse_qs(urlh.url)['authorizationCode'][-1]
# END provider-specific code
self._call_auth_api(
f'access/mvpd/{mso_id}', config, display_id,
'Submitting authorization code to MTVNServices',
query={'authorizationCode': authorization_code}, data=b'',
headers={'Authorization': f'Bearer {access_token}'})
access_token = self._get_fresh_access_token(config, display_id, force_refresh=True)
return try_get(feed, lambda x: x['result']['data']['id'], str) tokens[self._MEDIA_TOKEN_KEY] = self._call_auth_api(
'mediaToken', config, display_id, 'Fetching media token', data={
'content': {('id' if k == 'videoId' else k): v for k, v in video_config.items()},
'resourceId': resource_id,
}, headers={'Authorization': f'Bearer {access_token}'})['mediaToken']
@staticmethod self.cache.store(self._CACHE_SECTION, resource_id, tokens)
def _extract_child_with_type(parent, t): return tokens[self._MEDIA_TOKEN_KEY]
for c in parent['children']:
if c.get('type') == t: def _real_extract(self, url):
return c display_id = self._match_id(url)
def _extract_mgid(self, webpage):
try: try:
# the url can be http://media.mtvnservices.com/fb/{mgid}.swf data = self._download_json(
# or http://media.mtvnservices.com/{mgid} update_url(url, query=None), display_id,
og_url = self._og_search_video_url(webpage) query={'json': 'true'})
mgid = url_basename(og_url) except ExtractorError as e:
if mgid.endswith('.swf'): if isinstance(e.cause, HTTPError) and e.cause.status == 404 and not self.suitable(e.cause.response.url):
mgid = mgid[:-4] self.raise_geo_restricted(countries=self._GEO_COUNTRIES)
except RegexNotFoundError: raise
mgid = None
if mgid is None or ':' not in mgid: flex_wrapper = traverse_obj(data, (
mgid = self._search_regex( 'children', lambda _, v: v['type'] == 'MainContainer',
[r'data-mgid="(.*?)"', r'swfobject\.embedSWF\(".*?(mgid:.*?)"'], (None, ('children', lambda _, v: v['type'] == 'AviaWrapper')),
webpage, 'mgid', default=None) 'children', lambda _, v: v['type'] == 'FlexWrapper', {dict}, any))
video_detail = traverse_obj(flex_wrapper, (
(None, ('children', lambda _, v: v['type'] == 'AuthSuiteWrapper')),
'children', lambda _, v: v['type'] == 'Player',
'props', 'videoDetail', {dict}, any))
if not video_detail:
video_detail = traverse_obj(data, (
'children', ..., ('handleTVEAuthRedirection', None),
'videoDetail', {dict}, any, {require('video detail')}))
if not mgid: mgid = video_detail['mgid']
sm4_embed = self._html_search_meta( video_id = mgid.rpartition(':')[2]
'sm4:video:embed', webpage, 'sm4 embed', default='') service_url = traverse_obj(video_detail, ('videoServiceUrl', {url_or_none}, {update_url(query=None)}))
mgid = self._search_regex( if not service_url:
r'embed/(mgid:.+?)["\'&?/]', sm4_embed, 'mgid', default=None) raise ExtractorError('This content is no longer available', expected=True)
if not mgid: headers = {}
mgid = self._extract_triforce_mgid(webpage) if video_detail.get('authRequired'):
# The vast majority of provider-locked content has been moved to Paramount+
# BetIE is the only extractor that is currently known to reach this code path
video_config = traverse_obj(flex_wrapper, (
'children', lambda _, v: v['type'] == 'AuthSuiteWrapper',
'props', 'videoConfig', {dict}, any, {require('video config')}))
config = traverse_obj(data, (
'props', 'authSuiteConfig', {dict}, {require('auth suite config')}))
headers['X-VIA-TVE-MEDIATOKEN'] = self._get_media_token(video_config, config, display_id)
if not mgid: stream_info = self._download_json(
data = self._parse_json(self._search_regex( service_url, video_id, 'Downloading API JSON', 'Unable to download API JSON',
r'__DATA__\s*=\s*({.+?});', webpage, 'data'), None) query={'clientPlatform': 'desktop'}, headers=headers)['stitchedstream']
main_container = self._extract_child_with_type(data, 'MainContainer')
ab_testing = self._extract_child_with_type(main_container, 'ABTesting')
video_player = self._extract_child_with_type(ab_testing or main_container, 'VideoPlayer')
if video_player:
mgid = try_get(video_player, lambda x: x['props']['media']['video']['config']['uri'])
else:
flex_wrapper = self._extract_child_with_type(ab_testing or main_container, 'FlexWrapper')
auth_suite_wrapper = self._extract_child_with_type(flex_wrapper, 'AuthSuiteWrapper')
player = self._extract_child_with_type(auth_suite_wrapper or flex_wrapper, 'Player')
if player:
mgid = try_get(player, lambda x: x['props']['videoDetail']['mgid'])
if not mgid: manifest_type = stream_info['manifesttype']
raise ExtractorError('Could not extract mgid') if manifest_type == 'hls':
formats, subtitles = self._extract_m3u8_formats_and_subtitles(
stream_info['source'], video_id, 'mp4', m3u8_id=manifest_type)
elif manifest_type == 'dash':
formats, subtitles = self._extract_mpd_formats_and_subtitles(
stream_info['source'], video_id, mpd_id=manifest_type)
else:
self.raise_no_formats(f'Unsupported manifest type "{manifest_type}"')
formats, subtitles = [], {}
return mgid return {
**traverse_obj(video_detail, {
def _real_extract(self, url): 'title': ('title', {str}),
title = url_basename(url) 'channel': ('channel', 'name', {str}),
webpage = self._download_webpage(url, title) 'thumbnails': ('images', ..., {'url': ('url', {url_or_none})}),
mgid = self._extract_mgid(webpage) 'description': (('fullDescription', 'description'), {str}, any),
return self._get_videos_info(mgid, url=url) 'series': ('parentEntity', 'title', {str}),
'season_number': ('seasonNumber', {int_or_none}),
'episode_number': ('episodeAiringOrder', {int_or_none}),
'duration': ('duration', 'milliseconds', {float_or_none(scale=1000)}),
'timestamp': ((
('originalPublishDate', {parse_iso8601}),
('publishDate', 'timestamp', {int_or_none})), any),
'release_timestamp': ((
('originalAirDate', {parse_iso8601}),
('airDate', 'timestamp', {int_or_none})), any),
}),
'id': video_id,
'display_id': display_id,
'formats': formats,
'subtitles': subtitles,
}
class MTVServicesEmbeddedIE(MTVServicesInfoExtractor): class MTVIE(MTVServicesBaseIE):
IE_NAME = 'mtvservices:embedded'
_VALID_URL = r'https?://media\.mtvnservices\.com/embed/(?P<mgid>.+?)(\?|/|$)'
_EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//media\.mtvnservices\.com/embed/.+?)\1']
_TEST = {
# From http://www.thewrap.com/peter-dinklage-sums-up-game-of-thrones-in-45-seconds-video/
'url': 'http://media.mtvnservices.com/embed/mgid:uma:video:mtv.com:1043906/cp~vid%3D1043906%26uri%3Dmgid%3Auma%3Avideo%3Amtv.com%3A1043906',
'md5': 'cb349b21a7897164cede95bd7bf3fbb9',
'info_dict': {
'id': '1043906',
'ext': 'mp4',
'title': 'Peter Dinklage Sums Up \'Game Of Thrones\' In 45 Seconds',
'description': '"Sexy sexy sexy, stabby stabby stabby, beautiful language," says Peter Dinklage as he tries summarizing "Game of Thrones" in under a minute.',
'timestamp': 1400126400,
'upload_date': '20140515',
},
}
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge', video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
def _real_extract(self, url):
mobj = self._match_valid_url(url)
mgid = mobj.group('mgid')
return self._get_videos_info(mgid)
class MTVIE(MTVServicesInfoExtractor):
IE_NAME = 'mtv' IE_NAME = 'mtv'
_VALID_URL = r'https?://(?:www\.)?mtv\.com/(?:video-clips|(?:full-)?episodes)/(?P<id>[^/?#.]+)' _VALID_URL = r'https?://(?:www\.)?mtv\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
_FEED_URL = 'http://www.mtv.com/feeds/mrss/'
_TESTS = [{ _TESTS = [{
'url': 'http://www.mtv.com/video-clips/vl8qof/unlocking-the-truth-trailer', 'url': 'https://www.mtv.com/video-clips/syolsj',
'md5': '1edbcdf1e7628e414a8c5dcebca3d32b',
'info_dict': { 'info_dict': {
'id': '5e14040d-18a4-47c4-a582-43ff602de88e', 'id': '213ea7f8-bac7-4a43-8cd5-8d8cb8c8160f',
'ext': 'mp4', 'ext': 'mp4',
'title': 'Unlocking The Truth|July 18, 2016|1|101|Trailer', 'display_id': 'syolsj',
'description': '"Unlocking the Truth" premieres August 17th at 11/10c.', 'title': 'The Challenge: Vets & New Threats',
'timestamp': 1468846800, 'description': 'md5:c4d2e90a5fff6463740fbf96b2bb6a41',
'upload_date': '20160718', 'duration': 95.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref',
'series': 'The Challenge',
'season': 'Season 41',
'season_number': 41,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1753945200,
'upload_date': '20250731',
'release_timestamp': 1753945200,
'release_date': '20250731',
}, },
'params': {'skip_download': 'm3u8'},
}, { }, {
'url': 'http://www.mtv.com/full-episodes/94tujl/unlocking-the-truth-gates-of-hell-season-1-ep-101', 'url': 'https://www.mtv.com/episodes/uzvigh',
'only_matching': True, 'info_dict': {
}, { 'id': '364e8b9e-e415-11ef-b405-16fff45bc035',
'url': 'http://www.mtv.com/episodes/g8xu7q/teen-mom-2-breaking-the-wall-season-7-ep-713', 'ext': 'mp4',
'only_matching': True, 'display_id': 'uzvigh',
'title': 'CT Tamburello and Johnny Bananas',
'description': 'md5:364cea52001e9c13f92784e3365c6606',
'channel': 'MTV',
'duration': 1260.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref',
'series': 'Ridiculousness',
'season': 'Season 47',
'season_number': 47,
'episode': 'Episode 19',
'episode_number': 19,
'timestamp': 1753318800,
'upload_date': '20250724',
'release_timestamp': 1753318800,
'release_date': '20250724',
},
'params': {'skip_download': 'm3u8'},
}] }]
class MTVJapanIE(MTVServicesInfoExtractor):
IE_NAME = 'mtvjapan'
_VALID_URL = r'https?://(?:www\.)?mtvjapan\.com/videos/(?P<id>[0-9a-z]+)'
_TEST = {
'url': 'http://www.mtvjapan.com/videos/prayht/fresh-info-cadillac-escalade',
'info_dict': {
'id': 'bc01da03-6fe5-4284-8880-f291f4e368f5',
'ext': 'mp4',
'title': '【Fresh Info】Cadillac ESCALADE Sport Edition',
},
'params': {
'skip_download': True,
},
}
_GEO_COUNTRIES = ['JP']
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
def _get_feed_query(self, uri):
return {
'arcEp': 'mtvjapan.com',
'mgid': uri,
}
class MTVVideoIE(MTVServicesInfoExtractor):
IE_NAME = 'mtv:video'
_VALID_URL = r'''(?x)^https?://
(?:(?:www\.)?mtv\.com/videos/.+?/(?P<videoid>[0-9]+)/[^/]+$|
m\.mtv\.com/videos/video\.rbml\?.*?id=(?P<mgid>[^&]+))'''
_FEED_URL = 'http://www.mtv.com/player/embed/AS3/rss/'
_TESTS = [
{
'url': 'http://www.mtv.com/videos/misc/853555/ours-vh1-storytellers.jhtml',
'md5': '850f3f143316b1e71fa56a4edfd6e0f8',
'info_dict': {
'id': '853555',
'ext': 'mp4',
'title': 'Taylor Swift - "Ours (VH1 Storytellers)"',
'description': 'Album: Taylor Swift performs "Ours" for VH1 Storytellers at Harvey Mudd College.',
'timestamp': 1352610000,
'upload_date': '20121111',
},
},
]
def _get_thumbnail_url(self, uri, itemdoc):
return 'http://mtv.mtvnimages.com/uri/' + uri
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('videoid')
uri = mobj.groupdict().get('mgid')
if uri is None:
webpage = self._download_webpage(url, video_id)
# Some videos come from Vevo.com
m_vevo = re.search(
r'(?s)isVevoVideo = true;.*?vevoVideoId = "(.*?)";', webpage)
if m_vevo:
vevo_id = m_vevo.group(1)
self.to_screen(f'Vevo video detected: {vevo_id}')
return self.url_result(f'vevo:{vevo_id}', ie='Vevo')
uri = self._html_search_regex(r'/uri/(.*?)\?', webpage, 'uri')
return self._get_videos_info(uri)
class MTVDEIE(MTVServicesInfoExtractor):
_WORKING = False
IE_NAME = 'mtv.de'
_VALID_URL = r'https?://(?:www\.)?mtv\.de/(?:musik/videoclips|folgen|news)/(?P<id>[0-9a-z]+)'
_TESTS = [{
'url': 'http://www.mtv.de/musik/videoclips/2gpnv7/Traum',
'info_dict': {
'id': 'd5d472bc-f5b7-11e5-bffd-a4badb20dab5',
'ext': 'mp4',
'title': 'Traum',
'description': 'Traum',
},
'params': {
# rtmp download
'skip_download': True,
},
'skip': 'Blocked at Travis CI',
}, {
# mediagen URL without query (e.g. http://videos.mtvnn.com/mediagen/e865da714c166d18d6f80893195fcb97)
'url': 'http://www.mtv.de/folgen/6b1ylu/teen-mom-2-enthuellungen-S5-F1',
'info_dict': {
'id': '1e5a878b-31c5-11e7-a442-0e40cf2fc285',
'ext': 'mp4',
'title': 'Teen Mom 2',
'description': 'md5:dc65e357ef7e1085ed53e9e9d83146a7',
},
'params': {
# rtmp download
'skip_download': True,
},
'skip': 'Blocked at Travis CI',
}, {
'url': 'http://www.mtv.de/news/glolix/77491-mtv-movies-spotlight--pixels--teil-3',
'info_dict': {
'id': 'local_playlist-4e760566473c4c8c5344',
'ext': 'mp4',
'title': 'Article_mtv-movies-spotlight-pixels-teil-3_short-clips_part1',
'description': 'MTV Movies Supercut',
},
'params': {
# rtmp download
'skip_download': True,
},
'skip': 'Das Video kann zur Zeit nicht abgespielt werden.',
}]
_GEO_COUNTRIES = ['DE']
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
def _get_feed_query(self, uri):
return {
'arcEp': 'mtv.de',
'mgid': uri,
}
class MTVItaliaIE(MTVServicesInfoExtractor):
IE_NAME = 'mtv.it'
_VALID_URL = r'https?://(?:www\.)?mtv\.it/(?:episodi|video|musica)/(?P<id>[0-9a-z]+)'
_TESTS = [{
'url': 'http://www.mtv.it/episodi/24bqab/mario-una-serie-di-maccio-capatonda-cavoli-amario-episodio-completo-S1-E1',
'info_dict': {
'id': '0f0fc78e-45fc-4cce-8f24-971c25477530',
'ext': 'mp4',
'title': 'Cavoli amario (episodio completo)',
'description': 'md5:4962bccea8fed5b7c03b295ae1340660',
'series': 'Mario - Una Serie Di Maccio Capatonda',
'season_number': 1,
'episode_number': 1,
},
'params': {
'skip_download': True,
},
}]
_GEO_COUNTRIES = ['IT']
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
def _get_feed_query(self, uri):
return {
'arcEp': 'mtv.it',
'mgid': uri,
}
class MTVItaliaProgrammaIE(MTVItaliaIE): # XXX: Do not subclass from concrete IE
IE_NAME = 'mtv.it:programma'
_VALID_URL = r'https?://(?:www\.)?mtv\.it/(?:programmi|playlist)/(?P<id>[0-9a-z]+)'
_TESTS = [{
# program page: general
'url': 'http://www.mtv.it/programmi/s2rppv/mario-una-serie-di-maccio-capatonda',
'info_dict': {
'id': 'a6f155bc-8220-4640-aa43-9b95f64ffa3d',
'title': 'Mario - Una Serie Di Maccio Capatonda',
'description': 'md5:72fbffe1f77ccf4e90757dd4e3216153',
},
'playlist_count': 2,
'params': {
'skip_download': True,
},
}, {
# program page: specific season
'url': 'http://www.mtv.it/programmi/d9ncjf/mario-una-serie-di-maccio-capatonda-S2',
'info_dict': {
'id': '4deeb5d8-f272-490c-bde2-ff8d261c6dd1',
'title': 'Mario - Una Serie Di Maccio Capatonda - Stagione 2',
},
'playlist_count': 34,
'params': {
'skip_download': True,
},
}, {
# playlist page + redirect
'url': 'http://www.mtv.it/playlist/sexy-videos/ilctal',
'info_dict': {
'id': 'dee8f9ee-756d-493b-bf37-16d1d2783359',
'title': 'Sexy Videos',
},
'playlist_mincount': 145,
'params': {
'skip_download': True,
},
}]
_GEO_COUNTRIES = ['IT']
_FEED_URL = 'http://www.mtv.it/feeds/triforce/manifest/v8'
def _get_entries(self, title, url):
while True:
pg = self._search_regex(r'/(\d+)$', url, 'entries', '1')
entries = self._download_json(url, title, f'page {pg}')
url = try_get(
entries, lambda x: x['result']['nextPageURL'], str)
entries = try_get(
entries, (
lambda x: x['result']['data']['items'],
lambda x: x['result']['data']['seasons']),
list)
for entry in entries or []:
if entry.get('canonicalURL'):
yield self.url_result(entry['canonicalURL'])
if not url:
break
def _real_extract(self, url):
query = {'url': url}
info_url = update_url_query(self._FEED_URL, query)
video_id = self._match_id(url)
info = self._download_json(info_url, video_id).get('manifest')
redirect = try_get(
info, lambda x: x['newLocation']['url'], str)
if redirect:
return self.url_result(redirect)
title = info.get('title')
video_id = try_get(
info, lambda x: x['reporting']['itemId'], str)
parent_id = try_get(
info, lambda x: x['reporting']['parentId'], str)
playlist_url = current_url = None
for z in (info.get('zones') or {}).values():
if z.get('moduleName') in ('INTL_M304', 'INTL_M209'):
info_url = z.get('feed')
if z.get('moduleName') in ('INTL_M308', 'INTL_M317'):
playlist_url = playlist_url or z.get('feed')
if z.get('moduleName') in ('INTL_M300',):
current_url = current_url or z.get('feed')
if not info_url:
raise ExtractorError('No info found')
if video_id == parent_id:
video_id = self._search_regex(
r'([^\/]+)/[^\/]+$', info_url, 'video_id')
info = self._download_json(info_url, video_id, 'Show infos')
info = try_get(info, lambda x: x['result']['data'], dict)
title = title or try_get(
info, (
lambda x: x['title'],
lambda x: x['headline']),
str)
description = try_get(info, lambda x: x['content'], str)
if current_url:
season = try_get(
self._download_json(playlist_url, video_id, 'Seasons info'),
lambda x: x['result']['data'], dict)
current = try_get(
season, lambda x: x['currentSeason'], str)
seasons = try_get(
season, lambda x: x['seasons'], list) or []
if current in [s.get('eTitle') for s in seasons]:
playlist_url = current_url
title = re.sub(
r'[-|]\s*(?:mtv\s*italia|programma|playlist)',
'', title, flags=re.IGNORECASE).strip()
return self.playlist_result(
self._get_entries(title, playlist_url),
video_id, title, description)


@@ -111,12 +111,8 @@ class MySpaceIE(InfoExtractor):
search_data('stream-url'), search_data('hls-stream-url'),
search_data('http-stream-url'))
if not formats:
vevo_id = search_data('vevo-id')
youtube_id = search_data('youtube-id')
if vevo_id:
self.to_screen(f'Vevo video detected: {vevo_id}')
return self.url_result(f'vevo:{vevo_id}', ie='Vevo')
elif youtube_id:
if youtube_id:
self.to_screen(f'Youtube video detected: {youtube_id}')
return self.url_result(youtube_id, ie='Youtube')
else:


@@ -1,224 +1,48 @@
from .mtv import MTVServicesInfoExtractor
from ..utils import update_url_query
from .mtv import MTVServicesBaseIE
class NickIE(MTVServicesInfoExtractor):
class NickIE(MTVServicesBaseIE):
IE_NAME = 'nick.com'
_VALID_URL = r'https?://(?P<domain>(?:www\.)?nick(?:jr)?\.com)/(?:[^/]+/)?(?P<type>videos/clip|[^/]+/videos|episodes/[^/]+)/(?P<id>[^/?#.]+)'
_VALID_URL = r'https?://(?:www\.)?nick\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
_FEED_URL = 'http://udat.mtvnservices.com/service1/dispatch.htm'
_GEO_COUNTRIES = ['US']
_TESTS = [{
'url': 'https://www.nick.com/episodes/sq47rw/spongebob-squarepants-a-place-for-pets-lockdown-for-love-season-13-ep-1',
'url': 'https://www.nick.com/episodes/u3smw8/wylde-pak-best-summer-ever-season-1-ep-1',
'info_dict': {
'description': 'md5:0650a9eb88955609d5c1d1c79292e234',
'id': 'eb9d4db0-274a-11ef-a913-0e37995d42c9',
'title': 'A Place for Pets/Lockdown for Love',
},
'playlist': [
{
'md5': 'cb8a2afeafb7ae154aca5a64815ec9d6',
'info_dict': {
'id': '85ee8177-d6ce-48f8-9eee-a65364f8a6df',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S1',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
{
'md5': '839a04f49900a1fcbf517020d94e0737',
'info_dict': {
'id': '2e2a9960-8fd4-411d-868b-28eb1beb7fae',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S2',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
{
'md5': 'f1145699f199770e2919ee8646955d46',
'info_dict': {
'id': 'dc91c304-6876-40f7-84a6-7aece7baa9d0',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S3',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
{
'md5': 'd463116875aee2585ee58de3b12caebd',
'info_dict': {
'id': '5d929486-cf4c-42a1-889a-6e0d183a101a',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S4',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
],
}, {
'url': 'http://www.nickjr.com/blues-clues-and-you/videos/blues-clues-and-you-original-209-imagination-station/',
'info_dict': {
'id': '31631529-2fc5-430b-b2ef-6a74b4609abd',
'ext': 'mp4',
'description': 'md5:9d65a66df38e02254852794b2809d1cf',
'title': 'Blue\'s Imagination Station',
'display_id': 'u3smw8',
'title': 'Best Summer Ever?',
'description': 'md5:c737a0ade3fbc09d569c3b3d029a7792',
'channel': 'Nickelodeon',
'duration': 1296.0,
'thumbnail': r're:https://assets\.nick\.com/uri/mgid:arc:imageassetref:',
'series': 'Wylde Pak',
'season': 'Season 1',
'season_number': 1,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 1746100800,
'upload_date': '20250501',
'release_timestamp': 1746100800,
'release_date': '20250501',
},
'skip': 'Not accessible?',
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.nick.com/video-clips/0p4706/spongebob-squarepants-spongebob-loving-the-krusty-krab-for-7-minutes',
'info_dict': {
'id': '4aac2228-5295-4076-b986-159513cf4ce4',
'ext': 'mp4',
'display_id': '0p4706',
'title': 'SpongeBob Loving the Krusty Krab for 7 Minutes!',
'description': 'md5:72bf59babdf4e6d642187502864e111d',
'duration': 423.423,
'thumbnail': r're:https://assets\.nick\.com/uri/mgid:arc:imageassetref:',
'series': 'SpongeBob SquarePants',
'season': 'Season 0',
'season_number': 0,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1663819200,
'upload_date': '20220922',
},
'params': {'skip_download': 'm3u8'},
}]
def _get_feed_query(self, uri):
return {
'feed': 'nick_arc_player_prime',
'mgid': uri,
}
def _real_extract(self, url):
domain, video_type, display_id = self._match_valid_url(url).groups()
if video_type.startswith('episodes'):
return super()._real_extract(url)
video_data = self._download_json(
f'http://{domain}/data/video.endLevel.json',
display_id, query={
'urlKey': display_id,
})
return self._get_videos_info(video_data['player'] + video_data['id'])
class NickBrIE(MTVServicesInfoExtractor):
IE_NAME = 'nickelodeon:br'
_VALID_URL = r'''(?x)
https?://
(?:
(?P<domain>(?:www\.)?nickjr|mundonick\.uol)\.com\.br|
(?:www\.)?nickjr\.[a-z]{2}|
(?:www\.)?nickelodeonjunior\.fr
)
/(?:programas/)?[^/]+/videos/(?:episodios/)?(?P<id>[^/?\#.]+)
'''
_TESTS = [{
'url': 'http://www.nickjr.com.br/patrulha-canina/videos/210-labirinto-de-pipoca/',
'only_matching': True,
}, {
'url': 'http://mundonick.uol.com.br/programas/the-loud-house/videos/muitas-irmas/7ljo9j',
'only_matching': True,
}, {
'url': 'http://www.nickjr.nl/paw-patrol/videos/311-ge-wol-dig-om-terug-te-zijn/',
'only_matching': True,
}, {
'url': 'http://www.nickjr.de/blaze-und-die-monster-maschinen/videos/f6caaf8f-e4e8-4cc1-b489-9380d6dcd059/',
'only_matching': True,
}, {
'url': 'http://www.nickelodeonjunior.fr/paw-patrol-la-pat-patrouille/videos/episode-401-entier-paw-patrol/',
'only_matching': True,
}]
def _real_extract(self, url):
domain, display_id = self._match_valid_url(url).groups()
webpage = self._download_webpage(url, display_id)
uri = self._search_regex(
r'data-(?:contenturi|mgid)="([^"]+)', webpage, 'mgid')
video_id = self._id_from_uri(uri)
config = self._download_json(
'http://media.mtvnservices.com/pmt/e1/access/index.html',
video_id, query={
'uri': uri,
'configtype': 'edge',
}, headers={
'Referer': url,
})
info_url = self._remove_template_parameter(config['feedWithQueryParams'])
if info_url == 'None':
if domain.startswith('www.'):
domain = domain[4:]
content_domain = {
'mundonick.uol': 'mundonick.com.br',
'nickjr': 'br.nickelodeonjunior.tv',
}[domain]
query = {
'mgid': uri,
'imageEp': content_domain,
'arcEp': content_domain,
}
if domain == 'nickjr.com.br':
query['ep'] = 'c4b16088'
info_url = update_url_query(
'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed', query)
return self._get_videos_info_from_url(info_url, video_id)
class NickDeIE(MTVServicesInfoExtractor):
IE_NAME = 'nick.de'
_VALID_URL = r'https?://(?:www\.)?(?P<host>nick\.(?:de|com\.pl|ch)|nickelodeon\.(?:nl|be|at|dk|no|se))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.nick.de/playlist/3773-top-videos/videos/episode/17306-zu-wasser-und-zu-land-rauchende-erdnusse',
'only_matching': True,
}, {
'url': 'http://www.nick.de/shows/342-icarly',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.nl/shows/474-spongebob/videos/17403-een-kijkje-in-de-keuken-met-sandy-van-binnenuit',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.at/playlist/3773-top-videos/videos/episode/77993-das-letzte-gefecht',
'only_matching': True,
}, {
'url': 'http://www.nick.com.pl/seriale/474-spongebob-kanciastoporty/wideo/17412-teatr-to-jest-to-rodeo-oszolom',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.no/program/2626-bulderhuset/videoer/90947-femteklasse-veronica-vs-vanzilla',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.dk/serier/2626-hojs-hus/videoer/761-tissepause',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.se/serier/2626-lugn-i-stormen/videos/998-',
'only_matching': True,
}, {
'url': 'http://www.nick.ch/shows/2304-adventure-time-abenteuerzeit-mit-finn-und-jake',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.be/afspeellijst/4530-top-videos/videos/episode/73917-inval-broodschapper-lariekoek-arie',
'only_matching': True,
}]
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge&ref={url}', video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
class NickRuIE(MTVServicesInfoExtractor):
IE_NAME = 'nickelodeonru'
_VALID_URL = r'https?://(?:www\.)nickelodeon\.(?:ru|fr|es|pt|ro|hu|com\.tr)/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.nickelodeon.ru/shows/henrydanger/videos/episodes/3-sezon-15-seriya-licenziya-na-polyot/pmomfb#playlist/7airc6',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.ru/videos/smotri-na-nickelodeon-v-iyule/g9hvh7',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.fr/programmes/bob-l-eponge/videos/le-marathon-de-booh-kini-bottom-mardi-31-octobre/nfn7z0',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.es/videos/nickelodeon-consejos-tortitas/f7w7xy',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.pt/series/spongebob-squarepants/videos/a-bolha-de-tinta-gigante/xutq1b',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.ro/emisiuni/shimmer-si-shine/video/nahal-din-bomboane/uw5u2k',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.hu/musorok/spongyabob-kockanadrag/videok/episodes/buborekfujas-az-elszakadt-nadrag/q57iob#playlist/k6te4y',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.com.tr/programlar/sunger-bob/videolar/kayip-yatak/mgqbjy',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
mgid = self._extract_mgid(webpage, url)
return self.url_result(f'http://media.mtvnservices.com/embed/{mgid}')
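A quick way to smoke-test the reworked nick.com extractor is yt-dlp's embedding API. This is only an illustrative sketch, not part of the diff; the URL comes from the new NickIE test case above and skip_download mirrors its test params:

import yt_dlp

# extract metadata only, without downloading any media
with yt_dlp.YoutubeDL({'skip_download': True}) as ydl:
    info = ydl.extract_info(
        'https://www.nick.com/episodes/u3smw8/wylde-pak-best-summer-ever-season-1-ep-1',
        download=False)
    print(info['id'], info.get('series'), info.get('episode'))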


@@ -873,39 +873,67 @@ class NiconicoLiveIE(NiconicoBaseIE):
IE_DESC = 'ニコニコ生放送'
_VALID_URL = r'https?://(?:sp\.)?live2?\.nicovideo\.jp/(?:watch|gate)/(?P<id>lv\d+)'
_TESTS = [{
'note': 'this test case includes invisible characters for title, pasting them as-is',
'url': 'https://live.nicovideo.jp/watch/lv339533123',
'url': 'https://live.nicovideo.jp/watch/lv329299587',
'info_dict': {
'id': 'lv339533123',
'title': '激辛ペヤング食べます\u202a( ;ᯅ; )\u202c(歌枠オーディション参加中)',
'view_count': 1526,
'comment_count': 1772,
'description': '初めましてもかって言います❕\nのんびり自由に適当に暮らしてます',
'uploader': 'もか',
'channel': 'ゲストさんのコミュニティ',
'channel_id': 'co5776900',
'channel_url': 'https://com.nicovideo.jp/community/co5776900',
'timestamp': 1670677328,
'is_live': True,
'id': 'lv329299587',
'ext': 'mp4',
'title': str,
'channel': 'ニコニコエンタメチャンネル',
'channel_id': 'ch2640322',
'channel_url': 'https://ch.nicovideo.jp/channel/ch2640322',
'comment_count': int,
'description': 'md5:281edd7f00309e99ec46a87fb16d7033',
'live_status': 'is_live',
'thumbnail': r're:https?://.+',
'timestamp': 1608803400,
'upload_date': '20201224',
'uploader': '株式会社ドワンゴ',
'view_count': int,
},
'skip': 'livestream',
'params': {'skip_download': True},
}, {
'url': 'https://live2.nicovideo.jp/watch/lv339533123',
'only_matching': True,
}, {
'url': 'https://sp.live.nicovideo.jp/watch/lv339533123',
'only_matching': True,
}, {
'url': 'https://sp.live2.nicovideo.jp/watch/lv339533123',
'only_matching': True,
'url': 'https://live.nicovideo.jp/watch/lv331050399',
'info_dict': {
'id': 'lv331050399',
'ext': 'mp4',
'title': str,
'age_limit': 18,
'channel': 'みんなのおもちゃ REBOOT',
'channel_id': 'ch2642088',
'channel_url': 'https://ch.nicovideo.jp/channel/ch2642088',
'comment_count': int,
'description': 'md5:8d0bb5beaca73b911725478a1e7c7b91',
'live_status': 'is_live',
'thumbnail': r're:https?://.+',
'timestamp': 1617029400,
'upload_date': '20210329',
'uploader': '株式会社ドワンゴ',
'view_count': int,
},
'params': {'skip_download': True},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id, expected_status=404)
webpage, urlh = self._download_webpage_handle(url, video_id, expected_status=404)
if err_msg := traverse_obj(webpage, ({find_element(cls='message')}, {clean_html})):
raise ExtractorError(err_msg, expected=True)
age_limit = 18 if 'age_auth' in urlh.url else None
if age_limit:
if not self.is_logged_in:
self.raise_login_required('Login is required to access age-restricted content')
my = self._download_webpage('https://www.nicovideo.jp/my', None, 'Checking age verification')
if traverse_obj(my, (
{find_element(id='js-initial-userpage-data', html=True)}, {extract_attributes},
'data-environment', {json.loads}, 'allowSensitiveContents', {bool},
)):
self._set_cookie('.nicovideo.jp', 'age_auth', '1')
webpage = self._download_webpage(url, video_id)
else:
raise ExtractorError('Sensitive content setting must be enabled', expected=True)
embedded_data = traverse_obj(webpage, (
{find_element(tag='script', id='embedded-data', html=True)},
{extract_attributes}, 'data-props', {json.loads}))
@@ -1008,6 +1036,7 @@ class NiconicoLiveIE(NiconicoBaseIE):
return {
'id': video_id,
'title': title,
'age_limit': age_limit,
'downloader_options': {
'max_quality': traverse_obj(embedded_data, ('program', 'stream', 'maxQuality', {str})) or 'normal',
'ws': ws,


@@ -446,9 +446,10 @@ class NRKTVIE(InfoExtractor):
class NRKTVEpisodeIE(InfoExtractor):
_VALID_URL = r'https?://tv\.nrk\.no/serie/(?P<id>[^/]+/sesong/(?P<season_number>\d+)/episode/(?P<episode_number>\d+))'
_VALID_URL = r'https?://tv\.nrk\.no/serie/(?P<id>[^/?#]+/sesong/(?P<season_number>\d+)/episode/(?P<episode_number>\d+))'
_TESTS = [{
'url': 'https://tv.nrk.no/serie/hellums-kro/sesong/1/episode/2',
'add_ie': [NRKIE.ie_key()],
'info_dict': {
'id': 'MUHH36005220',
'ext': 'mp4',
@@ -485,26 +486,23 @@ class NRKTVEpisodeIE(InfoExtractor):
}]
def _real_extract(self, url):
display_id, season_number, episode_number = self._match_valid_url(url).groups()
display_id, season_number, episode_number = self._match_valid_url(url).group(
'id', 'season_number', 'episode_number')
webpage, urlh = self._download_webpage_handle(url, display_id)
webpage = self._download_webpage(url, display_id)
if NRKTVIE.suitable(urlh.url):
nrk_id = NRKTVIE._match_id(urlh.url)
else:
nrk_id = self._search_json(
r'<script\b[^>]+\bid="pageData"[^>]*>', webpage,
'page data', display_id)['initialState']['selectedEpisodePrfId']
if not re.fullmatch(NRKTVIE._EPISODE_RE, nrk_id):
raise ExtractorError('Unable to extract NRK ID')
info = self._search_json_ld(webpage, display_id, default={})
nrk_id = info.get('@id') or self._html_search_meta(
'nrk:program-id', webpage, default=None) or self._search_regex(
rf'data-program-id=["\']({NRKTVIE._EPISODE_RE})', webpage,
'nrk id')
assert re.match(NRKTVIE._EPISODE_RE, nrk_id)
info.update({
'_type': 'url',
'id': nrk_id,
'url': f'nrk:{nrk_id}',
'ie_key': NRKIE.ie_key(),
'season_number': int(season_number),
'episode_number': int(episode_number),
})
return info
return self.url_result(
f'nrk:{nrk_id}', NRKIE, nrk_id,
season_number=int(season_number),
episode_number=int(episode_number))
class NRKTVSerieBaseIE(NRKBaseIE):


@@ -352,14 +352,6 @@ class OdnoklassnikiIE(InfoExtractor):
'subtitles': subtitles,
}
# pladform
if provider == 'OPEN_GRAPH':
info.update({
'_type': 'url_transparent',
'url': movie['contentId'],
})
return info
if provider == 'USER_YOUTUBE':
info.update({
'_type': 'url_transparent',


@@ -1,135 +0,0 @@
from .common import InfoExtractor
from ..utils import (
ExtractorError,
determine_ext,
int_or_none,
parse_qs,
qualities,
xpath_text,
)
class PladformIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://
(?:
(?:
out\.pladform\.ru/player|
static\.pladform\.ru/player\.swf
)
\?.*\bvideoid=|
video\.pladform\.ru/catalog/video/videoid/
)
(?P<id>\d+)
'''
_EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//out\.pladform\.ru/player\?.+?)\1']
_TESTS = [{
'url': 'http://out.pladform.ru/player?pl=18079&type=html5&videoid=100231282',
'info_dict': {
'id': '6216d548e755edae6e8280667d774791',
'ext': 'mp4',
'timestamp': 1406117012,
'title': 'Гарик Мартиросян и Гарик Харламов - Кастинг на концерт ко Дню милиции',
'age_limit': 0,
'upload_date': '20140723',
'thumbnail': str,
'view_count': int,
'description': str,
'uploader_id': '12082',
'uploader': 'Comedy Club',
'duration': 367,
},
'expected_warnings': ['HTTP Error 404: Not Found'],
}, {
'url': 'https://out.pladform.ru/player?pl=64471&videoid=3777899&vk_puid15=0&vk_puid34=0',
'md5': '53362fac3a27352da20fa2803cc5cd6f',
'info_dict': {
'id': '3777899',
'ext': 'mp4',
'title': 'СТУДИЯ СОЮЗ • Шоу Студия Союз, 24 выпуск (01.02.2018) Нурлан Сабуров и Слава Комиссаренко',
'description': 'md5:05140e8bf1b7e2d46e7ba140be57fd95',
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 3190,
},
}, {
'url': 'http://static.pladform.ru/player.swf?pl=21469&videoid=100183293&vkcid=0',
'only_matching': True,
}, {
'url': 'http://video.pladform.ru/catalog/video/videoid/100183293/vkcid/0',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
qs = parse_qs(url)
pl = qs.get('pl', ['1'])[0]
video = self._download_xml(
'http://out.pladform.ru/getVideo', video_id, query={
'pl': pl,
'videoid': video_id,
}, fatal=False)
def fail(text):
raise ExtractorError(
f'{self.IE_NAME} returned error: {text}',
expected=True)
if not video:
target_url = self._request_webpage(url, video_id, note='Resolving final URL').url
if target_url == url:
raise ExtractorError('Can\'t parse page')
return self.url_result(target_url)
if video.tag == 'error':
fail(video.text)
quality = qualities(('ld', 'sd', 'hd'))
formats = []
for src in video.findall('./src'):
if src is None:
continue
format_url = src.text
if not format_url:
continue
if src.get('type') == 'hls' or determine_ext(format_url) == 'm3u8':
formats.extend(self._extract_m3u8_formats(
format_url, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
else:
formats.append({
'url': src.text,
'format_id': src.get('quality'),
'quality': quality(src.get('quality')),
})
if not formats:
error = xpath_text(video, './cap', 'error', default=None)
if error:
fail(error)
webpage = self._download_webpage(
f'http://video.pladform.ru/catalog/video/videoid/{video_id}',
video_id)
title = self._og_search_title(webpage, fatal=False) or xpath_text(
video, './/title', 'title', fatal=True)
description = self._search_regex(
r'</h3>\s*<p>([^<]+)</p>', webpage, 'description', fatal=False)
thumbnail = self._og_search_thumbnail(webpage) or xpath_text(
video, './/cover', 'cover')
duration = int_or_none(xpath_text(video, './/time', 'duration'))
age_limit = int_or_none(xpath_text(video, './/age18', 'age limit'))
return {
'id': video_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'duration': duration,
'age_limit': age_limit,
'formats': formats,
}


@@ -6,6 +6,7 @@ from ..utils import (
int_or_none,
parse_resolution,
str_or_none,
traverse_obj,
try_get,
unified_timestamp,
url_or_none,
@@ -18,20 +19,21 @@ class PuhuTVIE(InfoExtractor):
IE_NAME = 'puhutv'
_TESTS = [{
# film
'url': 'https://puhutv.com/sut-kardesler-izle',
'md5': 'a347470371d56e1585d1b2c8dab01c96',
'url': 'https://puhutv.com/bi-kucuk-eylul-meselesi-izle',
'md5': '4de98170ccb84c05779b1f046b3c86f8',
'info_dict': {
'id': '5085',
'display_id': 'sut-kardesler',
'id': '11909',
'display_id': 'bi-kucuk-eylul-meselesi',
'ext': 'mp4',
'title': 'Süt Kardeşler',
'description': 'md5:ca09da25b7e57cbb5a9280d6e48d17aa',
'title': 'Bi Küçük Eylül Meselesi',
'description': 'md5:c2ab964c6542b7b26acacca0773adce2',
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 4832.44,
'creator': 'Arzu Film',
'timestamp': 1561062602,
'duration': 6176.96,
'creator': 'Ay Yapım',
'creators': ['Ay Yapım'],
'timestamp': 1561062749,
'upload_date': '20190620',
'release_year': 1976,
'release_year': 2014,
'view_count': int,
'tags': list,
},
@@ -181,7 +183,7 @@ class PuhuTVSerieIE(InfoExtractor):
'playlist_mincount': 205,
}, {
# a film detail page which is using same url with serie page
'url': 'https://puhutv.com/kaybedenler-kulubu-detay',
'url': 'https://puhutv.com/bizim-icin-sampiyon-detay',
'only_matching': True,
}]
@@ -194,24 +196,19 @@ class PuhuTVSerieIE(InfoExtractor):
has_more = True
while has_more is True:
season = self._download_json(
f'https://galadriel.puhutv.com/seasons/{season_id}',
f'https://appservice.puhutv.com/api/seasons/{season_id}/episodes?v=2',
season_id, f'Downloading page {page}', query={
'page': page,
'per': 40,
})
episodes = season.get('episodes')
if isinstance(episodes, list):
for ep in episodes:
slug_path = str_or_none(ep.get('slugPath'))
if not slug_path:
continue
video_id = str_or_none(int_or_none(ep.get('id')))
yield self.url_result(
f'https://puhutv.com/{slug_path}',
ie=PuhuTVIE.ie_key(), video_id=video_id,
video_title=ep.get('name') or ep.get('eventLabel'))
'per': 100,
})['data']
for episode in traverse_obj(season, ('episodes', lambda _, v: v.get('slug') or v['assets'][0]['slug'])):
slug = episode.get('slug') or episode['assets'][0]['slug']
yield self.url_result(
f'https://puhutv.com/{slug}', PuhuTVIE, episode.get('id'), episode.get('name'))
page += 1
has_more = season.get('hasMore')
has_more = traverse_obj(season, 'has_more')
def _real_extract(self, url):
playlist_id = self._match_id(url)
@@ -226,7 +223,10 @@ class PuhuTVSerieIE(InfoExtractor):
self._extract_entries(seasons), playlist_id, info.get('name'))
# For films, these are using same url with series
video_id = info.get('slug') or info['assets'][0]['slug']
video_id = info.get('slug')
if video_id:
video_id = video_id.removesuffix('-detay')
else:
video_id = info['assets'][0]['slug'].removesuffix('-izle')
return self.url_result(
f'https://puhutv.com/{video_id}-izle',
PuhuTVIE.ie_key(), video_id)
f'https://puhutv.com/{video_id}-izle', PuhuTVIE, video_id)
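The reworked season pager can also be exercised outside the extractor. A minimal sketch, assuming the appservice endpoint accepts plain unauthenticated requests and keeps the page/per query parameters and the data/episodes/has_more layout that the updated code relies on:

import requests  # assumption: no auth or cookies are needed for this endpoint

def iter_season_episode_urls(season_id):
    page = 1
    while True:
        # same query parameters the extractor sends (v=2, page, per)
        data = requests.get(
            f'https://appservice.puhutv.com/api/seasons/{season_id}/episodes',
            params={'v': 2, 'page': page, 'per': 100}, timeout=30).json()['data']
        for episode in data.get('episodes') or []:
            slug = episode.get('slug') or episode['assets'][0]['slug']
            yield f'https://puhutv.com/{slug}'
        if not data.get('has_more'):
            break
        page += 1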


@@ -51,6 +51,20 @@ class SkebIE(InfoExtractor):
},
'playlist_count': 2,
'expected_warnings': ['Skipping unsupported extension'],
}, {
'url': 'https://skeb.jp/@Yossshy_Music/works/13',
'info_dict': {
'ext': 'wav',
'id': '5566495',
'title': '13-1',
'description': 'md5:1026b8b9ae38c67c2d995970ec196550',
'uploader': 'Yossshy',
'uploader_id': 'Yossshy_Music',
'duration': 336,
'thumbnail': r're:https?://.+',
'tags': 'count:59',
'genres': ['music'],
},
}]
def _call_api(self, uploader_id, work_id):
@@ -87,7 +101,7 @@ class SkebIE(InfoExtractor):
entries = []
for idx, preview in enumerate(traverse_obj(works, ('previews', lambda _, v: url_or_none(v['url']))), 1):
ext = traverse_obj(preview, ('information', 'extension', {str}))
if ext not in ('mp3', 'mp4'):
if ext not in ('mp3', 'mp4', 'wav'):
self.report_warning(f'Skipping unsupported extension "{ext}"')
continue
@@ -100,7 +114,7 @@ class SkebIE(InfoExtractor):
'url': preview['vtt_url'],
}],
} if url_or_none(preview.get('vtt_url')) else None,
'vcodec': 'none' if ext == 'mp3' else None,
'vcodec': 'none' if ext in ('mp3', 'wav') else None,
**info,
**traverse_obj(preview, {
'id': ('id', {str_or_none}),


@@ -1,58 +1,94 @@
from .mtv import MTVServicesInfoExtractor
from .mtv import MTVServicesBaseIE
class SouthParkIE(MTVServicesInfoExtractor):
class SouthParkIE(MTVServicesBaseIE):
IE_NAME = 'southpark.cc.com'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark(?:\.cc|studios)\.com/((?:video-)?clips|(?:full-)?episodes|collections)/(?P<id>.+?)(\?|#|$))'
_VALID_URL = r'https?://(?:www\.)?southpark(?:\.cc|studios)\.com/(?:video-clips|episodes|collections)/(?P<id>[^?#]+)'
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
_TESTS = [{
'url': 'https://southpark.cc.com/video-clips/d7wr06/south-park-you-all-agreed-to-counseling',
'info_dict': {
'id': '31929ad5-8269-11eb-8774-70df2f866ace',
'ext': 'mp4',
'display_id': 'd7wr06/south-park-you-all-agreed-to-counseling',
'title': 'You All Agreed to Counseling',
'description': 'Kenny, Cartman, Stan, and Kyle visit Mr. Mackey and ask for his help getting Mrs. Nelson to come back. Mr. Mackey reveals the only way to get things back to normal is to get the teachers vaccinated.',
'description': 'md5:01f78fb306c7042f3f05f3c78edfc212',
'duration': 134.552,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 24',
'season_number': 24,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 1615352400,
'upload_date': '20210310',
'release_timestamp': 1615352400,
'release_date': '20210310',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'http://southpark.cc.com/collections/7758/fan-favorites/1',
'url': 'https://southpark.cc.com/episodes/940f8z/south-park-cartman-gets-an-anal-probe-season-1-ep-1',
'info_dict': {
'id': '5fb8887e-ecfd-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'display_id': '940f8z/south-park-cartman-gets-an-anal-probe-season-1-ep-1',
'title': 'Cartman Gets An Anal Probe',
'description': 'md5:964e1968c468545752feef102b140300',
'channel': 'Comedy Central',
'duration': 1319.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 1',
'season_number': 1,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 871473600,
'upload_date': '19970813',
'release_timestamp': 871473600,
'release_date': '19970813',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://southpark.cc.com/collections/dejukt/south-park-best-of-mr-mackey/tphx9j',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.com/episodes/h4o269/south-park-stunning-and-brave-season-19-ep-1',
'only_matching': True,
}]
def _get_feed_query(self, uri):
return {
'accountOverride': 'intl.mtvi.com',
'arcEp': 'shared.southpark.global',
'ep': '90877963',
'imageEp': 'shared.southpark.global',
'mgid': uri,
}
class SouthParkEsIE(MTVServicesBaseIE):
class SouthParkEsIE(SouthParkIE): # XXX: Do not subclass from concrete IE
IE_NAME = 'southpark.cc.com:español'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark\.cc\.com/es/episodios/(?P<id>.+?)(\?|#|$))'
_VALID_URL = r'https?://(?:www\.)?southpark\.cc\.com/es/episodios/(?P<id>[^?#]+)'
_LANG = 'es'
_TESTS = [{
'url': 'http://southpark.cc.com/es/episodios/s01e01-cartman-consigue-una-sonda-anal#source=351c1323-0b96-402d-a8b9-40d01b2e9bde&position=1&sort=!airdate',
'url': 'https://southpark.cc.com/es/episodios/er4a32/south-park-aumento-de-peso-4000-temporada-1-ep-2',
'info_dict': {
'title': 'Cartman Consigue Una Sonda Anal',
'description': 'Cartman Consigue Una Sonda Anal',
'id': '5fb94f0c-ecfd-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'display_id': 'er4a32/south-park-aumento-de-peso-4000-temporada-1-ep-2',
'title': 'Aumento de peso 4000',
'description': 'md5:a939b4819ea74c245a0cde180de418c0',
'channel': 'Comedy Central',
'duration': 1320.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 1',
'season_number': 1,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 872078400,
'upload_date': '19970820',
'release_timestamp': 872078400,
'release_date': '19970820',
},
'playlist_count': 4,
'skip': 'Geo-restricted',
'params': {'skip_download': 'm3u8'},
}]
class SouthParkDeIE(SouthParkIE): # XXX: Do not subclass from concrete IE
class SouthParkDeIE(MTVServicesBaseIE):
IE_NAME = 'southpark.de'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark\.de/(?:(en/(videoclip|collections|episodes|video-clips))|(videoclip|collections|folgen))/(?P<id>(?P<unique_id>.+?)/.+?)(?:\?|#|$))'
_VALID_URL = r'https?://(?:www\.)?southpark\.de/(?:en/)?(?:videoclip|collections|episodes|video-clips|folgen)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['DE']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southpark.de/videoclip/rsribv/south-park-rueckzug-zum-gummibonbon-wald',
'only_matching': True,
@@ -66,123 +102,253 @@ class SouthParkDeIE(SouthParkIE): # XXX: Do not subclass from concrete IE
# clip
'url': 'https://www.southpark.de/en/video-clips/ct46op/south-park-tooth-fairy-cartman',
'info_dict': {
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'display_id': 'ct46op/south-park-tooth-fairy-cartman',
'title': 'Tooth Fairy Cartman',
'description': 'md5:db02e23818b4dc9cb5f0c5a7e8833a68',
'description': 'Cartman steals Butters\' tooth and gets four dollars for it.',
'duration': 93.26,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 4',
'season_number': 4,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 954990360,
'upload_date': '20000406',
'release_timestamp': 954990360,
'release_date': '20000406',
},
'params': {'skip_download': 'm3u8'},
}, {
# episode
'url': 'https://www.southpark.de/en/episodes/yy0vjs/south-park-the-pandemic-special-season-24-ep-1',
'info_dict': {
'id': 'f5fbd823-04bc-11eb-9b1b-0e40cf2fc285',
'ext': 'mp4',
'title': 'South Park',
'id': '230a4f02-f583-11ea-834d-70df2f866ace',
'display_id': 'yy0vjs/south-park-the-pandemic-special-season-24-ep-1',
'title': 'The Pandemic Special',
'description': 'md5:ae0d875eff169dcbed16b21531857ac1',
'channel': 'Comedy Central',
'duration': 2724.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 24',
'season_number': 24,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 1601932260,
'upload_date': '20201005',
'release_timestamp': 1601932270,
'release_date': '20201005',
},
'params': {'skip_download': 'm3u8'},
}, {
# clip
'url': 'https://www.southpark.de/videoclip/ct46op/south-park-zahnfee-cartman',
'info_dict': {
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'display_id': 'ct46op/south-park-zahnfee-cartman',
'title': 'Zahnfee Cartman',
'description': 'md5:b917eec991d388811d911fd1377671ac',
'duration': 93.26,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 4',
'season_number': 4,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 954990360,
'upload_date': '20000406',
'release_timestamp': 954990360,
'release_date': '20000406',
},
'params': {'skip_download': 'm3u8'},
}, {
# episode
'url': 'https://www.southpark.de/folgen/242csn/south-park-her-mit-dem-hirn-staffel-1-ep-7',
'url': 'https://www.southpark.de/folgen/4r4367/south-park-katerstimmung-staffel-12-ep-3',
'info_dict': {
'id': '607115f3-496f-40c3-8647-2b0bcff486c0',
'ext': 'mp4',
'title': 'md5:South Park | Pink Eye | E 0107 | HDSS0107X deu | Version: 634312 | Comedy Central S1',
'id': '68c79aa4-ecfd-11e0-aca6-0026b9414f30',
'display_id': '4r4367/south-park-katerstimmung-staffel-12-ep-3',
'title': 'Katerstimmung',
'description': 'md5:94e0e2cd568ffa635e0725518bb4b180',
'channel': 'Comedy Central',
'duration': 1320.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 12',
'season_number': 12,
'episode': 'Episode 3',
'episode_number': 3,
'timestamp': 1206504000,
'upload_date': '20080326',
'release_timestamp': 1206504000,
'release_date': '20080326',
},
'params': {'skip_download': 'm3u8'},
}]
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge&ref={url}', video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
def _get_feed_query(self, uri):
return
class SouthParkLatIE(SouthParkIE): # XXX: Do not subclass from concrete IE
class SouthParkLatIE(MTVServicesBaseIE):
IE_NAME = 'southpark.lat'
_VALID_URL = r'https?://(?:www\.)?southpark\.lat/(?:en/)?(?:video-?clips?|collections|episod(?:e|io)s)/(?P<id>[^/?#&]+)'
_VALID_URL = r'https?://(?:www\.)?southpark\.lat/(?:en/)?(?:video-?clips?|collections|episod(?:e|io)s)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['MX']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southpark.lat/en/video-clips/ct46op/south-park-tooth-fairy-cartman',
'only_matching': True,
'info_dict': {
'ext': 'mp4',
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'display_id': 'ct46op/south-park-tooth-fairy-cartman',
'title': 'Tooth Fairy Cartman',
'description': 'Cartman steals Butters\' tooth and gets four dollars for it.',
'duration': 93.26,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 4',
'season_number': 4,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 954990360,
'upload_date': '20000406',
'release_timestamp': 954990360,
'release_date': '20000406',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southpark.lat/episodios/9h0qbg/south-park-orgia-gatuna-temporada-3-ep-7',
'only_matching': True,
'info_dict': {
'ext': 'mp4',
'id': '600d273a-ecfd-11e0-aca6-0026b9414f30',
'display_id': '9h0qbg/south-park-orgia-gatuna-temporada-3-ep-7',
'title': 'Orgía Gatuna ',
'description': 'md5:73c6648413f5977026abb792a25c65d5',
'channel': 'Comedy Central',
'duration': 1319.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 3',
'season_number': 3,
'episode': 'Episode 7',
'episode_number': 7,
'timestamp': 931924800,
'upload_date': '19990714',
'release_timestamp': 931924800,
'release_date': '19990714',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southpark.lat/en/collections/29ve08/south-park-heating-up/lydbrc',
'only_matching': True,
}, {
# clip
'url': 'https://www.southpark.lat/en/video-clips/ct46op/south-park-tooth-fairy-cartman',
'info_dict': {
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'title': 'Tooth Fairy Cartman',
'description': 'md5:db02e23818b4dc9cb5f0c5a7e8833a68',
},
}, {
# episode
'url': 'https://www.southpark.lat/episodios/9h0qbg/south-park-orgia-gatuna-temporada-3-ep-7',
'info_dict': {
'id': 'f5fbd823-04bc-11eb-9b1b-0e40cf2fc285',
'ext': 'mp4',
'title': 'South Park',
'description': 'md5:ae0d875eff169dcbed16b21531857ac1',
},
}]
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge&ref={url}',
video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
def _get_feed_query(self, uri):
return
class SouthParkNlIE(SouthParkIE): # XXX: Do not subclass from concrete IE
IE_NAME = 'southpark.nl'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark\.nl/(?:clips|(?:full-)?episodes|collections)/(?P<id>.+?)(\?|#|$))'
_FEED_URL = 'http://www.southpark.nl/feeds/video-player/mrss/'
_TESTS = [{
'url': 'http://www.southpark.nl/full-episodes/s18e06-freemium-isnt-free',
'info_dict': {
'title': 'Freemium Isn\'t Free',
'description': 'Stan is addicted to the new Terrance and Phillip mobile game.',
},
'playlist_mincount': 3,
}] }]
class SouthParkDkIE(SouthParkIE): # XXX: Do not subclass from concrete IE
class SouthParkDkIE(MTVServicesBaseIE):
IE_NAME = 'southparkstudios.dk'
IE_NAME = 'southparkstudios.nu'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southparkstudios\.(?:dk|nu)/(?:clips|full-episodes|collections)/(?P<id>.+?)(\?|#|$))'
_VALID_URL = r'https?://(?:www\.)?southparkstudios\.nu/(?:video-clips|episodes|collections)/(?P<id>[^?#]+)'
_FEED_URL = 'http://www.southparkstudios.dk/feeds/video-player/mrss/'
_GEO_COUNTRIES = ['DK']
_GEO_BYPASS = True
_TESTS = [{
'url': 'http://www.southparkstudios.dk/full-episodes/s18e07-grounded-vindaloop',
'url': 'https://www.southparkstudios.nu/episodes/y3uvvc/south-park-grounded-vindaloop-season-18-ep-7',
'info_dict': {
'ext': 'mp4',
'id': 'f60690a7-21a7-4ee7-8834-d7099a8707ab',
'display_id': 'y3uvvc/south-park-grounded-vindaloop-season-18-ep-7',
'title': 'Grounded Vindaloop',
'description': 'Butters is convinced he\'s living in a virtual reality.',
'channel': 'Comedy Central',
'duration': 1319.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 18',
'season_number': 18,
'episode': 'Episode 7',
'episode_number': 7,
'timestamp': 1415847600,
'upload_date': '20141113',
'release_timestamp': 1415768400,
'release_date': '20141112',
},
'playlist_mincount': 3,
'params': {'skip_download': 'm3u8'},
}, {
'url': 'http://www.southparkstudios.dk/collections/2476/superhero-showdown/1',
'url': 'https://www.southparkstudios.nu/collections/8dk7kr/south-park-best-of-south-park/sd5ean',
'only_matching': True,
}, {
'url': 'http://www.southparkstudios.nu/collections/2476/superhero-showdown/1',
'url': 'https://www.southparkstudios.nu/video-clips/k42mrf/south-park-kick-the-baby',
'only_matching': True,
}]
class SouthParkComBrIE(MTVServicesBaseIE):
IE_NAME = 'southparkstudios.com.br'
_VALID_URL = r'https?://(?:www\.)?southparkstudios\.com\.br/(?:en/)?(?:video-clips|episodios|collections|episodes)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['BR']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southparkstudios.com.br/video-clips/3vifo0/south-park-welcome-to-mar-a-lago7',
'info_dict': {
'ext': 'mp4',
'id': 'ccc3e952-7352-11f0-b405-16fff45bc035',
'display_id': '3vifo0/south-park-welcome-to-mar-a-lago7',
'title': 'Welcome to Mar-a-Lago',
'description': 'The President welcomes Mr. Mackey to Mar-a-Lago, a magical place where anything can happen.',
'duration': 139.223,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 27',
'season_number': 27,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 1754546400,
'upload_date': '20250807',
'release_timestamp': 1754546400,
'release_date': '20250807',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southparkstudios.com.br/episodios/940f8z/south-park-cartman-ganha-uma-sonda-anal-temporada-1-ep-1',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.com.br/collections/8dk7kr/south-park-best-of-south-park/sd5ean',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.com.br/en/episodes/5v0oap/south-park-south-park-the-25th-anniversary-concert-ep-1',
'only_matching': True,
}]
class SouthParkCoUkIE(MTVServicesBaseIE):
IE_NAME = 'southparkstudios.co.uk'
_VALID_URL = r'https?://(?:www\.)?southparkstudios\.co\.uk/(?:video-clips|collections|episodes)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['UK']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southparkstudios.co.uk/video-clips/8kabfr/south-park-respectclydesauthority',
'info_dict': {
'ext': 'mp4',
'id': 'f6d9af23-734e-11f0-b405-16fff45bc035',
'display_id': '8kabfr/south-park-respectclydesauthority',
'title': '#RespectClydesAuthority',
'description': 'After learning about Clyde\'s Podcast, Cartman needs to see it for himself.',
'duration': 45.045,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 27',
'season_number': 27,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 1754546400,
'upload_date': '20250807',
'release_timestamp': 1754546400,
'release_date': '20250807',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southparkstudios.co.uk/episodes/e1yoxn/south-park-imaginationland-season-11-ep-10',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.co.uk/collections/8dk7kr/south-park-best-of-south-park/sd5ean',
'only_matching': True,
}]


@@ -1,46 +0,0 @@
from .mtv import MTVServicesInfoExtractor
class BellatorIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?bellator\.com/[^/]+/[\da-z]{6}(?:[/?#&]|$)'
_TESTS = [{
'url': 'http://www.bellator.com/fight/atwr7k/bellator-158-michael-page-vs-evangelista-cyborg',
'info_dict': {
'title': 'Michael Page vs. Evangelista Cyborg',
'description': 'md5:0d917fc00ffd72dd92814963fc6cbb05',
},
'playlist_count': 3,
}, {
'url': 'http://www.bellator.com/video-clips/bw6k7n/bellator-158-foundations-michael-venom-page',
'only_matching': True,
}]
_FEED_URL = 'http://www.bellator.com/feeds/mrss/'
_GEO_COUNTRIES = ['US']
class ParamountNetworkIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?paramountnetwork\.com/[^/]+/[\da-z]{6}(?:[/?#&]|$)'
_TESTS = [{
'url': 'http://www.paramountnetwork.com/episodes/j830qm/lip-sync-battle-joel-mchale-vs-jim-rash-season-2-ep-13',
'info_dict': {
'id': '37ace3a8-1df6-48be-85b8-38df8229e241',
'ext': 'mp4',
'title': 'Lip Sync Battle|April 28, 2016|2|209|Joel McHale Vs. Jim Rash|Act 1',
'description': 'md5:a739ca8f978a7802f67f8016d27ce114',
},
'params': {
# m3u8 download
'skip_download': True,
},
}]
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
_GEO_COUNTRIES = ['US']
def _get_feed_query(self, uri):
return {
'arcEp': 'paramountnetwork.com',
'imageEp': 'paramountnetwork.com',
'mgid': uri,
}


@@ -1,142 +1,175 @@
import re
import json
from .common import InfoExtractor
from .youtube import YoutubeIE
from ..utils import (
ExtractorError,
clean_html,
extract_attributes,
get_element_by_class,
join_nonempty,
js_to_json,
str_or_none,
url_or_none,
)
from ..utils.traversal import (
find_element,
find_elements,
traverse_obj,
trim_str,
)
class SteamIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://(?:store\.steampowered|steamcommunity)\.com/
(?:agecheck/)?
(?P<urltype>video|app)/ #If the page is only for videos or for a game
(?P<gameID>\d+)/?
(?P<videoID>\d*)(?P<extra>\??) # For urltype == video we sometimes get the videoID
|
https?://(?:www\.)?steamcommunity\.com/sharedfiles/filedetails/\?id=(?P<fileID>[0-9]+)
'''
_VIDEO_PAGE_TEMPLATE = 'http://store.steampowered.com/video/%s/'
_AGECHECK_TEMPLATE = 'http://store.steampowered.com/agecheck/video/%s/?snr=1_agecheck_agecheck__age-gate&ageDay=1&ageMonth=January&ageYear=1970'
_VALID_URL = r'https?://store\.steampowered\.com(?:/agecheck)?/app/(?P<id>\d+)/?(?:[^?/#]+/?)?(?:[?#]|$)'
_TESTS = [{
'url': 'http://store.steampowered.com/video/105600/',
'url': 'https://store.steampowered.com/app/105600',
'playlist': [
{
'md5': '695242613303ffa2a4c44c9374ddc067',
'info_dict': {
'id': '256785003',
'ext': 'mp4',
'title': 'Terraria video 256785003',
'thumbnail': r're:^https://cdn\.[^\.]+\.steamstatic\.com',
},
},
{
'md5': '6a294ee0c4b1f47f5bb76a65e31e3592',
'info_dict': {
'id': '2040428',
'ext': 'mp4',
'title': 'Terraria video 2040428',
'thumbnail': r're:^https://cdn\.[^\.]+\.steamstatic\.com',
},
},
],
'info_dict': {
'id': '105600',
'title': 'Terraria',
},
'params': {
'playlistend': 2,
},
'playlist_mincount': 3,
}, {
'url': 'https://store.steampowered.com/app/271590/Grand_Theft_Auto_V/',
'info_dict': {
'id': '271590',
'title': 'Grand Theft Auto V',
'title': 'Grand Theft Auto V Legacy',
},
'playlist_count': 23,
'playlist_mincount': 26,
}]
def _real_extract(self, url):
m = self._match_valid_url(url)
app_id = self._match_id(url)
file_id = m.group('fileID')
if file_id:
video_url = url
playlist_id = file_id
else:
game_id = m.group('gameID')
playlist_id = game_id
video_url = self._VIDEO_PAGE_TEMPLATE % playlist_id
self._set_cookie('steampowered.com', 'wants_mature_content', '1')
self._set_cookie('steampowered.com', 'birthtime', '944006401')
self._set_cookie('steampowered.com', 'lastagecheckage', '1-0-2000')
webpage = self._download_webpage(video_url, playlist_id)
self._set_cookie('store.steampowered.com', 'wants_mature_content', '1')
self._set_cookie('store.steampowered.com', 'birthtime', '946652401')
self._set_cookie('store.steampowered.com', 'lastagecheckage', '1-January-2000')
webpage = self._download_webpage(url, app_id)
app_name = traverse_obj(webpage, ({find_element(cls='apphub_AppName')}, {clean_html}))
if re.search('<div[^>]+>Please enter your birth date to continue:</div>', webpage) is not None:
video_url = self._AGECHECK_TEMPLATE % playlist_id
self.report_age_confirmation()
webpage = self._download_webpage(video_url, playlist_id)
videos = re.findall(r'(<div[^>]+id=[\'"]highlight_movie_(\d+)[\'"][^>]+>)', webpage)
entries = []
playlist_title = get_element_by_class('apphub_AppName', webpage)
for movie, movie_id in videos:
if not movie:
continue
movie = extract_attributes(movie)
if not movie_id:
continue
entry = {
'id': movie_id,
'title': f'{playlist_title} video {movie_id}',
}
for data_prop in traverse_obj(webpage, (
{find_elements(cls='highlight_player_item highlight_movie', html=True)},
..., {extract_attributes}, 'data-props', {json.loads}, {dict},
)):
formats = []
if movie:
entry['thumbnail'] = movie.get('data-poster')
for quality in ('', '-hd'):
for ext in ('webm', 'mp4'):
video_url = movie.get(f'data-{ext}{quality}-source')
if video_url:
formats.append({
'format_id': ext + quality,
'url': video_url,
})
entry['formats'] = formats
entries.append(entry)
embedded_videos = re.findall(r'(<iframe[^>]+>)', webpage)
for evideos in embedded_videos:
evideos = extract_attributes(evideos).get('src')
video_id = self._search_regex(r'youtube\.com/embed/([0-9A-Za-z_-]{11})', evideos, 'youtube_video_id', default=None)
if video_id:
entries.append({
'_type': 'url_transparent',
'id': video_id,
'url': video_id,
'ie_key': 'Youtube',
})
if not entries:
raise ExtractorError('Could not find any videos')
return self.playlist_result(entries, playlist_id, playlist_title)
if hls_manifest := traverse_obj(data_prop, ('hlsManifest', {url_or_none})):
formats.extend(self._extract_m3u8_formats(
hls_manifest, app_id, 'mp4', m3u8_id='hls', fatal=False))
for dash_manifest in traverse_obj(data_prop, ('dashManifests', ..., {url_or_none})):
formats.extend(self._extract_mpd_formats(
dash_manifest, app_id, mpd_id='dash', fatal=False))
movie_id = traverse_obj(data_prop, ('id', {trim_str(start='highlight_movie_')}))
entries.append({
'id': movie_id,
'title': join_nonempty(app_name, 'video', movie_id, delim=' '),
'formats': formats,
'series': app_name,
'series_id': app_id,
'thumbnail': traverse_obj(data_prop, ('screenshot', {url_or_none})),
})
return self.playlist_result(entries, app_id, app_name)
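
A note on the new approach above: each Steam highlight movie is now described by a JSON blob in the element's data-props attribute instead of per-quality data-* URLs. A rough standalone sketch of that parsing step (the HTML sample, URLs and regex here are illustrative, not taken from Steam or the patch):

    import json
    import re

    sample_div = """<div class="highlight_player_item highlight_movie" data-props='{"id": "highlight_movie_256785003", "hlsManifest": "https://example.invalid/master.m3u8", "dashManifests": [], "screenshot": "https://example.invalid/poster.jpg"}'></div>"""
    props = json.loads(re.search(r"data-props='([^']+)'", sample_div).group(1))
    movie_id = props['id'].removeprefix('highlight_movie_')  # mirrors trim_str(start='highlight_movie_')
    print(movie_id, props['hlsManifest'])  # 256785003 https://example.invalid/master.m3u8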
class SteamCommunityIE(InfoExtractor):
_VALID_URL = r'https?://(?:www\.)?steamcommunity\.com/sharedfiles/filedetails(?:/?\?(?:[^#]+&)?id=|/)(?P<id>\d+)'
_TESTS = [{
'url': 'https://steamcommunity.com/sharedfiles/filedetails/2717708756',
'info_dict': {
'id': '39Sp2mB1Ly8',
'ext': 'mp4',
'title': 'Gmod Stamina System + Customisable HUD',
'age_limit': 0,
'availability': 'public',
'categories': ['Gaming'],
'channel': 'Zworld Gmod',
'channel_follower_count': int,
'channel_id': 'UCER1FWFSdMMiTKBnnEDBPaw',
'channel_url': 'https://www.youtube.com/channel/UCER1FWFSdMMiTKBnnEDBPaw',
'chapters': 'count:3',
'comment_count': int,
'description': 'md5:0ba8d8e550231211fa03fac920e5b0bf',
'duration': 162,
'like_count': int,
'live_status': 'not_live',
'media_type': 'video',
'playable_in_embed': True,
'tags': 'count:20',
'thumbnail': r're:https?://i\.ytimg\.com/vi/.+',
'timestamp': 1641955348,
'upload_date': '20220112',
'uploader': 'Zworld Gmod',
'uploader_id': '@gmod-addons',
'uploader_url': 'https://www.youtube.com/@gmod-addons',
'view_count': int,
},
'add_ie': ['Youtube'],
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://steamcommunity.com/sharedfiles/filedetails/?id=3544291945',
'info_dict': {
'id': '5JZZlsAdsvI',
'ext': 'mp4',
'title': 'Memories',
'age_limit': 0,
'availability': 'public',
'categories': ['Gaming'],
'channel': 'Bombass Team',
'channel_follower_count': int,
'channel_id': 'UCIJgtNyCV53IeSkzg3FWSFA',
'channel_url': 'https://www.youtube.com/channel/UCIJgtNyCV53IeSkzg3FWSFA',
'comment_count': int,
'description': 'md5:1b8a103a5d67a3c48d07c065de7e2c63',
'duration': 83,
'like_count': int,
'live_status': 'not_live',
'media_type': 'video',
'playable_in_embed': True,
'tags': 'count:10',
'thumbnail': r're:https?://i\.ytimg\.com/vi/.+',
'timestamp': 1754427291,
'upload_date': '20250805',
'uploader': 'Bombass Team',
'uploader_id': '@BombassTeam',
'uploader_url': 'https://www.youtube.com/@BombassTeam',
'view_count': int,
},
'add_ie': ['Youtube'],
'params': {'skip_download': 'm3u8'},
}]
def _real_extract(self, url):
file_id = self._match_id(url)
webpage = self._download_webpage(url, file_id)
flashvars = self._search_json(
r'var\s+rgMovieFlashvars\s*=', webpage, 'flashvars',
file_id, default={}, transform_source=js_to_json)
youtube_id = (
traverse_obj(flashvars, (..., 'YOUTUBE_VIDEO_ID', {str}, any))
or traverse_obj(webpage, (
{find_element(cls='movieFrame modal', html=True)}, {extract_attributes}, 'id', {str})))
if not youtube_id:
raise ExtractorError('No video found', expected=True)
return self.url_result(youtube_id, YoutubeIE)
 class SteamCommunityBroadcastIE(InfoExtractor):
-    _VALID_URL = r'https?://steamcommunity\.(?:com)/broadcast/watch/(?P<id>\d+)'
+    _VALID_URL = r'https?://(?:www\.)?steamcommunity\.com/broadcast/watch/(?P<id>\d+)'
     _TESTS = [{
         'url': 'https://steamcommunity.com/broadcast/watch/76561199073851486',
         'info_dict': {
             'id': '76561199073851486',
-            'title': r're:Steam Community :: pepperm!nt :: Broadcast 2022-06-26 \d{2}:\d{2}',
             'ext': 'mp4',
+            'title': str,
             'uploader_id': '1113585758',
             'uploader': 'pepperm!nt',
             'live_status': 'is_live',
         },
-        'skip': 'Stream has ended',
+        'params': {'skip_download': 'Livestream'},
     }]

     def _real_extract(self, url):


@@ -88,7 +88,7 @@ class SVTBaseIE(InfoExtractor):
             'id': video_id,
             'title': title,
             'formats': formats,
-            'subtitles': subtitles,
+            'subtitles': self._fixup_subtitles(subtitles),
             'duration': duration,
             'timestamp': timestamp,
             'age_limit': age_limit,
@@ -99,6 +99,16 @@ class SVTBaseIE(InfoExtractor):
             'is_live': is_live,
         }

+    @staticmethod
+    def _fixup_subtitles(subtitles):
+        # See: https://github.com/yt-dlp/yt-dlp/issues/14020
+        fixed_subtitles = {}
+        for lang, subs in subtitles.items():
+            for sub in subs:
+                fixed_lang = f'{lang}-forced' if 'text-open' in sub['url'] else lang
+                fixed_subtitles.setdefault(fixed_lang, []).append(sub)
+        return fixed_subtitles
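
For context on the `-forced` split above: any subtitle entry whose URL contains 'text-open' is re-keyed under '<lang>-forced', everything else keeps its original language key. A tiny illustration with made-up URLs (not from SVT or the patch):

    subtitles = {'sv': [
        {'url': 'https://example.invalid/subs/text-closed.vtt', 'ext': 'vtt'},
        {'url': 'https://example.invalid/subs/text-open.vtt', 'ext': 'vtt'},
    ]}
    # SVTBaseIE._fixup_subtitles(subtitles) would return:
    # {'sv': [{'url': 'https://example.invalid/subs/text-closed.vtt', 'ext': 'vtt'}],
    #  'sv-forced': [{'url': 'https://example.invalid/subs/text-open.vtt', 'ext': 'vtt'}]}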
 class SVTPlayIE(SVTBaseIE):
     IE_NAME = 'svt:play'
@@ -115,6 +125,26 @@ class SVTPlayIE(SVTBaseIE):
     )
     '''
     _TESTS = [{
+        'url': 'https://www.svtplay.se/video/eXYgwZb/sverige-och-kriget/1-utbrottet',
+        'md5': '2382036fd6f8c994856c323fe51c426e',
+        'info_dict': {
+            'id': 'ePBvGRq',
+            'ext': 'mp4',
+            'title': '1. Utbrottet',
+            'description': 'md5:02291cc3159dbc9aa95d564e77a8a92b',
+            'series': 'Sverige och kriget',
+            'episode': '1. Utbrottet',
+            'timestamp': 1746921600,
+            'upload_date': '20250511',
+            'duration': 3585,
+            'thumbnail': r're:^https?://(?:.*[\.-]jpg|www.svtstatic.se/image/.*)$',
+            'age_limit': 0,
+            'subtitles': {'sv': 'count:3', 'sv-forced': 'count:3'},
+        },
+        'params': {
+            'skip_download': 'm3u8',
+        },
+    }, {
         'url': 'https://www.svtplay.se/video/30479064',
         'md5': '2382036fd6f8c994856c323fe51c426e',
         'info_dict': {


@@ -65,7 +65,7 @@ class TikTokBaseIE(InfoExtractor):
     @functools.cached_property
     def _DEVICE_ID(self):
-        return self._KNOWN_DEVICE_ID or str(random.randint(7250000000000000000, 7351147085025500000))
+        return self._KNOWN_DEVICE_ID or str(random.randint(7250000000000000000, 7325099899999994577))

     @functools.cached_property
     def _API_HOSTNAME(self):
@@ -921,6 +921,8 @@ class TikTokUserIE(TikTokBaseIE):
             'id': 'MS4wLjABAAAAepiJKgwWhulvCpSuUVsp7sgVVsFJbbNaLeQ6OQ0oAJERGDUIXhb2yxxHZedsItgT',
             'title': 'corgibobaa',
         },
+        'expected_warnings': ['TikTok API keeps sending the same page'],
+        'params': {'extractor_retries': 10},
     }, {
         'url': 'https://www.tiktok.com/@6820838815978423302',
         'playlist_mincount': 5,
@@ -928,6 +930,8 @@ class TikTokUserIE(TikTokBaseIE):
             'id': 'MS4wLjABAAAA0tF1nBwQVVMyrGu3CqttkNgM68Do1OXUFuCY0CRQk8fEtSVDj89HqoqvbSTmUP2W',
             'title': '6820838815978423302',
         },
+        'expected_warnings': ['TikTok API keeps sending the same page'],
+        'params': {'extractor_retries': 10},
     }, {
         'url': 'https://www.tiktok.com/@meme',
         'playlist_mincount': 593,
@@ -935,14 +939,17 @@ class TikTokUserIE(TikTokBaseIE):
             'id': 'MS4wLjABAAAAiKfaDWeCsT3IHwY77zqWGtVRIy9v4ws1HbVi7auP1Vx7dJysU_hc5yRiGywojRD6',
             'title': 'meme',
         },
+        'expected_warnings': ['TikTok API keeps sending the same page'],
+        'params': {'extractor_retries': 10},
     }, {
         'url': 'tiktokuser:MS4wLjABAAAAM3R2BtjzVT-uAtstkl2iugMzC6AtnpkojJbjiOdDDrdsTiTR75-8lyWJCY5VvDrZ',
         'playlist_mincount': 31,
         'info_dict': {
             'id': 'MS4wLjABAAAAM3R2BtjzVT-uAtstkl2iugMzC6AtnpkojJbjiOdDDrdsTiTR75-8lyWJCY5VvDrZ',
         },
+        'expected_warnings': ['TikTok API keeps sending the same page'],
+        'params': {'extractor_retries': 10},
     }]

-    _USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0'
     _API_BASE_URL = 'https://www.tiktok.com/api/creator/item_list/'

     def _build_web_query(self, sec_uid, cursor):
@@ -980,15 +987,29 @@ class TikTokUserIE(TikTokBaseIE):
             'webcast_language': 'en',
         }
-    def _entries(self, sec_uid, user_name):
+    def _entries(self, sec_uid, user_name, fail_early=False):
         display_id = user_name or sec_uid
         seen_ids = set()
         cursor = int(time.time() * 1E3)
         for page in itertools.count(1):
-            response = self._download_json(
-                self._API_BASE_URL, display_id, f'Downloading page {page}',
-                query=self._build_web_query(sec_uid, cursor), headers={'User-Agent': self._USER_AGENT})
+            for retry in self.RetryManager():
+                response = self._download_json(
+                    self._API_BASE_URL, display_id, f'Downloading page {page}',
+                    query=self._build_web_query(sec_uid, cursor))
+                # Avoid infinite loop caused by bad device_id
+                # See: https://github.com/yt-dlp/yt-dlp/issues/14031
+                current_batch = sorted(traverse_obj(response, ('itemList', ..., 'id', {str})))
+                if current_batch and current_batch == sorted(seen_ids):
+                    message = 'TikTok API keeps sending the same page'
+                    if self._KNOWN_DEVICE_ID:
+                        raise ExtractorError(
+                            f'{message}. Try again with a different device_id', expected=True)
+                    # The user didn't pass a device_id so we can reset it and retry
+                    del self._DEVICE_ID
+                    retry.error = ExtractorError(
+                        f'{message}. Taking measures to avoid an infinite loop', expected=True)

             for video in traverse_obj(response, ('itemList', lambda _, v: v['id'])):
                 video_id = video['id']
@@ -1008,42 +1029,60 @@ class TikTokUserIE(TikTokBaseIE):
                 cursor = old_cursor - 7 * 86_400_000
             # In case 'hasMorePrevious' is wrong, break if we have gone back before TikTok existed
             if cursor < 1472706000000 or not traverse_obj(response, 'hasMorePrevious'):
-                break
+                return

-    def _get_sec_uid(self, user_url, user_name, msg):
+        # This code path is ideally only reached when one of the following is true:
+        # 1. TikTok profile is private and webpage detection was bypassed due to a tiktokuser:sec_uid URL
+        # 2. TikTok profile is *not* private but all of their videos are private
+        if fail_early and not seen_ids:
+            self.raise_login_required(
+                'This user\'s account is likely either private or all of their videos are private. '
+                'Log into an account that has access')
+
+    def _extract_sec_uid_from_embed(self, user_name):
         webpage = self._download_webpage(
-            user_url, user_name, fatal=False, headers={'User-Agent': 'Mozilla/5.0'},
-            note=f'Downloading {msg} webpage', errnote=f'Unable to download {msg} webpage') or ''
-        return (traverse_obj(self._get_universal_data(webpage, user_name),
-                             ('webapp.user-detail', 'userInfo', 'user', 'secUid', {str}))
-                or traverse_obj(self._get_sigi_state(webpage, user_name),
-                                ('LiveRoom', 'liveRoomUserInfo', 'user', 'secUid', {str}),
-                                ('UserModule', 'users', ..., 'secUid', {str}, any)))
+            f'https://www.tiktok.com/embed/@{user_name}', user_name,
+            'Downloading user embed page', errnote=False, fatal=False)
+        if not webpage:
+            self.report_warning('This user\'s account is either private or has embedding disabled')
+            return None
+        data = traverse_obj(self._search_json(
+            r'<script[^>]+\bid=[\'"]__FRONTITY_CONNECT_STATE__[\'"][^>]*>',
+            webpage, 'data', user_name, default={}),
+            ('source', 'data', f'/embed/@{user_name}', {dict}))
+        for aweme_id in traverse_obj(data, ('videoList', ..., 'id', {str})):
+            webpage_url = self._create_url(user_name, aweme_id)
+            video_data, _ = self._extract_web_data_and_status(webpage_url, aweme_id, fatal=False)
+            sec_uid = self._parse_aweme_video_web(
+                video_data, webpage_url, aweme_id, extract_flat=True).get('channel_id')
+            if sec_uid:
+                return sec_uid
+        return None

     def _real_extract(self, url):
         user_name, sec_uid = self._match_id(url), None
-        if mobj := re.fullmatch(r'MS4wLjABAAAA[\w-]{64}', user_name):
-            user_name, sec_uid = None, mobj.group(0)
+        if re.fullmatch(r'MS4wLjABAAAA[\w-]{64}', user_name):
+            user_name, sec_uid = None, user_name
+            fail_early = True
         else:
-            sec_uid = (self._get_sec_uid(self._UPLOADER_URL_FORMAT % user_name, user_name, 'user')
-                       or self._get_sec_uid(self._UPLOADER_URL_FORMAT % f'{user_name}/live', user_name, 'live'))
-        if not sec_uid:
             webpage = self._download_webpage(
-                f'https://www.tiktok.com/embed/@{user_name}', user_name,
-                note='Downloading user embed page', fatal=False) or ''
-            data = traverse_obj(self._search_json(
-                r'<script[^>]+\bid=[\'"]__FRONTITY_CONNECT_STATE__[\'"][^>]*>',
-                webpage, 'data', user_name, default={}),
-                ('source', 'data', f'/embed/@{user_name}', {dict}))
-            for aweme_id in traverse_obj(data, ('videoList', ..., 'id', {str})):
-                webpage_url = self._create_url(user_name, aweme_id)
-                video_data, _ = self._extract_web_data_and_status(webpage_url, aweme_id, fatal=False)
-                sec_uid = self._parse_aweme_video_web(
-                    video_data, webpage_url, aweme_id, extract_flat=True).get('channel_id')
-                if sec_uid:
-                    break
+                self._UPLOADER_URL_FORMAT % user_name, user_name,
+                'Downloading user webpage', 'Unable to download user webpage',
+                fatal=False, headers={'User-Agent': 'Mozilla/5.0'}) or ''
+            detail = traverse_obj(
+                self._get_universal_data(webpage, user_name), ('webapp.user-detail', {dict})) or {}
+            if detail.get('statusCode') == 10222:
+                self.raise_login_required(
+                    'This user\'s account is private. Log into an account that has access')
+            sec_uid = traverse_obj(detail, ('userInfo', 'user', 'secUid', {str}))
+            if sec_uid:
+                fail_early = not traverse_obj(detail, ('userInfo', 'itemList', ...))
+            else:
+                sec_uid = self._extract_sec_uid_from_embed(user_name)
+                fail_early = False
         if not sec_uid:
             raise ExtractorError(
@@ -1051,7 +1090,7 @@ class TikTokUserIE(TikTokBaseIE):
                 'from a video posted by this user, try using "tiktokuser:channel_id" as the '
                 'input URL (replacing `channel_id` with its actual value)', expected=True)

-        return self.playlist_result(self._entries(sec_uid, user_name), sec_uid, user_name)
+        return self.playlist_result(self._entries(sec_uid, user_name, fail_early), sec_uid, user_name)
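
The loop guard above leans on functools.cached_property: deleting the attribute clears the cached value, so the next access generates a fresh device ID before the retry. A minimal standalone sketch of that pattern (illustrative only, not yt-dlp code):

    import functools
    import random

    class Client:
        @functools.cached_property
        def device_id(self):
            # recomputed on next access once the cached value is deleted
            return str(random.randint(7250000000000000000, 7325099899999994577))

    client = Client()
    stale = client.device_id
    del client.device_id      # same idea as `del self._DEVICE_ID` in the retry path above
    fresh = client.device_id  # a newly generated ID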
class TikTokBaseListIE(TikTokBaseIE): # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor class TikTokBaseListIE(TikTokBaseIE): # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor


@@ -1,37 +0,0 @@
from .mtv import MTVServicesInfoExtractor
# TODO: Remove - Reason not used anymore - Service moved to youtube
class TVLandIE(MTVServicesInfoExtractor):
IE_NAME = 'tvland.com'
_VALID_URL = r'https?://(?:www\.)?tvland\.com/(?:video-clips|(?:full-)?episodes)/(?P<id>[^/?#.]+)'
_FEED_URL = 'http://www.tvland.com/feeds/mrss/'
_TESTS = [{
# Geo-restricted. Without a proxy metadata are still there. With a
# proxy it redirects to http://m.tvland.com/app/
'url': 'https://www.tvland.com/episodes/s04pzf/everybody-loves-raymond-the-dog-season-1-ep-19',
'info_dict': {
'description': 'md5:84928e7a8ad6649371fbf5da5e1ad75a',
'title': 'The Dog',
},
'playlist_mincount': 5,
'skip': '404 Not found',
}, {
'url': 'https://www.tvland.com/video-clips/4n87f2/younger-a-first-look-at-younger-season-6',
'md5': 'e2c6389401cf485df26c79c247b08713',
'info_dict': {
'id': '891f7d3c-5b5b-4753-b879-b7ba1a601757',
'ext': 'mp4',
'title': 'Younger|April 30, 2019|6|NO-EPISODE#|A First Look at Younger Season 6',
'description': 'md5:595ea74578d3a888ae878dfd1c7d4ab2',
'upload_date': '20190430',
'timestamp': 1556658000,
},
'params': {
'skip_download': True,
},
}, {
'url': 'http://www.tvland.com/full-episodes/iu0hz6/younger-a-kiss-is-just-a-kiss-season-3-ep-301',
'only_matching': True,
}]


@@ -1,352 +0,0 @@
import json
import re
from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
int_or_none,
parse_iso8601,
parse_qs,
)
class VevoBaseIE(InfoExtractor):
def _extract_json(self, webpage, video_id):
return self._parse_json(
self._search_regex(
r'window\.__INITIAL_STORE__\s*=\s*({.+?});\s*</script>',
webpage, 'initial store'),
video_id)
class VevoIE(VevoBaseIE):
"""
Accepts urls from vevo.com or in the format 'vevo:{id}'
(currently used by MTVIE and MySpaceIE)
"""
_VALID_URL = r'''(?x)
(?:https?://(?:www\.)?vevo\.com/watch/(?!playlist|genre)(?:[^/]+/(?:[^/]+/)?)?|
https?://cache\.vevo\.com/m/html/embed\.html\?video=|
https?://videoplayer\.vevo\.com/embed/embedded\?videoId=|
https?://embed\.vevo\.com/.*?[?&]isrc=|
https?://tv\.vevo\.com/watch/artist/(?:[^/]+)/|
vevo:)
(?P<id>[^&?#]+)'''
_EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:cache\.)?vevo\.com/.+?)\1']
_TESTS = [{
'url': 'http://www.vevo.com/watch/hurts/somebody-to-die-for/GB1101300280',
'md5': '95ee28ee45e70130e3ab02b0f579ae23',
'info_dict': {
'id': 'GB1101300280',
'ext': 'mp4',
'title': 'Hurts - Somebody to Die For',
'timestamp': 1372057200,
'upload_date': '20130624',
'uploader': 'Hurts',
'track': 'Somebody to Die For',
'artist': 'Hurts',
'genre': 'Pop',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'v3 SMIL format',
'url': 'http://www.vevo.com/watch/cassadee-pope/i-wish-i-could-break-your-heart/USUV71302923',
'md5': 'f6ab09b034f8c22969020b042e5ac7fc',
'info_dict': {
'id': 'USUV71302923',
'ext': 'mp4',
'title': 'Cassadee Pope - I Wish I Could Break Your Heart',
'timestamp': 1392796919,
'upload_date': '20140219',
'uploader': 'Cassadee Pope',
'track': 'I Wish I Could Break Your Heart',
'artist': 'Cassadee Pope',
'genre': 'Country',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'Age-limited video',
'url': 'https://www.vevo.com/watch/justin-timberlake/tunnel-vision-explicit/USRV81300282',
'info_dict': {
'id': 'USRV81300282',
'ext': 'mp4',
'title': 'Justin Timberlake - Tunnel Vision (Explicit)',
'age_limit': 18,
'timestamp': 1372888800,
'upload_date': '20130703',
'uploader': 'Justin Timberlake',
'track': 'Tunnel Vision (Explicit)',
'artist': 'Justin Timberlake',
'genre': 'Pop',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'No video_info',
'url': 'http://www.vevo.com/watch/k-camp-1/Till-I-Die/USUV71503000',
'md5': '8b83cc492d72fc9cf74a02acee7dc1b0',
'info_dict': {
'id': 'USUV71503000',
'ext': 'mp4',
'title': 'K Camp ft. T.I. - Till I Die',
'age_limit': 18,
'timestamp': 1449468000,
'upload_date': '20151207',
'uploader': 'K Camp',
'track': 'Till I Die',
'artist': 'K Camp',
'genre': 'Hip-Hop',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'Featured test',
'url': 'https://www.vevo.com/watch/lemaitre/Wait/USUV71402190',
'md5': 'd28675e5e8805035d949dc5cf161071d',
'info_dict': {
'id': 'USUV71402190',
'ext': 'mp4',
'title': 'Lemaitre ft. LoLo - Wait',
'age_limit': 0,
'timestamp': 1413432000,
'upload_date': '20141016',
'uploader': 'Lemaitre',
'track': 'Wait',
'artist': 'Lemaitre',
'genre': 'Electronic',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'Only available via webpage',
'url': 'http://www.vevo.com/watch/GBUV71600656',
'md5': '67e79210613865b66a47c33baa5e37fe',
'info_dict': {
'id': 'GBUV71600656',
'ext': 'mp4',
'title': 'ABC - Viva Love',
'age_limit': 0,
'timestamp': 1461830400,
'upload_date': '20160428',
'uploader': 'ABC',
'track': 'Viva Love',
'artist': 'ABC',
'genre': 'Pop',
},
'expected_warnings': ['Failed to download video versions info'],
}, {
# no genres available
'url': 'http://www.vevo.com/watch/INS171400764',
'only_matching': True,
}, {
# Another case available only via the webpage; using streams/streamsV3 formats
# Geo-restricted to Netherlands/Germany
'url': 'http://www.vevo.com/watch/boostee/pop-corn-clip-officiel/FR1A91600909',
'only_matching': True,
}, {
'url': 'https://embed.vevo.com/?isrc=USH5V1923499&partnerId=4d61b777-8023-4191-9ede-497ed6c24647&partnerAdCode=',
'only_matching': True,
}, {
'url': 'https://tv.vevo.com/watch/artist/janet-jackson/US0450100550',
'only_matching': True,
}]
_VERSIONS = {
0: 'youtube', # only in AuthenticateVideo videoVersions
1: 'level3',
2: 'akamai',
3: 'level3',
4: 'amazon',
}
def _initialize_api(self, video_id):
webpage = self._download_webpage(
'https://accounts.vevo.com/token', None,
note='Retrieving oauth token',
errnote='Unable to retrieve oauth token',
data=json.dumps({
'client_id': 'SPupX1tvqFEopQ1YS6SS',
'grant_type': 'urn:vevo:params:oauth:grant-type:anonymous',
}).encode(),
headers={
'Content-Type': 'application/json',
})
if re.search(r'(?i)THIS PAGE IS CURRENTLY UNAVAILABLE IN YOUR REGION', webpage):
self.raise_geo_restricted(
f'{self.IE_NAME} said: This page is currently unavailable in your region')
auth_info = self._parse_json(webpage, video_id)
self._api_url_template = self.http_scheme() + '//apiv2.vevo.com/%s?token=' + auth_info['legacy_token']
def _call_api(self, path, *args, **kwargs):
try:
data = self._download_json(self._api_url_template % path, *args, **kwargs)
except ExtractorError as e:
if isinstance(e.cause, HTTPError):
errors = self._parse_json(e.cause.response.read().decode(), None)['errors']
error_message = ', '.join([error['message'] for error in errors])
raise ExtractorError(f'{self.IE_NAME} said: {error_message}', expected=True)
raise
return data
def _real_extract(self, url):
video_id = self._match_id(url)
self._initialize_api(video_id)
video_info = self._call_api(
f'video/{video_id}', video_id, 'Downloading api video info',
'Failed to download video info')
video_versions = self._call_api(
f'video/{video_id}/streams', video_id,
'Downloading video versions info',
'Failed to download video versions info',
fatal=False)
# Some videos are only available via webpage (e.g.
# https://github.com/ytdl-org/youtube-dl/issues/9366)
if not video_versions:
webpage = self._download_webpage(url, video_id)
json_data = self._extract_json(webpage, video_id)
if 'streams' in json_data.get('default', {}):
video_versions = json_data['default']['streams'][video_id][0]
else:
video_versions = [
value
for key, value in json_data['apollo']['data'].items()
if key.startswith(f'{video_id}.streams')]
uploader = None
artist = None
featured_artist = None
artists = video_info.get('artists')
for curr_artist in artists:
if curr_artist.get('role') == 'Featured':
featured_artist = curr_artist['name']
else:
artist = uploader = curr_artist['name']
formats = []
for video_version in video_versions:
version = self._VERSIONS.get(video_version.get('version'), 'generic')
version_url = video_version.get('url')
if not version_url:
continue
if '.ism' in version_url:
continue
elif '.mpd' in version_url:
formats.extend(self._extract_mpd_formats(
version_url, video_id, mpd_id=f'dash-{version}',
note=f'Downloading {version} MPD information',
errnote=f'Failed to download {version} MPD information',
fatal=False))
elif '.m3u8' in version_url:
formats.extend(self._extract_m3u8_formats(
version_url, video_id, 'mp4', 'm3u8_native',
m3u8_id=f'hls-{version}',
note=f'Downloading {version} m3u8 information',
errnote=f'Failed to download {version} m3u8 information',
fatal=False))
else:
m = re.search(r'''(?xi)
_(?P<quality>[a-z0-9]+)
_(?P<width>[0-9]+)x(?P<height>[0-9]+)
_(?P<vcodec>[a-z0-9]+)
_(?P<vbr>[0-9]+)
_(?P<acodec>[a-z0-9]+)
_(?P<abr>[0-9]+)
\.(?P<ext>[a-z0-9]+)''', version_url)
if not m:
continue
formats.append({
'url': version_url,
'format_id': f'http-{version}-{video_version.get("quality") or m.group("quality")}',
'vcodec': m.group('vcodec'),
'acodec': m.group('acodec'),
'vbr': int(m.group('vbr')),
'abr': int(m.group('abr')),
'ext': m.group('ext'),
'width': int(m.group('width')),
'height': int(m.group('height')),
})
track = video_info['title']
if featured_artist:
artist = f'{artist} ft. {featured_artist}'
title = f'{artist} - {track}' if artist else track
genres = video_info.get('genres')
genre = (
genres[0] if genres and isinstance(genres, list)
and isinstance(genres[0], str) else None)
is_explicit = video_info.get('isExplicit')
if is_explicit is True:
age_limit = 18
elif is_explicit is False:
age_limit = 0
else:
age_limit = None
return {
'id': video_id,
'title': title,
'formats': formats,
'thumbnail': video_info.get('imageUrl') or video_info.get('thumbnailUrl'),
'timestamp': parse_iso8601(video_info.get('releaseDate')),
'uploader': uploader,
'duration': int_or_none(video_info.get('duration')),
'view_count': int_or_none(video_info.get('views', {}).get('total')),
'age_limit': age_limit,
'track': track,
'artist': uploader,
'genre': genre,
}
class VevoPlaylistIE(VevoBaseIE):
_VALID_URL = r'https?://(?:www\.)?vevo\.com/watch/(?P<kind>playlist|genre)/(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.vevo.com/watch/genre/rock',
'info_dict': {
'id': 'rock',
'title': 'Rock',
},
'playlist_count': 20,
}, {
'url': 'http://www.vevo.com/watch/genre/rock?index=0',
'only_matching': True,
}]
def _real_extract(self, url):
mobj = self._match_valid_url(url)
playlist_id = mobj.group('id')
playlist_kind = mobj.group('kind')
webpage = self._download_webpage(url, playlist_id)
qs = parse_qs(url)
index = qs.get('index', [None])[0]
if index:
video_id = self._search_regex(
r'<meta[^>]+content=(["\'])vevo://video/(?P<id>.+?)\1[^>]*>',
webpage, 'video id', default=None, group='id')
if video_id:
return self.url_result(f'vevo:{video_id}', VevoIE.ie_key())
playlists = self._extract_json(webpage, playlist_id)['default'][f'{playlist_kind}s']
playlist = (next(iter(playlists.values()))
if playlist_kind == 'playlist' else playlists[playlist_id])
entries = [
self.url_result(f'vevo:{src}', VevoIE.ie_key())
for src in playlist['isrcs']]
return self.playlist_result(
entries, playlist.get('playlistId') or playlist_id,
playlist.get('name'), playlist.get('description'))


@@ -1,33 +1,50 @@
-from .mtv import MTVServicesInfoExtractor
+from .mtv import MTVServicesBaseIE

-# TODO: Remove - Reason: Outdated Site

-class VH1IE(MTVServicesInfoExtractor):
+class VH1IE(MTVServicesBaseIE):
     IE_NAME = 'vh1.com'
-    _FEED_URL = 'http://www.vh1.com/feeds/mrss/'
+    _VALID_URL = r'https?://(?:www\.)?vh1\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
     _TESTS = [{
-        'url': 'https://www.vh1.com/episodes/0aqivv/nick-cannon-presents-wild-n-out-foushee-season-16-ep-12',
-        'info_dict': {
-            'title': 'Fousheé',
-            'description': 'Fousheé joins Team Evolutions fight against Nick and Team Revolution in Baby Daddy, Baby Mama; Kick Em Out the Classroom; Backseat of My Ride and Wildstyle; and Fousheé performs.',
-        },
-        'playlist_mincount': 4,
-        'skip': '404 Not found',
-    }, {
-        # Clip
-        'url': 'https://www.vh1.com/video-clips/e0sja0/nick-cannon-presents-wild-n-out-foushee-clap-for-him',
-        'info_dict': {
-            'id': 'a07563f7-a37b-4e7f-af68-85855c2c7cc3',
-            'ext': 'mp4',
-            'title': 'Fousheé - "clap for him"',
-            'description': 'Singer Fousheé hits the Wild N Out: In the Dark stage with a performance of the tongue-in-cheek track "clap for him" from her 2021 album "time machine."',
-            'upload_date': '20210826',
-        },
-        'params': {
-            # m3u8 download
-            'skip_download': True,
-        },
-    }]
-    _VALID_URL = r'https?://(?:www\.)?vh1\.com/(?:video-clips|episodes)/(?P<id>[^/?#.]+)'
+        'url': 'https://www.vh1.com/episodes/d06ta1/barely-famous-barely-famous-season-1-ep-1',
+        'info_dict': {
+            'id': '4af4cf2c-a854-11e4-9596-0026b9414f30',
+            'ext': 'mp4',
+            'display_id': 'd06ta1',
+            'title': 'Barely Famous',
+            'description': 'md5:6da5c9d88012eba0a80fc731c99b5fed',
+            'channel': 'VH1',
+            'duration': 1280.0,
+            'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
+            'series': 'Barely Famous',
+            'season': 'Season 1',
+            'season_number': 1,
+            'episode': 'Episode 1',
+            'episode_number': 1,
+            'timestamp': 1426680000,
+            'upload_date': '20150318',
+            'release_timestamp': 1426680000,
+            'release_date': '20150318',
+        },
+        'params': {'skip_download': 'm3u8'},
+    }, {
+        'url': 'https://www.vh1.com/video-clips/ryzt2n/love-hip-hop-miami-love-hip-hop-miami-season-5-recap',
+        'info_dict': {
+            'id': '59e62974-4a5c-4417-91c3-5044cb2f4ce2',
+            'ext': 'mp4',
+            'display_id': 'ryzt2n',
+            'title': 'Love & Hip Hop Miami - Season 5 Recap',
+            'description': 'md5:4e49c65d0007bfc8d06db555a6b76ef0',
+            'duration': 792.083,
+            'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
+            'series': 'Love & Hip Hop Miami',
+            'season': 'Season 6',
+            'season_number': 6,
+            'episode': 'Episode 0',
+            'episode_number': 0,
+            'timestamp': 1732597200,
+            'upload_date': '20241126',
+            'release_timestamp': 1732597200,
+            'release_date': '20241126',
+        },
+        'params': {'skip_download': 'm3u8'},
+    }]


@@ -28,7 +28,6 @@ from ..utils import (
     qualities,
     smuggle_url,
     str_or_none,
-    traverse_obj,
     try_call,
     try_get,
     unified_timestamp,
@@ -39,6 +38,7 @@ from ..utils import (
     urlhandle_detect_ext,
     urljoin,
 )
+from ..utils.traversal import require, traverse_obj

 class VimeoBaseInfoExtractor(InfoExtractor):
@@ -117,13 +117,13 @@ class VimeoBaseInfoExtractor(InfoExtractor):
     def _jwt_is_expired(self, token):
         return jwt_decode_hs256(token)['exp'] - time.time() < 120

-    def _fetch_viewer_info(self, display_id=None, fatal=True):
+    def _fetch_viewer_info(self, display_id=None):
         if self._viewer_info and not self._jwt_is_expired(self._viewer_info['jwt']):
             return self._viewer_info

         self._viewer_info = self._download_json(
             'https://vimeo.com/_next/viewer', display_id, 'Downloading web token info',
-            'Failed to download web token info', fatal=fatal, headers={'Accept': 'application/json'})
+            'Failed to download web token info', headers={'Accept': 'application/json'})

         return self._viewer_info
@@ -502,6 +502,43 @@ class VimeoBaseInfoExtractor(InfoExtractor):
             'quality': 1,
         }
@staticmethod
def _get_embed_params(is_embed, referer):
return {
'is_embed': 'true' if is_embed else 'false',
'referrer': urllib.parse.urlparse(referer).hostname if referer and is_embed else '',
}
def _get_album_data_and_hashed_pass(self, album_id, is_embed, referer):
viewer = self._fetch_viewer_info(album_id)
jwt = viewer['jwt']
album = self._download_json(
'https://api.vimeo.com/albums/' + album_id,
album_id, headers={'Authorization': 'jwt ' + jwt, 'Accept': 'application/json'},
query={**self._get_embed_params(is_embed, referer), 'fields': 'description,name,privacy'})
hashed_pass = None
if traverse_obj(album, ('privacy', 'view')) == 'password':
password = self.get_param('videopassword')
if not password:
raise ExtractorError(
'This album is protected by a password, use the --video-password option',
expected=True)
try:
hashed_pass = self._download_json(
f'https://vimeo.com/showcase/{album_id}/auth',
album_id, 'Verifying the password', data=urlencode_postdata({
'password': password,
'token': viewer['xsrft'],
}), headers={
'X-Requested-With': 'XMLHttpRequest',
})['hashed_pass']
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
raise ExtractorError('Wrong password', expected=True)
raise
return album, hashed_pass
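
For reference, the embed parameters built by the helper above come straight from the optional referrer: is_embed toggles the flag and, when set, the referrer is reduced to its hostname. Illustrative values only (not from the patch):

    # _get_embed_params(is_embed=True, referer='https://www.riccardomutimusic.com/page')
    #   -> {'is_embed': 'true', 'referrer': 'www.riccardomutimusic.com'}
    # _get_embed_params(is_embed=False, referer=None)
    #   -> {'is_embed': 'false', 'referrer': ''}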
 class VimeoIE(VimeoBaseInfoExtractor):
     """Information extractor for vimeo.com."""
@@ -1188,42 +1225,6 @@ class VimeoIE(VimeoBaseInfoExtractor):
             info[k + '_count'] = int_or_none(try_get(connections, lambda x: x[k + 's']['total']))
         return info
def _try_album_password(self, url):
album_id = self._search_regex(
r'vimeo\.com/(?:album|showcase)/([^/]+)', url, 'album id', default=None)
if not album_id:
return
viewer = self._fetch_viewer_info(album_id, fatal=False)
if not viewer:
webpage = self._download_webpage(url, album_id)
viewer = self._parse_json(self._search_regex(
r'bootstrap_data\s*=\s*({.+?})</script>',
webpage, 'bootstrap data'), album_id)['viewer']
jwt = viewer['jwt']
album = self._download_json(
'https://api.vimeo.com/albums/' + album_id,
album_id, headers={'Authorization': 'jwt ' + jwt, 'Accept': 'application/json'},
query={'fields': 'description,name,privacy'})
if try_get(album, lambda x: x['privacy']['view']) == 'password':
password = self.get_param('videopassword')
if not password:
raise ExtractorError(
'This album is protected by a password, use the --video-password option',
expected=True)
try:
self._download_json(
f'https://vimeo.com/showcase/{album_id}/auth',
album_id, 'Verifying the password', data=urlencode_postdata({
'password': password,
'token': viewer['xsrft'],
}), headers={
'X-Requested-With': 'XMLHttpRequest',
})
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
raise ExtractorError('Wrong password', expected=True)
raise
     def _real_extract(self, url):
         url, data, headers = self._unsmuggle_headers(url)
         if 'Referer' not in headers:
@@ -1238,8 +1239,14 @@ class VimeoIE(VimeoBaseInfoExtractor):
         if any(p in url for p in ('play_redirect_hls', 'moogaloop.swf')):
             url = 'https://vimeo.com/' + video_id

-        self._try_album_password(url)
-        is_secure = urllib.parse.urlparse(url).scheme == 'https'
+        album_id = self._search_regex(
+            r'vimeo\.com/(?:album|showcase)/([0-9]+)/', url, 'album id', default=None)
+        if album_id:
+            # Detect password-protected showcase video => POST album password => set cookies
+            self._get_album_data_and_hashed_pass(album_id, False, None)
+
+        parsed_url = urllib.parse.urlparse(url)
+        is_secure = parsed_url.scheme == 'https'
         try:
             # Retrieve video webpage to extract further information
             webpage, urlh = self._download_webpage_handle(
@@ -1265,7 +1272,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
                 f'{self._downloader._format_err("compromising your security/cookies", "light red")}, '
                 f'try replacing "https:" with "http:" in the input URL. {dcip_msg}.', expected=True)

-        if '://player.vimeo.com/video/' in url:
+        if parsed_url.hostname == 'player.vimeo.com':
             config = self._search_json(
                 r'\b(?:playerC|c)onfig\s*=', webpage, 'info section', video_id)
             if config.get('view') == 4:
@@ -1293,7 +1300,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
         config_url = None
         channel_id = self._search_regex(
-            r'vimeo\.com/channels/([^/]+)', url, 'channel id', default=None)
+            r'vimeo\.com/channels/([^/?#]+)', url, 'channel id', default=None)
         if channel_id:
             config_url = self._extract_config_url(webpage, default=None)
         video_description = clean_html(get_element_by_class('description', webpage))
@@ -1531,7 +1538,7 @@ class VimeoUserIE(VimeoChannelIE):  # XXX: Do not subclass from concrete IE
 class VimeoAlbumIE(VimeoBaseInfoExtractor):
     IE_NAME = 'vimeo:album'
-    _VALID_URL = r'https://vimeo\.com/(?:album|showcase)/(?P<id>\d+)(?:$|[?#]|/(?!video))'
+    _VALID_URL = r'https://vimeo\.com/(?:album|showcase)/(?P<id>[^/?#]+)(?:$|[?#]|(?P<is_embed>/embed))'
     _TITLE_RE = r'<header id="page_header">\n\s*<h1>(.*?)</h1>'
     _TESTS = [{
         'url': 'https://vimeo.com/album/2632481',
@@ -1549,12 +1556,63 @@ class VimeoAlbumIE(VimeoBaseInfoExtractor):
         },
         'playlist_count': 1,
         'params': {'videopassword': 'youtube-dl'},
    }, {
'note': 'embedded album that requires "referrer" in query (smuggled)',
'url': 'https://vimeo.com/showcase/10677689/embed#__youtubedl_smuggle=%7B%22referer%22%3A+%22https%3A%2F%2Fwww.riccardomutimusic.com%2F%22%7D',
'info_dict': {
'title': 'La Traviata - la serie completa',
'id': '10677689',
},
'playlist': [{
'url': 'https://player.vimeo.com/video/505682113#__youtubedl_smuggle=%7B%22referer%22%3A+%22https%3A%2F%2Fwww.riccardomutimusic.com%2F%22%7D',
'info_dict': {
'id': '505682113',
'ext': 'mp4',
'title': 'La Traviata - Episodio 7',
'uploader': 'RMMusic',
'uploader_id': 'user62556494',
'uploader_url': 'https://vimeo.com/user62556494',
'duration': 3202,
'thumbnail': r're:https?://i\.vimeocdn\.com/video/.+',
},
}],
'params': {
'playlist_items': '1',
'skip_download': 'm3u8',
},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
'note': 'embedded album that requires "referrer" in query (passed as param)',
'url': 'https://vimeo.com/showcase/10677689/embed',
'info_dict': {
'title': 'La Traviata - la serie completa',
'id': '10677689',
},
'playlist_mincount': 9,
'params': {'http_headers': {'Referer': 'https://www.riccardomutimusic.com/'}},
}, {
'url': 'https://vimeo.com/showcase/11803104/embed2',
'info_dict': {
'title': 'Romans Video Ministry',
'id': '11803104',
},
'playlist_mincount': 41,
}, {
'note': 'non-numeric slug, need to fetch numeric album ID',
'url': 'https://vimeo.com/showcase/BethelTally-Homegoing-Services',
'info_dict': {
'title': 'BethelTally Homegoing Services',
'id': '11547429',
'description': 'Bethel Missionary Baptist Church\nTallahassee, FL',
},
'playlist_mincount': 8,
}] }]
     _PAGE_SIZE = 100

-    def _fetch_page(self, album_id, authorization, hashed_pass, page):
+    def _fetch_page(self, album_id, hashed_pass, is_embed, referer, page):
         api_page = page + 1
         query = {
+            **self._get_embed_params(is_embed, referer),
             'fields': 'link,uri',
             'page': api_page,
             'per_page': self._PAGE_SIZE,
@@ -1565,7 +1623,7 @@ class VimeoAlbumIE(VimeoBaseInfoExtractor):
             videos = self._download_json(
                 f'https://api.vimeo.com/albums/{album_id}/videos',
                 album_id, f'Downloading page {api_page}', query=query, headers={
-                    'Authorization': 'jwt ' + authorization,
+                    'Authorization': 'jwt ' + self._fetch_viewer_info(album_id)['jwt'],
                     'Accept': 'application/json',
                 })['data']
         except ExtractorError as e:
@@ -1577,44 +1635,37 @@ class VimeoAlbumIE(VimeoBaseInfoExtractor):
             if not link:
                 continue
             uri = video.get('uri')
-            video_id = self._search_regex(r'/videos/(\d+)', uri, 'video_id', default=None) if uri else None
+            video_id = self._search_regex(r'/videos/(\d+)', uri, 'id', default=None) if uri else None
+            if is_embed:
+                if not video_id:
+                    self.report_warning(f'Skipping due to missing video ID: {link}')
+                    continue
+                link = f'https://player.vimeo.com/video/{video_id}'
+                if referer:
+                    link = self._smuggle_referrer(link, referer)
             yield self.url_result(link, VimeoIE.ie_key(), video_id)

     def _real_extract(self, url):
-        album_id = self._match_id(url)
-        viewer = self._fetch_viewer_info(album_id, fatal=False)
-        if not viewer:
-            webpage = self._download_webpage(url, album_id)
-            viewer = self._parse_json(self._search_regex(
-                r'bootstrap_data\s*=\s*({.+?})</script>',
-                webpage, 'bootstrap data'), album_id)['viewer']
-        jwt = viewer['jwt']
-        album = self._download_json(
-            'https://api.vimeo.com/albums/' + album_id,
-            album_id, headers={'Authorization': 'jwt ' + jwt, 'Accept': 'application/json'},
-            query={'fields': 'description,name,privacy'})
-        hashed_pass = None
-        if try_get(album, lambda x: x['privacy']['view']) == 'password':
-            password = self.get_param('videopassword')
-            if not password:
-                raise ExtractorError(
-                    'This album is protected by a password, use the --video-password option',
-                    expected=True)
-            try:
-                hashed_pass = self._download_json(
-                    f'https://vimeo.com/showcase/{album_id}/auth',
-                    album_id, 'Verifying the password', data=urlencode_postdata({
-                        'password': password,
-                        'token': viewer['xsrft'],
-                    }), headers={
-                        'X-Requested-With': 'XMLHttpRequest',
-                    })['hashed_pass']
-            except ExtractorError as e:
-                if isinstance(e.cause, HTTPError) and e.cause.status == 401:
-                    raise ExtractorError('Wrong password', expected=True)
-                raise
+        url, _, http_headers = self._unsmuggle_headers(url)
+        album_id, is_embed = self._match_valid_url(url).group('id', 'is_embed')
+        referer = http_headers.get('Referer')
+
+        if not re.fullmatch(r'[0-9]+', album_id):
+            auth_info = self._download_json(
+                f'https://vimeo.com/showcase/{album_id}/auth', album_id, 'Downloading album info',
+                headers={'X-Requested-With': 'XMLHttpRequest'}, expected_status=(401, 403))
+            album_id = traverse_obj(auth_info, (
+                'metadata', 'id', {int}, {str_or_none}, {require('album ID')}))
+
+        try:
+            album, hashed_pass = self._get_album_data_and_hashed_pass(album_id, is_embed, referer)
+        except ExtractorError as e:
+            if is_embed and not referer and isinstance(e.cause, HTTPError) and e.cause.status == 403:
+                raise ExtractorError(self._REFERER_HINT, expected=True)
+            raise
+
         entries = OnDemandPagedList(functools.partial(
-            self._fetch_page, album_id, jwt, hashed_pass), self._PAGE_SIZE)
+            self._fetch_page, album_id, hashed_pass, is_embed, referer), self._PAGE_SIZE)
         return self.playlist_result(
             entries, album_id, album.get('name'), album.get('description'))
@@ -1910,10 +1961,10 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
             'like_count': int,
             'duration': 9810,
             'thumbnail': r're:https?://i\.vimeocdn\.com/video/.+',
-            'timestamp': 1747502974,
-            'upload_date': '20250517',
-            'release_timestamp': 1747502998,
-            'release_date': '20250517',
+            'timestamp': 1746627359,
+            'upload_date': '20250507',
+            'release_timestamp': 1746627359,
+            'release_date': '20250507',
             'live_status': 'was_live',
         },
         'params': {'skip_download': 'm3u8'},
@@ -1970,7 +2021,7 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
         # "24/7" livestream
         'url': 'https://vimeo.com/event/4768062',
         'info_dict': {
-            'id': '1097650937',
+            'id': '1108792268',
             'ext': 'mp4',
             'display_id': '4768062',
             'title': r're:GRACELAND CAM \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
@@ -1978,8 +2029,8 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
             'uploader': 'Elvis Presley\'s Graceland',
             'uploader_id': 'visitgraceland',
             'uploader_url': 'https://vimeo.com/visitgraceland',
-            'release_timestamp': 1751396691,
-            'release_date': '20250701',
+            'release_timestamp': 1754812109,
+            'release_date': '20250810',
             'live_status': 'is_live',
         },
         'params': {'skip_download': 'livestream'},
@@ -2023,10 +2074,10 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
             'like_count': int,
             'duration': 4466,
             'thumbnail': r're:https?://i\.vimeocdn\.com/video/.+',
-            'timestamp': 1612228466,
-            'upload_date': '20210202',
-            'release_timestamp': 1612228538,
-            'release_date': '20210202',
+            'timestamp': 1610059450,
+            'upload_date': '20210107',
+            'release_timestamp': 1610059450,
+            'release_date': '20210107',
             'live_status': 'was_live',
         },
         'params': {'skip_download': 'm3u8'},
@@ -2214,7 +2265,7 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
             video_filter = lambda _, v: self._extract_video_id_and_unlisted_hash(v)[0] == video_id
         else:
-            video_filter = lambda _, v: v['live']['status'] in ('streaming', 'done')
+            video_filter = lambda _, v: traverse_obj(v, ('live', 'status')) != 'unavailable'

         for page in itertools.count(1):
             videos_data = self._call_events_api(
@@ -2226,15 +2277,18 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
             if video or not traverse_obj(videos_data, ('paging', 'next', {str})):
                 break

+        if not video:  # requested video_id is unavailable or no videos are available
+            raise ExtractorError('This event video is unavailable', expected=True)
+
         live_status = {
             'streaming': 'is_live',
             'done': 'was_live',
+            None: 'was_live',
         }.get(traverse_obj(video, ('live', 'status', {str})))

-        if not live_status:  # requested video_id is unavailable or no videos are available
-            raise ExtractorError('This event video is unavailable', expected=True)
-        elif live_status == 'was_live':
+        if live_status == 'was_live':
             return self._vimeo_url_result(*self._extract_video_id_and_unlisted_hash(video), event_id)

         config_url = video['config_url']
         if config_url:  # view_policy == 'embed_only' or live_status == 'is_live'


@@ -5,7 +5,6 @@ import re
 from .common import InfoExtractor
 from .dailymotion import DailymotionIE
 from .odnoklassniki import OdnoklassnikiIE
-from .pladform import PladformIE
 from .sibnet import SibnetEmbedIE
 from .vimeo import VimeoIE
 from .youtube import YoutubeIE
@@ -334,11 +333,6 @@ class VKIE(VKBaseIE):
         'url': 'https://vk.com/video205387401_164765225',
         'only_matching': True,
     },
-    {
-        # pladform embed
-        'url': 'https://vk.com/video-76116461_171554880',
-        'only_matching': True,
-    },
     {
         'url': 'http://new.vk.com/video205387401_165548505',
         'only_matching': True,
@@ -456,10 +450,6 @@ class VKIE(VKBaseIE):
         if vimeo_url is not None:
             return self.url_result(vimeo_url, VimeoIE.ie_key())

-        pladform_url = PladformIE._extract_url(info_page)
-        if pladform_url:
-            return self.url_result(pladform_url, PladformIE.ie_key())
-
         m_rutube = re.search(
             r'\ssrc="((?:https?:)?//rutube\.ru\\?/(?:video|play)\\?/embed(?:.*?))\\?"', info_page)
         if m_rutube is not None:


@@ -14,7 +14,7 @@ from ..utils import (
     get_element_html_by_class,
     int_or_none,
     jwt_decode_hs256,
-    jwt_encode_hs256,
+    jwt_encode,
     make_archive_id,
     merge_dicts,
     parse_age_limit,
@@ -98,9 +98,9 @@ class VRTBaseIE(InfoExtractor):
                 'Content-Type': 'application/json',
             }, data=json.dumps({
                 'identityToken': id_token or '',
-                'playerInfo': jwt_encode_hs256(player_info, self._JWT_SIGNING_KEY, headers={
+                'playerInfo': jwt_encode(player_info, self._JWT_SIGNING_KEY, headers={
                     'kid': self._JWT_KEY_ID,
-                }).decode(),
+                }),
             }, separators=(',', ':')).encode())['vrtPlayerToken']

         return self._download_json(
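
The dropped .decode() above suggests the renamed helper now returns a str rather than bytes. For illustration only, a generic HS256 JWT encoder along those lines (a standalone sketch, not yt-dlp's jwt_encode implementation):

    import base64
    import hashlib
    import hmac
    import json

    def b64url(data):
        # unpadded base64url, as used in JWTs
        return base64.urlsafe_b64encode(data).decode().rstrip('=')

    def jwt_encode_sketch(payload, key, headers=None):
        header = {'alg': 'HS256', 'typ': 'JWT', **(headers or {})}
        signing_input = f'{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}'
        signature = hmac.new(key.encode(), signing_input.encode(), hashlib.sha256).digest()
        return f'{signing_input}.{b64url(signature)}'  # already a str, no .decode() needed

    # e.g. jwt_encode_sketch({'foo': 'bar'}, 'secret', headers={'kid': 'some-key-id'})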


@@ -1,60 +1,42 @@
-from .common import InfoExtractor
-from ..utils import (
-    int_or_none,
-    parse_iso8601,
-    try_get,
-)
+from .medialaan import MedialaanBaseIE
+from ..utils import str_or_none
+from ..utils.traversal import require, traverse_obj

-class VTMIE(InfoExtractor):
-    _WORKING = False
-    _VALID_URL = r'https?://(?:www\.)?vtm\.be/([^/?&#]+)~v(?P<id>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})'
-    _TEST = {
+class VTMIE(MedialaanBaseIE):
+    _VALID_URL = r'https?://(?:www\.)?vtm\.be/[^/?#]+~v(?P<id>[\da-f]{8}(?:-[\da-f]{4}){3}-[\da-f]{12})'
+    _TESTS = [{
         'url': 'https://vtm.be/gast-vernielt-genkse-hotelkamer~ve7534523-279f-4b4d-a5c9-a33ffdbe23e1',
-        'md5': '37dca85fbc3a33f2de28ceb834b071f8',
         'info_dict': {
             'id': '192445',
             'ext': 'mp4',
             'title': 'Gast vernielt Genkse hotelkamer',
-            'timestamp': 1611060180,
-            'upload_date': '20210119',
+            'channel': 'VTM',
+            'channel_id': '867',
'description': 'md5:75fce957d219646ff1b65ba449ab97b5',
'duration': 74, 'duration': 74,
# TODO: fix url _type result processing 'genres': ['Documentaries'],
# 'series': 'Op Interventie', 'release_date': '20210119',
'release_timestamp': 1611060180,
'series': 'Op Interventie',
'series_id': '2658',
'tags': 'count:2',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'VTM',
'uploader_id': '74',
}, },
} }]
-    def _real_initialize(self):
-        if not self._get_cookies('https://vtm.be/').get('authId'):
-            self.raise_login_required()
-
     def _real_extract(self, url):
-        uuid = self._match_id(url)
-        video = self._download_json(
-            'https://omc4vm23offuhaxx6hekxtzspi.appsync-api.eu-west-1.amazonaws.com/graphql',
-            uuid, query={
-                'query': '''{
-  getComponent(type: Video, uuid: "%s") {
-    ... on Video {
-      description
-      duration
-      myChannelsVideo
-      program {
-        title
-      }
-      publishedAt
-      title
-    }
-  }
-}''' % uuid,  # noqa: UP031
-            }, headers={
-                'x-api-key': 'da2-lz2cab4tfnah3mve6wiye4n77e',
-            })['data']['getComponent']
-
-        return {
-            '_type': 'url',
-            'id': uuid,
-            'title': video.get('title'),
-            'url': 'http://mychannels.video/embed/%d' % video['myChannelsVideo'],
-            'description': video.get('description'),
-            'timestamp': parse_iso8601(video.get('publishedAt')),
-            'duration': int_or_none(video.get('duration')),
-            'series': try_get(video, lambda x: x['program']['title']),
-            'ie_key': 'Medialaan',
-        }
+        video_id = self._match_id(url)
+        webpage = self._download_webpage(url, video_id)
+        apollo_state = self._search_json(
+            r'window\.__APOLLO_STATE__\s*=', webpage, 'apollo state', video_id)
+        mychannels_id = traverse_obj(apollo_state, (
+            f'Video:{{"uuid":"{video_id}"}}', 'myChannelsVideo', {str_or_none}, {require('mychannels ID')}))
+
+        return self._extract_from_mychannels_api(mychannels_id)


@@ -8,6 +8,7 @@ from ..utils import (
     int_or_none,
     make_archive_id,
     mimetype2ext,
+    parse_qs,
     parse_resolution,
     str_or_none,
     strip_jsonp,
@@ -52,13 +53,16 @@ class WeiboBaseIE(InfoExtractor):
             '_rand': random.random(),
         })

-    def _weibo_download_json(self, url, video_id, *args, fatal=True, note='Downloading JSON metadata', **kwargs):
-        # XXX: Always fatal; _download_webpage_handle only returns False (not a tuple) on error
-        webpage, urlh = self._download_webpage_handle(url, video_id, *args, fatal=fatal, note=note, **kwargs)
+    def _weibo_download_json(self, url, video_id, note='Downloading JSON metadata', data=None, headers=None, query=None):
+        headers = {
+            'Referer': 'https://weibo.com/',
+            **(headers or {}),
+        }
+        webpage, urlh = self._download_webpage_handle(url, video_id, note=note, data=data, headers=headers, query=query)
         if urllib.parse.urlparse(urlh.url).netloc == 'passport.weibo.com':
             self._update_visitor_cookies(urlh.url, video_id)
-            webpage = self._download_webpage(url, video_id, *args, fatal=fatal, note=note, **kwargs)
-        return self._parse_json(webpage, video_id, fatal=fatal)
+            webpage = self._download_webpage(url, video_id, note=note, data=data, headers=headers, query=query)
+        return self._parse_json(webpage, video_id)
     def _extract_formats(self, video_info):
         media_info = traverse_obj(video_info, ('page_info', 'media_info'))
@@ -189,7 +193,8 @@ class WeiboIE(WeiboBaseIE):
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        meta = self._weibo_download_json(f'https://weibo.com/ajax/statuses/show?id={video_id}', video_id)
+        meta = self._weibo_download_json(
+            'https://weibo.com/ajax/statuses/show', video_id, query={'id': video_id})
         mix_media_info = traverse_obj(meta, ('mix_media_info', 'items', ...))
         if not mix_media_info:
             return self._parse_video_info(meta)
@@ -205,7 +210,11 @@ class WeiboIE(WeiboBaseIE):

 class WeiboVideoIE(WeiboBaseIE):
-    _VALID_URL = r'https?://(?:www\.)?weibo\.com/tv/show/(?P<id>\d+:\d+)'
+    _VIDEO_ID_RE = r'\d+:(?:[\da-f]{32}|\d{16,})'
+    _VALID_URL = [
+        fr'https?://(?:www\.)?weibo\.com/tv/show/(?P<id>{_VIDEO_ID_RE})',
+        fr'https?://video\.weibo\.com/show/?\?(?:[^#]+&)?fid=(?P<id>{_VIDEO_ID_RE})',
+    ]
     _TESTS = [{
         'url': 'https://weibo.com/tv/show/1034:4797699866951785?from=old_pc_videoshow',
         'info_dict': {
@@ -227,6 +236,49 @@ class WeiboVideoIE(WeiboBaseIE):
             'repost_count': int,
             '_old_archive_ids': ['weibomobile 4797700463137878'],
         },
}, {
'url': 'https://weibo.com/tv/show/1034:633c288cc043d0ca7808030f1157da64',
'info_dict': {
'id': '4189191225395228',
'ext': 'mp4',
'display_id': 'FBqgOmDxO',
'title': '柴犬柴犬的秒拍视频',
'alt_title': '柴犬柴犬的秒拍视频',
'description': '午睡当然是要甜甜蜜蜜的啦![坏笑] Instagramshibainu.gaku http://t.cn/RHbmjzW \u200B\u200B\u200B',
'uploader': '柴犬柴犬',
'uploader_id': '5926682210',
'uploader_url': 'https://weibo.com/u/5926682210',
'view_count': int,
'like_count': int,
'repost_count': int,
'duration': 53,
'thumbnail': 'https://wx1.sinaimg.cn/large/006t5KMygy1fmu31fsqbej30hs0hstav.jpg',
'timestamp': 1514264429,
'upload_date': '20171226',
'_old_archive_ids': ['weibomobile 4189191225395228'],
},
}, {
'url': 'https://video.weibo.com/show?fid=1034:4967272104787984',
'info_dict': {
'id': '4967273022359838',
'ext': 'mp4',
'display_id': 'Nse4S9TTU',
'title': '#张婧仪[超话]#📸#婧仪的相册集#  早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'alt_title': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'description': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手] http://t.cn/A6WTpbEu \u200B\u200B\u200B',
'uploader': '张婧仪工作室',
'uploader_id': '7610808848',
'uploader_url': 'https://weibo.com/u/7610808848',
'view_count': int,
'like_count': int,
'repost_count': int,
'duration': 85,
'thumbnail': 'https://wx2.sinaimg.cn/orj480/008j4b3qly1hjsce01gnqj30u00gvwf8.jpg',
'tags': ['婧仪的相册集'],
'timestamp': 1699773545,
'upload_date': '20231112',
'_old_archive_ids': ['weibomobile 4967273022359838'],
},
     }]

     def _real_extract(self, url):
@@ -234,8 +286,8 @@ class WeiboVideoIE(WeiboBaseIE):
         post_data = f'data={{"Component_Play_Playinfo":{{"oid":"{video_id}"}}}}'.encode()
         video_info = self._weibo_download_json(
-            f'https://weibo.com/tv/api/component?page=%2Ftv%2Fshow%2F{video_id.replace(":", "%3A")}',
-            video_id, headers={'Referer': url}, data=post_data)['data']['Component_Play_Playinfo']
+            'https://weibo.com/tv/api/component', video_id, data=post_data, headers={'Referer': url},
+            query={'page': f'/tv/show/{video_id}'})['data']['Component_Play_Playinfo']
         return self.url_result(f'https://weibo.com/0/{video_info["mid"]}', WeiboIE)
@@ -250,6 +302,38 @@ class WeiboUserIE(WeiboBaseIE):
'uploader': '萧影殿下', 'uploader': '萧影殿下',
}, },
'playlist_mincount': 195, 'playlist_mincount': 195,
}, {
'url': 'https://weibo.com/u/7610808848?tabtype=newVideo&layerid=4967273022359838',
'info_dict': {
'id': '7610808848',
'title': '张婧仪工作室的视频',
'description': '张婧仪工作室的全部视频',
'uploader': '张婧仪工作室',
},
'playlist_mincount': 61,
}, {
'url': 'https://weibo.com/u/7610808848?tabtype=newVideo&layerid=4967273022359838',
'info_dict': {
'id': '4967273022359838',
'ext': 'mp4',
'display_id': 'Nse4S9TTU',
'title': '#张婧仪[超话]#📸#婧仪的相册集#  早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'alt_title': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'description': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手] http://t.cn/A6WTpbEu \u200B\u200B\u200B',
'uploader': '张婧仪工作室',
'uploader_id': '7610808848',
'uploader_url': 'https://weibo.com/u/7610808848',
'view_count': int,
'like_count': int,
'repost_count': int,
'duration': 85,
'thumbnail': 'https://wx2.sinaimg.cn/orj480/008j4b3qly1hjsce01gnqj30u00gvwf8.jpg',
'tags': ['婧仪的相册集'],
'timestamp': 1699773545,
'upload_date': '20231112',
'_old_archive_ids': ['weibomobile 4967273022359838'],
},
'params': {'noplaylist': True},
}]
def _fetch_page(self, uid, cursor=0, page=1):
@@ -270,6 +354,11 @@ class WeiboUserIE(WeiboBaseIE):
def _real_extract(self, url):
uid = self._match_id(url)
+ params = {k: v[-1] for k, v in parse_qs(url).items()}
+ video_id = params.get('layerid') if params.get('tabtype') == 'newVideo' else None
+ if not self._yes_playlist(uid, video_id):
+     return self.url_result(f'https://weibo.com/{uid}/{video_id}', WeiboIE, video_id)
first_page = self._fetch_page(uid)
uploader = traverse_obj(first_page, ('list', ..., 'user', 'screen_name', {str}), get_all=False)
metainfo = {


@@ -105,7 +105,6 @@ INNERTUBE_CLIENTS = {
'INNERTUBE_CONTEXT_CLIENT_NAME': 1,
'SUPPORTS_COOKIES': True,
**WEB_PO_TOKEN_POLICIES,
- 'PLAYER_PARAMS': '8AEB2AMB',
},
# Safari UA returns pre-merged video+audio 144p/240p/360p/720p/1080p HLS formats
'web_safari': {
@@ -119,7 +118,6 @@ INNERTUBE_CLIENTS = {
'INNERTUBE_CONTEXT_CLIENT_NAME': 1,
'SUPPORTS_COOKIES': True,
**WEB_PO_TOKEN_POLICIES,
- 'PLAYER_PARAMS': '8AEB2AMB',
},
'web_embedded': {
'INNERTUBE_CONTEXT': {
@@ -261,9 +259,8 @@ INNERTUBE_CLIENTS = {
not_required_with_player_token=True,
),
# HLS Livestreams require POT 30 seconds in
- # TODO: Rolling out
StreamingProtocol.HLS: GvsPoTokenPolicy(
- required=False,
+ required=True,
recommended=True,
not_required_with_player_token=True,
),
@@ -282,7 +279,6 @@ INNERTUBE_CLIENTS = {
'userAgent': 'Mozilla/5.0 (iPad; CPU OS 16_7_10 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1,gzip(gfe)',
},
},
- 'PLAYER_PARAMS': '8AEB2AMB',
'INNERTUBE_CONTEXT_CLIENT_NAME': 2,
'GVS_PO_TOKEN_POLICY': {
StreamingProtocol.HTTPS: GvsPoTokenPolicy(
@@ -314,7 +310,8 @@ INNERTUBE_CLIENTS = {
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 7,
'SUPPORTS_COOKIES': True,
- 'PLAYER_PARAMS': '8AEB2AMB',
+ # See: https://github.com/youtube/cobalt/blob/main/cobalt/browser/user_agent/user_agent_platform_info.cc#L506
+ 'AUTHENTICATED_USER_AGENT': 'Mozilla/5.0 (ChromiumStylePlatform) Cobalt/25.lts.30.1034943-gold (unlike Gecko), Unknown_TV_Unknown_0/Unknown (Unknown, Unknown)',
},
'tv_simply': {
'INNERTUBE_CONTEXT': {
@@ -373,6 +370,7 @@ def build_innertube_clients():
ytcfg.setdefault('REQUIRE_AUTH', False)
ytcfg.setdefault('SUPPORTS_COOKIES', False)
ytcfg.setdefault('PLAYER_PARAMS', None)
+ ytcfg.setdefault('AUTHENTICATED_USER_AGENT', None)
ytcfg['INNERTUBE_CONTEXT']['client'].setdefault('hl', 'en')
_, base_client, variant = _split_innertube_client(client)
@@ -660,7 +658,14 @@ class YoutubeBaseInfoExtractor(InfoExtractor):
_YT_INITIAL_PLAYER_RESPONSE_RE = r'ytInitialPlayerResponse\s*='
def _get_default_ytcfg(self, client='web'):
- return copy.deepcopy(INNERTUBE_CLIENTS[client])
+ ytcfg = copy.deepcopy(INNERTUBE_CLIENTS[client])
+ # Currently, only the tv client needs to use an alternative user-agent when logged-in
+ if ytcfg.get('AUTHENTICATED_USER_AGENT') and self.is_authenticated:
+     client_context = ytcfg.setdefault('INNERTUBE_CONTEXT', {}).setdefault('client', {})
+     client_context['userAgent'] = ytcfg['AUTHENTICATED_USER_AGENT']
+ return ytcfg
def _get_innertube_host(self, client='web'):
return INNERTUBE_CLIENTS[client]['INNERTUBE_HOST']
@@ -955,7 +960,17 @@ class YoutubeBaseInfoExtractor(InfoExtractor):
headers=traverse_obj(self._get_default_ytcfg(client), {
'User-Agent': ('INNERTUBE_CONTEXT', 'client', 'userAgent', {str}),
}))
- return self.extract_ytcfg(video_id, webpage) or {}
+ ytcfg = self.extract_ytcfg(video_id, webpage) or {}
+ # Workaround for https://github.com/yt-dlp/yt-dlp/issues/12563
+ # But it's not effective when logged-in
+ if client == 'tv' and not self.is_authenticated:
+     config_info = traverse_obj(ytcfg, (
+         'INNERTUBE_CONTEXT', 'client', 'configInfo', {dict})) or {}
+     config_info.pop('appInstallData', None)
+ return ytcfg
@staticmethod
def _build_api_continuation_query(continuation, ctp=None):
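The appInstallData workaround above boils down to dropping one nested key from the cached tv-client config before it is reused. A plain-dict sketch of that step; the nested structure and placeholder value are assumptions, only the nesting mirrors the code above:

```python
# Hypothetical ytcfg excerpt (real data comes from YouTube's ytcfg).
ytcfg = {
    'INNERTUBE_CONTEXT': {
        'client': {
            'clientName': 'TVHTML5',
            'configInfo': {'appInstallData': '<opaque blob>'},
        },
    },
}

config_info = ytcfg.get('INNERTUBE_CONTEXT', {}).get('client', {}).get('configInfo') or {}
config_info.pop('appInstallData', None)  # mutates the nested dict in place
print(ytcfg['INNERTUBE_CONTEXT']['client']['configInfo'])  # -> {}
```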


@@ -566,6 +566,7 @@ class YoutubeTabBaseInfoExtractor(YoutubeBaseInfoExtractor):
'gridContinuation': (self._grid_entries, None),
'itemSectionContinuation': (self._post_thread_continuation_entries, None),
'sectionListContinuation': (extract_entries, None), # for feeds
+ 'lockupViewModel': (self._grid_entries, 'items'), # for playlists tab
}
continuation_items = traverse_obj(response, (
@@ -1026,7 +1027,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'info_dict': {
'id': 'UCqj7Cz7revf5maW9g5pgNcg',
'title': 'Igor Kleiner - Playlists',
- 'description': 'md5:15d7dd9e333cb987907fcb0d604b233a',
+ 'description': r're:(?s)Добро пожаловать на мой канал! Здесь вы найдете видео .{504}/a1/50b/10a$',
'uploader': 'Igor Kleiner ',
'uploader_id': '@IgorDataScience',
'uploader_url': 'https://www.youtube.com/@IgorDataScience',
@@ -1043,7 +1044,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'info_dict': {
'id': 'UCqj7Cz7revf5maW9g5pgNcg',
'title': 'Igor Kleiner - Playlists',
- 'description': 'md5:15d7dd9e333cb987907fcb0d604b233a',
+ 'description': r're:(?s)Добро пожаловать на мой канал! Здесь вы найдете видео .{504}/a1/50b/10a$',
'uploader': 'Igor Kleiner ',
'uploader_id': '@IgorDataScience',
'uploader_url': 'https://www.youtube.com/@IgorDataScience',
@@ -1093,7 +1094,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'url': 'https://www.youtube.com/c/ChristophLaimer/playlists',
'only_matching': True,
}, {
- # TODO: fix availability extraction
+ # TODO: fix availability and view_count extraction
'note': 'basic, single video playlist',
'url': 'https://www.youtube.com/playlist?list=PLt5yu3-wZAlSLRHmI1qNm0wjyVNWw1pCU',
'info_dict': {
@@ -1215,6 +1216,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'uploader': 'lex will',
},
'playlist_mincount': 18,
+ 'skip': 'This Community isn\'t available',
}, {
# TODO: fix channel_is_verified extraction
'note': 'Search tab',
@@ -1399,7 +1401,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
}, {
'url': 'https://www.youtube.com/channel/UCoMdktPbSTixAyNGwb-UYkQ/live',
'info_dict': {
- 'id': 'YDvsBbKfLPA', # This will keep changing
+ 'id': 'VFGoUmo74wE', # This will keep changing
'ext': 'mp4',
'title': str,
'upload_date': r're:\d{8}',
@@ -1578,6 +1580,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'playlist_count': 50,
'expected_warnings': ['YouTube Music is not directly supported'],
}, {
+ # TODO: fix test suite, 208163447408c78673b08c172beafe5c310fb167 broke this test
'note': 'unlisted single video playlist',
'url': 'https://www.youtube.com/playlist?list=PLt5yu3-wZAlQLfIN0MMgp0wVV6MP3bM4_',
'info_dict': {
@@ -1597,19 +1600,19 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
},
'playlist': [{
'info_dict': {
- 'title': 'youtube-dl test video "\'/\\ä↭𝕐',
+ 'title': 'Big Buck Bunny 60fps 4K - Official Blender Foundation Short Film',
- 'id': 'BaW_jenozKc',
+ 'id': 'aqz-KE-bpKQ',
'_type': 'url',
'ie_key': 'Youtube',
- 'duration': 10,
+ 'duration': 635,
- 'channel_id': 'UCLqxVugv74EIW3VWh2NOa3Q',
+ 'channel_id': 'UCSMOQeBJ2RAnuFungnQOxLg',
- 'channel_url': 'https://www.youtube.com/channel/UCLqxVugv74EIW3VWh2NOa3Q',
+ 'channel_url': 'https://www.youtube.com/channel/UCSMOQeBJ2RAnuFungnQOxLg',
'view_count': int,
- 'url': 'https://www.youtube.com/watch?v=BaW_jenozKc',
+ 'url': 'https://www.youtube.com/watch?v=aqz-KE-bpKQ',
- 'channel': 'Philipp Hagemeister',
+ 'channel': 'Blender',
- 'uploader_id': '@PhilippHagemeister',
+ 'uploader_id': '@BlenderOfficial',
- 'uploader_url': 'https://www.youtube.com/@PhilippHagemeister',
+ 'uploader_url': 'https://www.youtube.com/@BlenderOfficial',
- 'uploader': 'Philipp Hagemeister',
+ 'uploader': 'Blender',
},
}],
'playlist_count': 1,
@@ -1675,7 +1678,6 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'url': 'https://www.youtube.com/channel/UCwVVpHQ2Cs9iGJfpdFngePQ',
'only_matching': True,
}, {
- # TODO: fix metadata extraction
'note': 'collaborative playlist (uploader name in the form "by <uploader> and x other(s)")',
'url': 'https://www.youtube.com/playlist?list=PLx-_-Kk4c89oOHEDQAojOXzEzemXxoqx6',
'info_dict': {
@@ -1694,6 +1696,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'uploader': 'pukkandan',
},
'playlist_mincount': 2,
+ 'skip': 'https://github.com/yt-dlp/yt-dlp/issues/13690',
}, {
'note': 'translated tab name',
'url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA/playlists',
@@ -1801,7 +1804,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'title': 'Not Just Bikes - Shorts',
'tags': 'count:10',
'channel_url': 'https://www.youtube.com/channel/UC0intLFzLaudFG-xAvUEO-A',
- 'description': 'md5:1d9fc1bad7f13a487299d1fe1712e031',
+ 'description': 'md5:295758591d0d43d8594277be54584da7',
'channel_follower_count': int,
'channel_id': 'UC0intLFzLaudFG-xAvUEO-A',
'channel': 'Not Just Bikes',
@@ -1822,7 +1825,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'channel_url': 'https://www.youtube.com/channel/UC3eYAvjCVwNHgkaGbXX3sig',
'channel': '中村悠一',
'channel_follower_count': int,
- 'description': 'md5:e8fd705073a594f27d6d6d020da560dc',
+ 'description': 'md5:76b312b48a26c3b0e4d90e2dfc1b417d',
'uploader_url': 'https://www.youtube.com/@Yuichi-Nakamura',
'uploader_id': '@Yuichi-Nakamura',
'uploader': '中村悠一',
@@ -1865,12 +1868,13 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'tags': [],
},
'playlist_mincount': 30,
+ 'skip': 'The channel/playlist does not exist and the URL redirected to youtube.com home page',
}, {
# Trending Gaming Tab. tab id is empty
'url': 'https://www.youtube.com/feed/trending?bp=4gIcGhpnYW1pbmdfY29ycHVzX21vc3RfcG9wdWxhcg%3D%3D',
'info_dict': {
'id': 'trending',
- 'title': 'trending - Gaming',
+ 'title': 'trending',
'tags': [],
},
'playlist_mincount': 30,
@@ -2018,7 +2022,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'uploader': 'A Himitsu',
'channel_id': 'UCgFwu-j5-xNJml2FtTrrB3A',
'tags': 'count:12',
- 'description': 'I make music',
+ 'description': 'Music producer, sometimes.',
'channel_url': 'https://www.youtube.com/channel/UCgFwu-j5-xNJml2FtTrrB3A',
'channel_follower_count': int,
'channel_is_verified': True,
@@ -2304,19 +2308,20 @@ class YoutubePlaylistIE(YoutubeBaseInfoExtractor):
)
IE_NAME = 'youtube:playlist'
_TESTS = [{
+ # TODO: fix availability extraction
'note': 'issue #673',
'url': 'PLBB231211A4F62143',
'info_dict': {
- 'title': '[OLD]Team Fortress 2 (Class-based LP)',
+ 'title': 'Team Fortress 2 [2010 Version]',
'id': 'PLBB231211A4F62143',
- 'uploader': 'Wickman',
+ 'uploader': 'Wickman Wish',
- 'uploader_id': '@WickmanVT',
+ 'uploader_id': '@WickmanWish',
'description': 'md5:8fa6f52abb47a9552002fa3ddfc57fc2',
'view_count': int,
- 'uploader_url': 'https://www.youtube.com/@WickmanVT',
+ 'uploader_url': 'https://www.youtube.com/@WickmanWish',
'modified_date': r're:\d{8}',
'channel_id': 'UCKSpbfbl5kRQpTdL7kMc-1Q',
- 'channel': 'Wickman',
+ 'channel': 'Wickman Wish',
'tags': [],
'channel_url': 'https://www.youtube.com/channel/UCKSpbfbl5kRQpTdL7kMc-1Q',
'availability': 'public',
@@ -2331,6 +2336,7 @@ class YoutubePlaylistIE(YoutubeBaseInfoExtractor):
'playlist_count': 2,
'skip': 'This playlist is private',
}, {
+ # TODO: fix availability extraction
'note': 'embedded',
'url': 'https://www.youtube.com/embed/videoseries?list=PL6IaIsEjSbf96XFRuNccS_RuEXwNdsoEu',
'playlist_count': 4,
@@ -2351,6 +2357,7 @@ class YoutubePlaylistIE(YoutubeBaseInfoExtractor):
},
'expected_warnings': [r'[Uu]navailable videos? (is|are|will be) hidden', 'Retrying', 'Giving up'],
}, {
+ # TODO: fix availability extraction
'url': 'http://www.youtube.com/embed/_xDOZElKyNU?list=PLsyOSbh5bs16vubvKePAQ1x3PhKavfBIl',
'playlist_mincount': 455,
'info_dict': {


@@ -79,6 +79,7 @@ STREAMING_DATA_FETCH_GVS_PO_TOKEN = '__yt_dlp_fetch_gvs_po_token'
STREAMING_DATA_PLAYER_TOKEN_PROVIDED = '__yt_dlp_player_token_provided'
STREAMING_DATA_INNERTUBE_CONTEXT = '__yt_dlp_innertube_context'
STREAMING_DATA_IS_PREMIUM_SUBSCRIBER = '__yt_dlp_is_premium_subscriber'
+ STREAMING_DATA_FETCHED_TIMESTAMP = '__yt_dlp_fetched_timestamp'
PO_TOKEN_GUIDE_URL = 'https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide'
@@ -256,10 +257,10 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
'401': {'ext': 'mp4', 'height': 2160, 'format_note': 'DASH video', 'vcodec': 'av01.0.12M.08'},
}
_SUBTITLE_FORMATS = ('json3', 'srv1', 'srv2', 'srv3', 'ttml', 'srt', 'vtt')
- _DEFAULT_CLIENTS = ('tv', 'ios', 'web')
+ _DEFAULT_CLIENTS = ('tv_simply', 'tv', 'web')
- _DEFAULT_AUTHED_CLIENTS = ('tv', 'web')
+ _DEFAULT_AUTHED_CLIENTS = ('tv', 'web_safari', 'web')
# Premium does not require POT (except for subtitles)
- _DEFAULT_PREMIUM_CLIENTS = ('tv', 'web')
+ _DEFAULT_PREMIUM_CLIENTS = ('tv', 'web_creator', 'web')
_GEO_BYPASS = False
@@ -1816,7 +1817,10 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
_PLAYER_JS_VARIANT_MAP = {
'main': 'player_ias.vflset/en_US/base.js',
+ 'tcc': 'player_ias_tcc.vflset/en_US/base.js',
'tce': 'player_ias_tce.vflset/en_US/base.js',
+ 'es5': 'player_es5.vflset/en_US/base.js',
+ 'es6': 'player_es6.vflset/en_US/base.js',
'tv': 'tv-player-ias.vflset/tv-player-ias.js',
'tv_es6': 'tv-player-es6.vflset/tv-player-es6.js',
'phone': 'player-plasma-ias-phone-en_US.vflset/base.js',
@@ -2018,7 +2022,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if not player_url:
return
- requested_js_variant = self._configuration_arg('player_js_variant', [''])[0] or 'actual'
+ requested_js_variant = self._configuration_arg('player_js_variant', [''])[0] or 'main'
if requested_js_variant in self._PLAYER_JS_VARIANT_MAP:
player_id = self._extract_player_info(player_url)
original_url = player_url
@@ -3242,6 +3246,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
elif pr:
# Save client details for introspection later
innertube_context = traverse_obj(player_ytcfg or self._get_default_ytcfg(client), 'INNERTUBE_CONTEXT')
+ fetched_timestamp = int(time.time())
sd = pr.setdefault('streamingData', {})
sd[STREAMING_DATA_CLIENT_NAME] = client
sd[STREAMING_DATA_FETCH_GVS_PO_TOKEN] = fetch_gvs_po_token_func
@@ -3254,6 +3259,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
f[STREAMING_DATA_FETCH_GVS_PO_TOKEN] = fetch_gvs_po_token_func
f[STREAMING_DATA_IS_PREMIUM_SUBSCRIBER] = is_premium_subscriber
f[STREAMING_DATA_PLAYER_TOKEN_PROVIDED] = bool(player_po_token)
+ f[STREAMING_DATA_FETCHED_TIMESTAMP] = fetched_timestamp
if deprioritize_pr:
deprioritized_prs.append(pr)
else:
@@ -3306,11 +3312,9 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
f'{video_id}: {client_name} client {proto} formats require a GVS PO Token which was not provided. '
'They will be skipped as they may yield HTTP Error 403. '
f'You can manually pass a GVS PO Token for this client with --extractor-args "youtube:po_token={client_name}.gvs+XXX". '
- f'For more information, refer to {PO_TOKEN_GUIDE_URL} . '
- 'To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"')
+ f'For more information, refer to {PO_TOKEN_GUIDE_URL}')
# Only raise a warning for non-default clients, to not confuse users.
- # iOS HLS formats still work without PO Token, so we don't need to warn about them.
if client_name in (*self._DEFAULT_CLIENTS, *self._DEFAULT_AUTHED_CLIENTS):
self.write_debug(msg, only_once=True)
else:
@@ -3372,8 +3376,12 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
# save pots per client to avoid fetching again
gvs_pots = {}
+ # For handling potential pre-playback required waiting period
+ playback_wait = int_or_none(self._configuration_arg('playback_wait', [None])[0], default=6)
for fmt in streaming_formats:
client_name = fmt[STREAMING_DATA_CLIENT_NAME]
+ available_at = fmt[STREAMING_DATA_FETCHED_TIMESTAMP] + playback_wait
if fmt.get('targetDurationSec'):
continue
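The waiting-period handling introduced here is simple clock arithmetic: each format records when its player response was fetched, and is treated as playable only `playback_wait` seconds later (default 6). A minimal sketch of that arithmetic under the same assumption; the function name is illustrative, not a yt-dlp internal:

```python
import time

def wait_until_available(fetched_timestamp: float, playback_wait: int = 6) -> None:
    """Sleep until `playback_wait` seconds have passed since the fetch."""
    available_at = fetched_timestamp + playback_wait
    remaining = available_at - time.time()
    if remaining > 0:
        time.sleep(remaining)

wait_until_available(time.time() - 2)  # sleeps roughly 4 seconds
```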
@@ -3555,6 +3563,10 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if single_stream and dct.get('ext'):
dct['container'] = dct['ext'] + '_dash'
+ # For handling potential pre-playback required waiting period
+ if live_status not in ('is_live', 'post_live'):
+     dct['available_at'] = available_at
if (all_formats or 'dashy' in format_types) and dct['filesize']:
yield {
**dct,
@@ -3592,18 +3604,21 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if not all_formats and key in itags[itag]:
return False
+ # For handling potential pre-playback required waiting period
+ if live_status not in ('is_live', 'post_live'):
+     f['available_at'] = available_at
if f.get('source_preference') is None:
f['source_preference'] = -1
+ # Deprioritize since its pre-merged m3u8 formats may have lower quality audio streams
+ if client_name == 'web_safari' and proto == 'hls' and live_status != 'is_live':
+     f['source_preference'] -= 1
if missing_pot:
f['format_note'] = join_nonempty(f.get('format_note'), 'MISSING POT', delim=' ')
f['source_preference'] -= 20
- # XXX: Check if IOS HLS formats are affected by PO token enforcement; temporary
- # See https://github.com/yt-dlp/yt-dlp/issues/13511
- if proto == 'hls' and client_name == 'ios':
-     f['__needs_testing'] = True
itags[itag].add(key)
if itag and all_formats:
@@ -3848,11 +3863,33 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
player_responses, (..., 'microformat', 'playerMicroformatRenderer'),
expected_type=dict)
# Fallbacks in case player responses are missing metadata
initial_sdcr = traverse_obj(initial_data, (
'engagementPanels', ..., 'engagementPanelSectionListRenderer',
'content', 'structuredDescriptionContentRenderer', {dict}, any))
initial_description = traverse_obj(initial_sdcr, (
'items', ..., 'expandableVideoDescriptionBodyRenderer',
'attributedDescriptionBodyText', 'content', {str}, any))
# videoDescriptionHeaderRenderer also has publishDate/channel/handle/ucid, but not needed
initial_vdhr = traverse_obj(initial_sdcr, (
'items', ..., 'videoDescriptionHeaderRenderer', {dict}, any)) or {}
initial_video_details_renderer = traverse_obj(initial_data, (
'playerOverlays', 'playerOverlayRenderer', 'videoDetails',
'playerOverlayVideoDetailsRenderer', {dict})) or {}
initial_title = (
self._get_text(initial_vdhr, 'title')
or self._get_text(initial_video_details_renderer, 'title'))
translated_title = self._get_text(microformats, (..., 'title'))
video_title = ((self._preferred_lang and translated_title)
or get_first(video_details, 'title') # primary
or translated_title
or search_meta(['og:title', 'twitter:title', 'title']))
if not video_title and initial_title:
self.report_warning(
'No title found in player responses; falling back to title from initial data. '
'Other metadata may also be missing')
video_title = initial_title
translated_description = self._get_text(microformats, (..., 'description'))
original_description = get_first(video_details, 'shortDescription')
video_description = (
@@ -3860,6 +3897,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
# If original description is blank, it will be an empty string.
# Do not prefer translated description in this case.
or original_description if original_description is not None else translated_description)
+ if video_description is None:
+     video_description = initial_description
multifeed_metadata_list = get_first(
player_responses,
@@ -4389,7 +4428,6 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if upload_date and live_status not in ('is_live', 'post_live', 'is_upcoming'):
# Newly uploaded videos' HLS formats are potentially problematic and need to be checked
- # XXX: This is redundant for as long as we are already checking all IOS HLS formats
upload_datetime = datetime_from_str(upload_date).replace(tzinfo=dt.timezone.utc)
if upload_datetime >= datetime_from_str('today-2days'):
for fmt in info['formats']:


@@ -1,4 +1,5 @@
import os
+ import sys
from .common import PostProcessor
from ..utils import (
@@ -54,8 +55,8 @@ class XAttrMetadataPP(PostProcessor):
if infoname == 'upload_date':
value = hyphenate_date(value)
elif xattrname == 'com.apple.metadata:kMDItemWhereFroms':
- # NTFS ADS doesn't support colons in names
+ # Colon in xattr name throws errors on Windows/NTFS and Linux
- if os.name == 'nt':
+ if sys.platform != 'darwin':
continue
value = self.APPLE_PLIST_TEMPLATE % value
write_xattr(info['filepath'], xattrname, value.encode())
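The new platform gate only matters for the one attribute whose xattr name contains a colon. A small standalone sketch of the check; the name list below is a stand-in for illustration, not the postprocessor's full mapping:

```python
import sys

XATTR_NAMES = (
    'user.xdg.referrer.url',                 # works wherever xattrs exist
    'com.apple.metadata:kMDItemWhereFroms',  # contains a colon: macOS only
)
for name in XATTR_NAMES:
    if ':' in name and sys.platform != 'darwin':
        continue  # colons in xattr names fail on NTFS ADS and on Linux
    print('would write', name)
```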


@@ -58,26 +58,30 @@ def _get_variant_and_executable_path():
"""@returns (variant, executable_path)""" """@returns (variant, executable_path)"""
if getattr(sys, 'frozen', False): if getattr(sys, 'frozen', False):
path = sys.executable path = sys.executable
# py2exe is unsupported but we should still correctly identify it for debugging purposes
if not hasattr(sys, '_MEIPASS'): if not hasattr(sys, '_MEIPASS'):
return 'py2exe', path return 'py2exe', path
elif sys._MEIPASS == os.path.dirname(path): if sys._MEIPASS == os.path.dirname(path):
return f'{sys.platform}_dir', path return f'{sys.platform}_dir', path
elif sys.platform == 'darwin': if sys.platform == 'darwin':
# darwin_legacy_exe is no longer supported, but still identify it to block updates
machine = '_legacy' if version_tuple(platform.mac_ver()[0]) < (10, 15) else '' machine = '_legacy' if version_tuple(platform.mac_ver()[0]) < (10, 15) else ''
else: return f'darwin{machine}_exe', path
machine = f'_{platform.machine().lower()}'
is_64bits = sys.maxsize > 2**32 machine = f'_{platform.machine().lower()}'
# Ref: https://en.wikipedia.org/wiki/Uname#Examples is_64bits = sys.maxsize > 2**32
if machine[1:] in ('x86', 'x86_64', 'amd64', 'i386', 'i686'): # Ref: https://en.wikipedia.org/wiki/Uname#Examples
machine = '_x86' if not is_64bits else '' if machine[1:] in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
# platform.machine() on 32-bit raspbian OS may return 'aarch64', so check "64-bitness" machine = '_x86' if not is_64bits else ''
# See: https://github.com/yt-dlp/yt-dlp/issues/11813 # platform.machine() on 32-bit raspbian OS may return 'aarch64', so check "64-bitness"
elif machine[1:] == 'aarch64' and not is_64bits: # See: https://github.com/yt-dlp/yt-dlp/issues/11813
machine = '_armv7l' elif machine[1:] == 'aarch64' and not is_64bits:
# sys.executable returns a /tmp/ path for staticx builds (linux_static) machine = '_armv7l'
# Ref: https://staticx.readthedocs.io/en/latest/usage.html#run-time-information # sys.executable returns a /tmp/ path for staticx builds (linux_static)
if static_exe_path := os.getenv('STATICX_PROG_PATH'): # Ref: https://staticx.readthedocs.io/en/latest/usage.html#run-time-information
path = static_exe_path if static_exe_path := os.getenv('STATICX_PROG_PATH'):
path = static_exe_path
return f'{remove_end(sys.platform, "32")}{machine}_exe', path return f'{remove_end(sys.platform, "32")}{machine}_exe', path
path = os.path.dirname(__file__) path = os.path.dirname(__file__)
@@ -110,8 +114,8 @@ _FILE_SUFFIXES = {
'zip': '',
'win_exe': '.exe',
'win_x86_exe': '_x86.exe',
+ 'win_arm64_exe': '_arm64.exe',
'darwin_exe': '_macos',
- 'darwin_legacy_exe': '_macos_legacy',
'linux_exe': '_linux',
'linux_aarch64_exe': '_linux_aarch64',
'linux_armv7l_exe': '_linux_armv7l',
@@ -147,12 +151,6 @@ def _get_system_deprecation():
STOP_MSG = 'You may stop receiving updates on this version at any time!'
variant = detect_variant()
- # Temporary until macos_legacy executable builds are discontinued
- if variant == 'darwin_legacy_exe':
-     return EXE_MSG_TMPL.format(
-         f'{variant} (the PyInstaller-bundled executable for macOS versions older than 10.15)',
-         'issues/13856', STOP_MSG)
# Temporary until linux_armv7l executable builds are discontinued
if variant == 'linux_armv7l_exe':
return EXE_MSG_TMPL.format(
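The detected variant string keys into the `_FILE_SUFFIXES` table above to pick the matching release asset name. An illustrative sketch of that lookup, under the assumption that assets are named by appending the suffix to `yt-dlp` (the lookup helper itself is hypothetical, not taken from the updater):

```python
# Suffix table copied from the diff above (abbreviated).
_FILE_SUFFIXES = {
    'zip': '',
    'win_exe': '.exe',
    'win_x86_exe': '_x86.exe',
    'win_arm64_exe': '_arm64.exe',
    'darwin_exe': '_macos',
    'linux_exe': '_linux',
    'linux_aarch64_exe': '_linux_aarch64',
    'linux_armv7l_exe': '_linux_armv7l',
}

def asset_name(variant: str, base: str = 'yt-dlp') -> str:
    return base + _FILE_SUFFIXES[variant]

print(asset_name('win_arm64_exe'))  # yt-dlp_arm64.exe
print(asset_name('darwin_exe'))     # yt-dlp_macos
```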


@@ -1,4 +1,8 @@
"""Deprecated - New code should avoid these""" """Deprecated - New code should avoid these"""
import base64
import hashlib
import hmac
import json
import warnings import warnings
from ..compat.compat_utils import passthrough_module from ..compat.compat_utils import passthrough_module
@@ -28,4 +32,18 @@ def intlist_to_bytes(xs):
return struct.pack('%dB' % len(xs), *xs)
def jwt_encode_hs256(payload_data, key, headers={}):
header_data = {
'alg': 'HS256',
'typ': 'JWT',
}
if headers:
header_data.update(headers)
header_b64 = base64.b64encode(json.dumps(header_data).encode())
payload_b64 = base64.b64encode(json.dumps(payload_data).encode())
h = hmac.new(key.encode(), header_b64 + b'.' + payload_b64, hashlib.sha256)
signature_b64 = base64.b64encode(h.digest())
return header_b64 + b'.' + payload_b64 + b'.' + signature_b64
compiled_regex_type = type(re.compile(''))


@@ -4739,22 +4739,36 @@ def time_seconds(**kwargs):
return time.time() + dt.timedelta(**kwargs).total_seconds()
+ # create a JSON Web Signature (jws) with HS256 algorithm
+ # the resulting format is in JWS Compact Serialization
# implemented following JWT https://www.rfc-editor.org/rfc/rfc7519.html
# implemented following JWS https://www.rfc-editor.org/rfc/rfc7515.html
- def jwt_encode_hs256(payload_data, key, headers={}):
+ def jwt_encode(payload_data, key, *, alg='HS256', headers=None):
+ assert alg in ('HS256',), f'Unsupported algorithm "{alg}"'
+ def jwt_json_bytes(obj):
+     return json.dumps(obj, separators=(',', ':')).encode()
+ def jwt_b64encode(bytestring):
+     return base64.urlsafe_b64encode(bytestring).rstrip(b'=')
header_data = {
- 'alg': 'HS256',
+ 'alg': alg,
'typ': 'JWT',
}
if headers:
- header_data.update(headers)
- header_b64 = base64.b64encode(json.dumps(header_data).encode())
- payload_b64 = base64.b64encode(json.dumps(payload_data).encode())
+ # Allow re-ordering of keys if both 'alg' and 'typ' are present
+ if 'alg' in headers and 'typ' in headers:
+     header_data = headers
+ else:
+     header_data.update(headers)
+ header_b64 = jwt_b64encode(jwt_json_bytes(header_data))
+ payload_b64 = jwt_b64encode(jwt_json_bytes(payload_data))
+ # HS256 is the only algorithm currently supported
h = hmac.new(key.encode(), header_b64 + b'.' + payload_b64, hashlib.sha256)
- signature_b64 = base64.b64encode(h.digest())
+ signature_b64 = jwt_b64encode(h.digest())
- return header_b64 + b'.' + payload_b64 + b'.' + signature_b64
+ return (header_b64 + b'.' + payload_b64 + b'.' + signature_b64).decode()
# can be extended in future to verify the signature and parse header and return the algorithm used if it's not HS256
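The compact serialization produced by the new `jwt_encode()` is three base64url segments (header, payload, HMAC-SHA256 signature) joined by dots, with `=` padding stripped. A self-contained sketch that reproduces the same structure with the standard library only; the key and payload below are made-up values:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # base64url without trailing '=' padding, as required by JWS Compact Serialization
    return base64.urlsafe_b64encode(data).rstrip(b'=')

def compact_jws(payload: dict, key: str) -> str:
    header_b64 = b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}, separators=(',', ':')).encode())
    payload_b64 = b64url(json.dumps(payload, separators=(',', ':')).encode())
    signature_b64 = b64url(hmac.new(key.encode(), header_b64 + b'.' + payload_b64, hashlib.sha256).digest())
    return (header_b64 + b'.' + payload_b64 + b'.' + signature_b64).decode()

print(compact_jws({'sub': 'example', 'exp': 1735689600}, 'not-a-real-key'))
```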


@@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py
- __version__ = '2025.08.11'
+ __version__ = '2025.08.27'
- RELEASE_GIT_HEAD = '5e4ceb35cf997af0dbf100e1de37f4e2bcbaa0b7'
+ RELEASE_GIT_HEAD = '8cd37b85d492edb56a4f7506ea05527b85a6b02b'
VARIANT = None
@@ -12,4 +12,4 @@ CHANNEL = 'stable'
ORIGIN = 'yt-dlp/yt-dlp'
- _pkg_version = '2025.08.11'
+ _pkg_version = '2025.08.27'