mirror of https://github.com/yt-dlp/yt-dlp.git synced 2026-01-13 10:21:30 +00:00

Compare commits


36 Commits

Author SHA1 Message Date
github-actions[bot]
575753b9f3 Release 2025.08.20
Created by: bashonly

:ci skip all
2025-08-20 02:51:33 +00:00
bashonly
c2fc4f3e7f [cleanup] Misc (#13991)
Authored by: bashonly
2025-08-20 02:42:34 +00:00
bashonly
07247d6c20 [build] Add Windows ARM64 builds (#14003)
Adds yt-dlp_arm64.exe and yt-dlp_win_arm64.zip to release assets

Closes #13849
Authored by: bashonly
2025-08-20 02:28:00 +00:00
bashonly
f63a7e41d1 [ie/youtube] Add playback_wait extractor-arg
Authored by: bashonly
2025-08-19 21:22:00 -05:00
bashonly
7b8a8abb98 [ie/francetv:site] Fix extractor (#14082)
Closes #14072
Authored by: bashonly
2025-08-20 01:35:32 +00:00
bashonly
a97f4cb57e [ie/youtube] Handle required preroll waiting period (#14081)
Authored by: bashonly
2025-08-19 20:06:53 -05:00
bashonly
d154dc3dcf [ie/youtube] Remove default player params (#14081)
Closes #13930
Authored by: bashonly
2025-08-19 20:06:53 -05:00
bashonly
438d3f06b3 [fd] Support available_at format field (#13980)
Authored by: bashonly
2025-08-20 00:38:48 +00:00
CasperMcFadden95
74b4b3b005 [ie/faulio] Add extractor (#13907)
Authored by: CasperMcFadden95
2025-08-19 23:27:31 +00:00
e2dk4r
36e873822b [ie/puhutv] Fix playlists extraction (#11955)
Closes #5691
Authored by: e2dk4r
2025-08-19 23:20:07 +00:00
Arseniy D.
d3d1ac8eb2 [ie/steam] Fix extractor (#14008)
Closes #14000
Authored by: AzartX47
2025-08-19 23:14:20 +00:00
doe1080
86d74e5cf0 [ie/medialaan] Rework extractors (#14015)
Fixes MedialaanIE and VTMIE

Authored by: doe1080
2025-08-19 23:10:17 +00:00
Junyi Lou
6ca9165648 [ie/bilibili] Handle Bangumi redirection (#14038)
Closes #13924
Authored by: junyilou, grqz

Co-authored-by: _Grqz <173015200+grqz@users.noreply.github.com>
2025-08-19 23:06:49 +00:00
Pierre
82a1390204 [ie/svt] Extract forced subs under separate lang code (#14062)
Closes #14020
Authored by: PierreMesure
2025-08-19 22:57:46 +00:00
Runar Saur Modahl
7540aa1da1 [ie/NRKTVEpisode] Fix extractor (#14065)
Closes #14054
Authored by: runarmod
2025-08-19 22:45:55 +00:00
bashonly
35da8df4f8 [utils] Add improved jwt_encode function (#14071)
Also deprecates `jwt_encode_hs256`

Authored by: bashonly
2025-08-19 22:36:00 +00:00
bashonly
8df121ba59 [ie/mtv] Overhaul extractors (#14052)
Adds SouthParkComBrIE and SouthParkCoUkIE

Removes these extractors:
 - CMTIE: migrated to Paramount+
 - ComedyCentralTVIE: migrated to Paramount+
 - MTVDEIE: migrated to Paramount+
 - MTVItaliaIE: migrated to Paramount+
 - MTVItaliaProgrammaIE: migrated to Paramount+
 - MTVJapanIE: migrated to JP Services
 - MTVServicesEmbeddedIE: dead domain
 - MTVVideoIE: migrated to Paramount+
 - NickBrIE: redirects to landing page w/o any videos
 - NickDeIE: redirects to landing page w/o any videos
 - NickRuIE: redirects to landing page w/o any videos
 - BellatorIE: migrated to PFL
 - ParamountNetworkIE: migrated to Paramount+
 - SouthParkNlIE: site no longer exists
 - TVLandIE: migrated to Paramount+

Closes #169, Closes #1711, Closes #1712, Closes #2621, Closes #3167, Closes #3893, Closes #4552, Closes #4702, Closes #4928, Closes #5249, Closes #6156, Closes #8722, Closes #9896, Closes #10168, Closes #12765, Closes #13446, Closes #14009

Authored by: bashonly, doe1080, Randalix, seproDev

Co-authored-by: doe1080 <98906116+doe1080@users.noreply.github.com>
Co-authored-by: Randalix <23729538+Randalix@users.noreply.github.com>
Co-authored-by: sepro <sepro@sepr0.com>
2025-08-19 20:46:11 +00:00
bashonly
471a2b60e0 [ie/tiktok:user] Improve infinite loop prevention (#14077)
Fix edf55e8184

Closes #14076
Authored by: bashonly
2025-08-19 20:39:33 +00:00
bashonly
df0553153e [ie/youtube] Default to main player JS variant (#14079)
Authored by: bashonly
2025-08-19 19:28:15 +00:00
bashonly
7bc53ae799 [ie/youtube] Extract title and description from initial data (#14078)
Closes #13604
Authored by: bashonly
2025-08-19 19:27:17 +00:00
bashonly
d8200ff0a4 [ie/vimeo:album] Support embed-only and non-numeric albums (#14021)
Authored by: bashonly
2025-08-18 22:09:35 +00:00
bashonly
0f6b915822 [ie/vimeo:event] Fix extractor (#14064)
Closes #14059
Authored by: bashonly
2025-08-18 21:09:25 +00:00
doe1080
374ea049f5 [ie/niconico:live] Support age-restricted streams (#13549)
Authored by: doe1080
2025-08-18 17:43:40 +00:00
doe1080
6f4c1bb593 [cleanup] Remove dead extractors (#13996)
Removes ArkenaIE, PladformIE, VevoIE, VevoPlaylistIE

Authored by: doe1080
2025-08-18 16:34:32 +00:00
doe1080
c22660aed5 [ie/adobetv] Fix extractor (#13917)
Removes AdobeTVChannelIE, AdobeTVEmbedIE, AdobeTVIE, AdobeTVShowIE

Authored by: doe1080
2025-08-18 16:32:26 +00:00
bashonly
404bd889d0 [ie/weibo] Support more URLs and --no-playlist (#14035)
Authored by: bashonly
2025-08-16 23:02:04 +00:00
bashonly
edf55e8184 [ie/tiktok:user] Avoid infinite loop during extraction (#14032)
Closes #14031
Authored by: bashonly
2025-08-16 22:57:14 +00:00
bashonly
8a8861d538 [ie/youtube:tab] Fix playlists tab extraction (#14030)
Closes #14028
Authored by: bashonly
2025-08-16 22:55:21 +00:00
sepro
70f5669951 Warn against using -f mp4 (#13915)
Authored by: seproDev
2025-08-17 00:35:46 +02:00
doe1080
6ae3543d5a [ie] _rta_search: Do not assume age_limit is 0 (#13985)
Authored by: doe1080
2025-08-16 04:28:58 +00:00
doe1080
770119bdd1 [ie] Extract avif storyboard formats from MPD manifests (#14016)
Authored by: doe1080
2025-08-16 03:32:21 +00:00
Arseniy D.
8e3f8065af [ie/weibo] Fix extractors (#14012)
Closes #14012
Authored by: AzartX47, bashonly

Co-authored-by: bashonly <88596187+bashonly@users.noreply.github.com>
2025-08-16 03:07:35 +00:00
bashonly
aea85d525e [build] Discontinue darwin_legacy_exe support (#13860)
* Removes "yt-dlp_macos_legacy" from release assets
* Discontinues executable support for macOS < 10.15

Closes #13856
Authored by: bashonly
2025-08-13 22:02:58 +00:00
bashonly
f2919bd28e [ie/youtube] Add es5 and es6 player JS variants (#14005)
Authored by: bashonly
2025-08-12 23:24:31 +00:00
bashonly
681ed2153d [build] Bump PyInstaller version to 6.15.0 for Windows (#14002)
Authored by: bashonly
2025-08-12 23:17:13 +00:00
bashonly
bdeb3eb3f2 [pp/XAttrMetadata] Only set "Where From" attribute on macOS (#13999)
Fix 3e918d825d

Closes #14004
Authored by: bashonly
2025-08-12 07:58:22 +00:00
59 changed files with 1964 additions and 2875 deletions

View File

@@ -21,15 +21,9 @@ on:
macos:
default: true
type: boolean
macos_legacy:
default: true
type: boolean
windows:
default: true
type: boolean
windows32:
default: true
type: boolean
origin:
required: false
default: ''
@@ -67,16 +61,8 @@ on:
description: yt-dlp_macos, yt-dlp_macos.zip
default: true
type: boolean
macos_legacy:
description: yt-dlp_macos_legacy
default: true
type: boolean
windows:
description: yt-dlp.exe, yt-dlp_win.zip
default: true
type: boolean
windows32:
description: yt-dlp_x86.exe
description: yt-dlp.exe, yt-dlp_win.zip, yt-dlp_x86.exe, yt-dlp_win_x86.zip, yt-dlp_arm64.exe, yt-dlp_win_arm64.zip
default: true
type: boolean
origin:
@@ -344,73 +330,56 @@ jobs:
~/yt-dlp-build-venv
key: cache-reqs-${{ github.job }}-${{ github.ref }}
macos_legacy:
needs: process
if: inputs.macos_legacy
runs-on: macos-13
steps:
- uses: actions/checkout@v4
- name: Install Python
# We need the official Python, because the GA ones only support newer macOS versions
env:
PYTHON_VERSION: 3.10.5
MACOSX_DEPLOYMENT_TARGET: 10.9 # Used up by the Python build tools
run: |
# Hack to get the latest patch version. Uncomment if needed
#brew install python@3.10
#export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 )
curl "https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg" -o "python.pkg"
sudo installer -pkg python.pkg -target /
python3 --version
- name: Install Requirements
run: |
brew install coreutils
python3 devscripts/install_deps.py --user -o --include build
python3 devscripts/install_deps.py --user --include pyinstaller
- name: Prepare
run: |
python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python3 devscripts/make_lazy_extractors.py
- name: Build
run: |
python3 -m bundle.pyinstaller
mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
chmod +x ./dist/yt-dlp_macos_legacy
cp ./dist/yt-dlp_macos_legacy ./dist/yt-dlp_macos_legacy_downgraded
version="$(./dist/yt-dlp_macos_legacy --version)"
./dist/yt-dlp_macos_legacy_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./dist/yt-dlp_macos_legacy_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
dist/yt-dlp_macos_legacy
compression-level: 0
windows:
needs: process
if: inputs.windows
runs-on: windows-latest
permissions:
contents: read
actions: write # For cleaning up cache
runs-on: ${{ matrix.runner }}
strategy:
fail-fast: false
matrix:
include:
- arch: 'x64'
runner: windows-2025
suffix: ''
python_version: '3.10'
- arch: 'x86'
runner: windows-2025
suffix: '_x86'
python_version: '3.10'
- arch: 'arm64'
runner: windows-11-arm
suffix: '_arm64'
python_version: '3.13' # arm64 only has Python >= 3.11 available
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.10"
python-version: ${{ matrix.python_version }}
architecture: ${{ matrix.arch }}
- name: Restore cached requirements
id: restore-cache
if: matrix.arch == 'arm64'
uses: actions/cache/restore@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MINS: 1
with:
path: |
/yt-dlp-build-venv
key: cache-reqs-${{ github.job }}_${{ matrix.arch }}-${{ matrix.python_version }}-${{ github.ref }}
- name: Install Requirements
run: | # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
run: |
python -m venv /yt-dlp-build-venv
/yt-dlp-build-venv/Scripts/Activate.ps1
python devscripts/install_deps.py -o --include build
python devscripts/install_deps.py --include curl-cffi
python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.13.0-py3-none-any.whl"
python devscripts/install_deps.py ${{ (matrix.arch != 'x86' && '--include curl-cffi') || '' }}
# Use custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/${{ matrix.arch }}/pyinstaller-6.15.0-py3-none-any.whl"
- name: Prepare
run: |
@@ -418,14 +387,18 @@ jobs:
python devscripts/make_lazy_extractors.py
- name: Build
run: |
/yt-dlp-build-venv/Scripts/Activate.ps1
python -m bundle.pyinstaller
python -m bundle.pyinstaller --onedir
Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip
Compress-Archive -Path ./dist/yt-dlp${{ matrix.suffix }}/* -DestinationPath ./dist/yt-dlp_win${{ matrix.suffix }}.zip
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
# if: vars.UPDATE_TO_VERIFICATION
# Temporarily skip for arm64 until there is a release that it can --update-to
if: |
vars.UPDATE_TO_VERIFICATION && matrix.arch != 'arm64'
run: |
foreach ($name in @("yt-dlp")) {
foreach ($name in @("yt-dlp${{ matrix.suffix }}")) {
Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
$version = & "./dist/${name}.exe" --version
& "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
@@ -435,61 +408,43 @@ jobs:
}
}
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
dist/yt-dlp.exe
dist/yt-dlp_win.zip
compression-level: 0
windows32:
needs: process
if: inputs.windows32
runs-on: windows-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.10"
architecture: "x86"
- name: Install Requirements
# TODO: remove when there is a windows_arm64 release that we can --update-to
- name: Verify arm64 executable
if: |
vars.UPDATE_TO_VERIFICATION && matrix.arch == 'arm64'
run: |
python devscripts/install_deps.py -o --include build
python devscripts/install_deps.py
python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.13.0-py3-none-any.whl"
- name: Prepare
run: |
python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python devscripts/make_lazy_extractors.py
- name: Build
run: |
python -m bundle.pyinstaller
- name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION
run: |
foreach ($name in @("yt-dlp_x86")) {
Copy-Item "./dist/${name}.exe" "./dist/${name}_downgraded.exe"
$version = & "./dist/${name}.exe" --version
& "./dist/${name}_downgraded.exe" -v --update-to yt-dlp/yt-dlp@2023.03.04
$downgraded_version = & "./dist/${name}_downgraded.exe" --version
if ($version -eq $downgraded_version) {
exit 1
}
foreach ($name in @("yt-dlp${{ matrix.suffix }}")) {
& "./dist/${name}.exe" -v --print-traffic --impersonate chrome "https://tls.browserleaks.com/json" -o ./resp.json
& cat ./resp.json
& "./dist/${name}.exe" --version
}
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
name: build-bin-${{ github.job }}-${{ matrix.arch }}
path: |
dist/yt-dlp_x86.exe
dist/yt-dlp${{ matrix.suffix }}.exe
dist/yt-dlp_win${{ matrix.suffix }}.zip
compression-level: 0
- name: Cleanup cache
if: |
matrix.arch == 'arm64' && steps.restore-cache.outputs.cache-hit == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
cache_key: cache-reqs-${{ github.job }}_${{ matrix.arch }}-${{ matrix.python_version }}-${{ github.ref }}
run: |
gh cache delete "${cache_key}"
- name: Cache requirements
if: matrix.arch == 'arm64'
uses: actions/cache/save@v4
with:
path: |
/yt-dlp-build-venv
key: cache-reqs-${{ github.job }}_${{ matrix.arch }}-${{ matrix.python_version }}-${{ github.ref }}
meta_files:
if: always() && !cancelled()
needs:
@@ -498,9 +453,7 @@ jobs:
- linux_static
- linux_arm
- macos
- macos_legacy
- windows
- windows32
runs-on: ubuntu-latest
steps:
- name: Download artifacts
@@ -530,27 +483,31 @@ jobs:
lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lock 2024.10.22 py2exe .+
lock 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lock 2024.10.22 (?!\w+_exe).+ Python 3\.8
lock 2024.10.22 zip Python 3\.8
lock 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lock 2025.08.11 darwin_legacy_exe .+
lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3\.6
lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp 2024.10.22 py2exe .+
lockV2 yt-dlp/yt-dlp 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp 2024.10.22 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp 2024.10.22 zip Python 3\.8
lockV2 yt-dlp/yt-dlp 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp 2025.08.11 darwin_legacy_exe .+
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 py2exe .+
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 zip Python 3\.8
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp-nightly-builds 2025.08.12.233030 darwin_legacy_exe .+
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.045052 py2exe .+
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 zip Python 3\.8
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp-master-builds 2025.08.12.232447 darwin_legacy_exe .+
EOF
- name: Sign checksum files

View File

@@ -800,3 +800,9 @@ iribeirocampos
rolandcrosby
Sojiroh
tchebb
AzartX47
e2dk4r
junyilou
PierreMesure
Randalix
runarmod

View File

@@ -4,13 +4,64 @@
# To create a release, dispatch the https://github.com/yt-dlp/yt-dlp/actions/workflows/release.yml workflow on master
-->
### 2025.08.20
#### Core changes
- [Warn against using `-f mp4`](https://github.com/yt-dlp/yt-dlp/commit/70f56699515e0854a4853d214dce11b61d432387) ([#13915](https://github.com/yt-dlp/yt-dlp/issues/13915)) by [seproDev](https://github.com/seproDev)
- **utils**: [Add improved `jwt_encode` function](https://github.com/yt-dlp/yt-dlp/commit/35da8df4f843cb8f0656a301e5bebbf47d64d69a) ([#14071](https://github.com/yt-dlp/yt-dlp/issues/14071)) by [bashonly](https://github.com/bashonly)
#### Extractor changes
- [Extract avif storyboard formats from MPD manifests](https://github.com/yt-dlp/yt-dlp/commit/770119bdd15c525ba4338503f0eb68ea4baedf10) ([#14016](https://github.com/yt-dlp/yt-dlp/issues/14016)) by [doe1080](https://github.com/doe1080)
- `_rta_search`: [Do not assume `age_limit` is `0`](https://github.com/yt-dlp/yt-dlp/commit/6ae3543d5a1feea0c546571fd2782b024c108eac) ([#13985](https://github.com/yt-dlp/yt-dlp/issues/13985)) by [doe1080](https://github.com/doe1080)
- **adobetv**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/c22660aed5fadb4ac29bdf25db4e8016414153cc) ([#13917](https://github.com/yt-dlp/yt-dlp/issues/13917)) by [doe1080](https://github.com/doe1080)
- **bilibili**: [Handle Bangumi redirection](https://github.com/yt-dlp/yt-dlp/commit/6ca9165648ac9a07c012de639faf50a97cbe0991) ([#14038](https://github.com/yt-dlp/yt-dlp/issues/14038)) by [grqz](https://github.com/grqz), [junyilou](https://github.com/junyilou)
- **faulio**: [Add extractor](https://github.com/yt-dlp/yt-dlp/commit/74b4b3b00516e92a60250e0626272a6826459057) ([#13907](https://github.com/yt-dlp/yt-dlp/issues/13907)) by [CasperMcFadden95](https://github.com/CasperMcFadden95)
- **francetv**: site: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/7b8a8abb98165a53c026e2a3f52faee608df1f20) ([#14082](https://github.com/yt-dlp/yt-dlp/issues/14082)) by [bashonly](https://github.com/bashonly)
- **medialaan**: [Rework extractors](https://github.com/yt-dlp/yt-dlp/commit/86d74e5cf0e06c53c931ccdbdd497e3f2c4d2fe2) ([#14015](https://github.com/yt-dlp/yt-dlp/issues/14015)) by [doe1080](https://github.com/doe1080)
- **mtv**: [Overhaul extractors](https://github.com/yt-dlp/yt-dlp/commit/8df121ba59208979aa713822781891347abd03d1) ([#14052](https://github.com/yt-dlp/yt-dlp/issues/14052)) by [bashonly](https://github.com/bashonly), [doe1080](https://github.com/doe1080), [Randalix](https://github.com/Randalix), [seproDev](https://github.com/seproDev)
- **niconico**: live: [Support age-restricted streams](https://github.com/yt-dlp/yt-dlp/commit/374ea049f531959bcccf8a1e6bc5659d228a780e) ([#13549](https://github.com/yt-dlp/yt-dlp/issues/13549)) by [doe1080](https://github.com/doe1080)
- **nrktvepisode**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/7540aa1da1800769af40381f423825a1a8826377) ([#14065](https://github.com/yt-dlp/yt-dlp/issues/14065)) by [runarmod](https://github.com/runarmod)
- **puhutv**: [Fix playlists extraction](https://github.com/yt-dlp/yt-dlp/commit/36e873822bdb2c5aba3780dd3ae32cbae564c6cd) ([#11955](https://github.com/yt-dlp/yt-dlp/issues/11955)) by [e2dk4r](https://github.com/e2dk4r)
- **steam**: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/d3d1ac8eb2f9e96f3d75292e0effe2b1bccece3b) ([#14008](https://github.com/yt-dlp/yt-dlp/issues/14008)) by [AzartX47](https://github.com/AzartX47)
- **svt**: [Extract forced subs under separate lang code](https://github.com/yt-dlp/yt-dlp/commit/82a139020417a501f261d9fe02cefca01b1e12e4) ([#14062](https://github.com/yt-dlp/yt-dlp/issues/14062)) by [PierreMesure](https://github.com/PierreMesure)
- **tiktok**: user: [Avoid infinite loop during extraction](https://github.com/yt-dlp/yt-dlp/commit/edf55e81842fcfa6c302528d7f33ccd5081b37ef) ([#14032](https://github.com/yt-dlp/yt-dlp/issues/14032)) by [bashonly](https://github.com/bashonly) (With fixes in [471a2b6](https://github.com/yt-dlp/yt-dlp/commit/471a2b60e0a3e056960d9ceb1ebf57908428f752))
- **vimeo**
- album: [Support embed-only and non-numeric albums](https://github.com/yt-dlp/yt-dlp/commit/d8200ff0a4699e06c9f7daca8f8531f8b98e68f2) ([#14021](https://github.com/yt-dlp/yt-dlp/issues/14021)) by [bashonly](https://github.com/bashonly)
- event: [Fix extractor](https://github.com/yt-dlp/yt-dlp/commit/0f6b915822fb64bd944126fdacd401975c9f06ed) ([#14064](https://github.com/yt-dlp/yt-dlp/issues/14064)) by [bashonly](https://github.com/bashonly)
- **weibo**
- [Fix extractors](https://github.com/yt-dlp/yt-dlp/commit/8e3f8065af1415caeff788c5c430703dd0d8f576) ([#14012](https://github.com/yt-dlp/yt-dlp/issues/14012)) by [AzartX47](https://github.com/AzartX47), [bashonly](https://github.com/bashonly)
- [Support more URLs and --no-playlist](https://github.com/yt-dlp/yt-dlp/commit/404bd889d0e0b62ad72b7281e3fefdc0497080b3) ([#14035](https://github.com/yt-dlp/yt-dlp/issues/14035)) by [bashonly](https://github.com/bashonly)
- **youtube**
- [Add `es5` and `es6` player JS variants](https://github.com/yt-dlp/yt-dlp/commit/f2919bd28eac905f1267c62b83738a02bb5b4e04) ([#14005](https://github.com/yt-dlp/yt-dlp/issues/14005)) by [bashonly](https://github.com/bashonly)
- [Add `playback_wait` extractor-arg](https://github.com/yt-dlp/yt-dlp/commit/f63a7e41d120ef84f0f2274b0962438e3272d2fa) by [bashonly](https://github.com/bashonly)
- [Default to `main` player JS variant](https://github.com/yt-dlp/yt-dlp/commit/df0553153e41f81e3b30aa5bb1d119c61bd449ac) ([#14079](https://github.com/yt-dlp/yt-dlp/issues/14079)) by [bashonly](https://github.com/bashonly)
- [Extract title and description from initial data](https://github.com/yt-dlp/yt-dlp/commit/7bc53ae79930b36f4f947679545c75f36e9f0ddd) ([#14078](https://github.com/yt-dlp/yt-dlp/issues/14078)) by [bashonly](https://github.com/bashonly)
- [Handle required preroll waiting period](https://github.com/yt-dlp/yt-dlp/commit/a97f4cb57e61e19be61a7d5ac19665d4b567c960) ([#14081](https://github.com/yt-dlp/yt-dlp/issues/14081)) by [bashonly](https://github.com/bashonly)
- [Remove default player params](https://github.com/yt-dlp/yt-dlp/commit/d154dc3dcf0c7c75dbabb6cd1aca66fdd806f858) ([#14081](https://github.com/yt-dlp/yt-dlp/issues/14081)) by [bashonly](https://github.com/bashonly)
- tab: [Fix playlists tab extraction](https://github.com/yt-dlp/yt-dlp/commit/8a8861d53864c8a38e924bc0657ead5180f17268) ([#14030](https://github.com/yt-dlp/yt-dlp/issues/14030)) by [bashonly](https://github.com/bashonly)
#### Downloader changes
- [Support `available_at` format field](https://github.com/yt-dlp/yt-dlp/commit/438d3f06b3c41bdef8112d40b75d342186e91a16) ([#13980](https://github.com/yt-dlp/yt-dlp/issues/13980)) by [bashonly](https://github.com/bashonly)
#### Postprocessor changes
- **xattrmetadata**: [Only set "Where From" attribute on macOS](https://github.com/yt-dlp/yt-dlp/commit/bdeb3eb3f29eebbe8237fbc5186e51e7293eea4a) ([#13999](https://github.com/yt-dlp/yt-dlp/issues/13999)) by [bashonly](https://github.com/bashonly)
#### Misc. changes
- **build**
- [Add Windows ARM64 builds](https://github.com/yt-dlp/yt-dlp/commit/07247d6c20fef1ad13b6f71f6355a44d308cf010) ([#14003](https://github.com/yt-dlp/yt-dlp/issues/14003)) by [bashonly](https://github.com/bashonly)
- [Bump PyInstaller version to 6.15.0 for Windows](https://github.com/yt-dlp/yt-dlp/commit/681ed2153de754c2c885fdad09ab71fffa8114f9) ([#14002](https://github.com/yt-dlp/yt-dlp/issues/14002)) by [bashonly](https://github.com/bashonly)
- [Discontinue `darwin_legacy_exe` support](https://github.com/yt-dlp/yt-dlp/commit/aea85d525e1007bb64baec0e170c054292d0858a) ([#13860](https://github.com/yt-dlp/yt-dlp/issues/13860)) by [bashonly](https://github.com/bashonly)
- **cleanup**
- [Remove dead extractors](https://github.com/yt-dlp/yt-dlp/commit/6f4c1bb593da92f0ce68229d0c813cdbaf1314da) ([#13996](https://github.com/yt-dlp/yt-dlp/issues/13996)) by [doe1080](https://github.com/doe1080)
- Miscellaneous: [c2fc4f3](https://github.com/yt-dlp/yt-dlp/commit/c2fc4f3e7f6d757250183b177130c64beee50520) by [bashonly](https://github.com/bashonly)
### 2025.08.11
#### Important changes
- **The minimum *recommended* Python version has been raised to 3.10**
Since Python 3.9 will reach end-of-life in October 2025, support for it will be dropped soon. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13858)
- **darwin_legacy_exe builds are being discontinued**
This release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13857)
This release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13856)
- **linux_armv7l_exe builds are being discontinued**
This release's `yt-dlp_linux_armv7l` binary could be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13976)

View File

@@ -106,12 +106,14 @@ File|Description
File|Description
:---|:---
[yt-dlp_x86.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_x86.exe)|Windows (Win8+) standalone x86 (32-bit) binary
[yt-dlp_arm64.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_arm64.exe)|Windows (Win10+) standalone arm64 (64-bit) binary
[yt-dlp_linux](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux)|Linux standalone x64 binary
[yt-dlp_linux_armv7l](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_armv7l)|Linux standalone armv7l (32-bit) binary
[yt-dlp_linux_aarch64](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux_aarch64)|Linux standalone aarch64 (64-bit) binary
[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows executable (no auto-update)
[yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows (Win8+) x64 executable (no auto-update)
[yt-dlp_win_x86.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win_x86.zip)|Unpackaged Windows (Win8+) x86 executable (no auto-update)
[yt-dlp_win_arm64.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win_arm64.zip)|Unpackaged Windows (Win10+) arm64 executable (no auto-update)
[yt-dlp_macos.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos.zip)|Unpackaged MacOS (10.15+) executable (no auto-update)
[yt-dlp_macos_legacy](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos_legacy)|MacOS (10.9+) standalone x64 executable
#### Misc
@@ -1804,7 +1806,7 @@ The following extractors use this feature:
* `player_skip`: Skip some network requests that are generally needed for robust extraction. One or more of `configs` (skip client configs), `webpage` (skip initial webpage), `js` (skip js player), `initial_data` (skip initial data/next ep request). While these options can help reduce the number of requests needed or avoid some rate-limiting, they could cause issues such as missing formats or metadata. See [#860](https://github.com/yt-dlp/yt-dlp/pull/860) and [#12826](https://github.com/yt-dlp/yt-dlp/issues/12826) for more details
* `webpage_skip`: Skip extraction of embedded webpage data. One or both of `player_response`, `initial_data`. These options are for testing purposes and don't skip any network requests
* `player_params`: YouTube player parameters to use for player requests. Will overwrite any default ones set by yt-dlp.
* `player_js_variant`: The player javascript variant to use for signature and nsig deciphering. The known variants are: `main`, `tce`, `tv`, `tv_es6`, `phone`, `tablet`. Only `main` is recommended as a possible workaround; the others are for debugging purposes. The default is to use what is prescribed by the site, and can be selected with `actual`
* `player_js_variant`: The player javascript variant to use for signature and nsig deciphering. The known variants are: `main`, `tce`, `tv`, `tv_es6`, `phone`, `tablet`. The default is `main`, and the others are for debugging purposes. You can use `actual` to go with what is prescribed by the site
* `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
* `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
* E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
@@ -1817,6 +1819,7 @@ The following extractors use this feature:
* `po_token`: Proof of Origin (PO) Token(s) to use. Comma-separated list of PO Tokens in the format `CLIENT.CONTEXT+PO_TOKEN`, e.g. `youtube:po_token=web.gvs+XXX,web.player=XXX,web_safari.gvs+YYY`. Context can be any of `gvs` (Google Video Server URLs), `player` (Innertube player request) or `subs` (Subtitles)
* `pot_trace`: Enable debug logging for PO Token fetching. Either `true` or `false` (default)
* `fetch_pot`: Policy to use for fetching a PO Token from providers. One of `always` (always try to fetch a PO Token regardless of whether the client requires one for the given context), `never` (never fetch a PO Token), or `auto` (default; only fetch a PO Token if the client requires one for the given context)
* `playback_wait`: Duration (in seconds) to wait between the extraction and download stages in order to ensure the formats are available. The default is `6` seconds (see the usage sketch below)
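
These options are supplied via `--extractor-args`; a minimal usage sketch for the new `playback_wait` option (the URL and the 10-second value are placeholders):

    yt-dlp --extractor-args "youtube:playback_wait=10" URL
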
#### youtubepot-webpo
* `bind_to_visitor_id`: Whether to use the Visitor ID instead of Visitor Data for caching WebPO tokens. Either `true` (default) or `false`

View File

@@ -287,7 +287,7 @@
{
"action": "add",
"when": "cc5a5caac5fbc0d605b52bde0778d6fd5f97b5ab",
"short": "[priority] **darwin_legacy_exe builds are being discontinued**\nThis release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13857)"
"short": "[priority] **darwin_legacy_exe builds are being discontinued**\nThis release's `yt-dlp_macos_legacy` binary will likely be the last one. [Read more](https://github.com/yt-dlp/yt-dlp/issues/13856)"
},
{
"action": "add",

View File

@@ -315,6 +315,7 @@ banned-from = [
"yt_dlp.utils.error_to_compat_str".msg = "Use `str` instead."
"yt_dlp.utils.bytes_to_intlist".msg = "Use `list` instead."
"yt_dlp.utils.intlist_to_bytes".msg = "Use `bytes` instead."
"yt_dlp.utils.jwt_encode_hs256".msg = "Use `yt_dlp.utils.jwt_encode` instead."
"yt_dlp.utils.decodeArgument".msg = "Do not use"
"yt_dlp.utils.decodeFilename".msg = "Do not use"
"yt_dlp.utils.encodeFilename".msg = "Do not use"

View File

@@ -44,11 +44,7 @@ The only reliable way to check if a site is supported is to try it.
- **ADN**: [*animationdigitalnetwork*](## "netrc machine") Animation Digital Network
- **ADNSeason**: [*animationdigitalnetwork*](## "netrc machine") Animation Digital Network
- **AdobeConnect**
- **adobetv**: (**Currently broken**)
- **adobetv:channel**: (**Currently broken**)
- **adobetv:embed**: (**Currently broken**)
- **adobetv:show**: (**Currently broken**)
- **adobetv:video**
- **adobetv**
- **AdultSwim**
- **aenetworks**: A+E Networks: A&E, Lifetime, History.com, FYI Network and History Vault
- **aenetworks:collection**
@@ -100,7 +96,6 @@ The only reliable way to check if a site is supported is to try it.
- **ARD**
- **ARDMediathek**
- **ARDMediathekCollection**
- **Arkena**
- **Art19**
- **Art19Show**
- **arte.sky.it**
@@ -155,9 +150,8 @@ The only reliable way to check if a site is supported is to try it.
- **Beatport**
- **Beeg**
- **BehindKink**: (**Currently broken**)
- **Bellator**
- **BerufeTV**
- **Bet**: (**Currently broken**)
- **Bet**
- **bfi:player**: (**Currently broken**)
- **bfmtv**
- **bfmtv:article**
@@ -290,12 +284,10 @@ The only reliable way to check if a site is supported is to try it.
- **CloudyCDN**
- **Clubic**: (**Currently broken**)
- **Clyp**
- **cmt.com**: (**Currently broken**)
- **CNBCVideo**
- **CNN**
- **CNNIndonesia**
- **ComedyCentral**
- **ComedyCentralTV**
- **ConanClassic**: (**Currently broken**)
- **CondeNast**: Condé Nast media group: Allure, Architectural Digest, Ars Technica, Bon Appétit, Brides, Condé Nast, Condé Nast Traveler, Details, Epicurious, GQ, Glamour, Golf Digest, SELF, Teen Vogue, The New Yorker, Vanity Fair, Vogue, W Magazine, WIRED
- **CONtv**
@@ -445,6 +437,7 @@ The only reliable way to check if a site is supported is to try it.
- **fancode:live**: [*fancode*](## "netrc machine") (**Currently broken**)
- **fancode:vod**: [*fancode*](## "netrc machine") (**Currently broken**)
- **Fathom**
- **Faulio**
- **FaulioLive**
- **faz.net**
- **fc2**: [*fc2*](## "netrc machine")
@@ -700,8 +693,8 @@ The only reliable way to check if a site is supported is to try it.
- **lbry:channel**: odysee.com channels
- **lbry:playlist**: odysee.com playlists
- **LCI**
- **Lcp**
- **LcpPlay**
- **Lcp**: (**Currently broken**)
- **LcpPlay**: (**Currently broken**)
- **Le**: 乐视网
- **LearningOnScreen**
- **Lecture2Go**: (**Currently broken**)
@@ -840,12 +833,6 @@ The only reliable way to check if a site is supported is to try it.
- **MSN**
- **mtg**: MTG services
- **mtv**
- **mtv.de**: (**Currently broken**)
- **mtv.it**
- **mtv.it:programma**
- **mtv:video**
- **mtvjapan**
- **mtvservices:embedded**
- **MTVUutisetArticle**: (**Currently broken**)
- **MuenchenTV**: münchen.tv (**Currently broken**)
- **MujRozhlas**
@@ -945,9 +932,6 @@ The only reliable way to check if a site is supported is to try it.
- **NhkVodProgram**
- **nhl.com**
- **nick.com**
- **nick.de**
- **nickelodeon:br**
- **nickelodeonru**
- **niconico**: [*niconico*](## "netrc machine") ニコニコ動画
- **niconico:history**: NicoNico user history or likes. Requires cookies.
- **niconico:live**: [*niconico*](## "netrc machine") ニコニコ生放送
@@ -1049,7 +1033,6 @@ The only reliable way to check if a site is supported is to try it.
- **Panopto**
- **PanoptoList**
- **PanoptoPlaylist**
- **ParamountNetwork**
- **ParamountPlus**
- **ParamountPlusSeries**
- **ParamountPressExpress**
@@ -1088,7 +1071,6 @@ The only reliable way to check if a site is supported is to try it.
- **PiramideTVChannel**
- **pixiv:sketch**
- **pixiv:sketch:user**
- **Pladform**
- **PlanetMarathi**
- **Platzi**: [*platzi*](## "netrc machine")
- **PlatziCourse**: [*platzi*](## "netrc machine")
@@ -1377,8 +1359,9 @@ The only reliable way to check if a site is supported is to try it.
- **southpark.cc.com:español**
- **southpark.de**
- **southpark.lat**
- **southpark.nl**
- **southparkstudios.dk**
- **southparkstudios.co.uk**
- **southparkstudios.com.br**
- **southparkstudios.nu**
- **SovietsCloset**
- **SovietsClosetPlaylist**
- **SpankBang**
@@ -1557,7 +1540,6 @@ The only reliable way to check if a site is supported is to try it.
- **TVer**
- **tvigle**: Интернет-телевидение Tvigle.ru
- **TVIPlayer**
- **tvland.com**
- **TVN24**: (**Currently broken**)
- **TVNoe**: (**Currently broken**)
- **tvopengr:embed**: tvopen.gr embedded videos
@@ -1618,8 +1600,6 @@ The only reliable way to check if a site is supported is to try it.
- **Vbox7**
- **Veo**
- **Vesti**: Вести.Ru (**Currently broken**)
- **Vevo**
- **VevoPlaylist**
- **VGTV**: VGTV, BTTV, FTV, Aftenposten and Aftonbladet
- **vh1.com**
- **vhx:embed**: [*vimeo*](## "netrc machine")
@@ -1698,7 +1678,7 @@ The only reliable way to check if a site is supported is to try it.
- **vrsquare:section**
- **VRT**: VRT NWS, Flanders News, Flandern Info and Sporza
- **vrtmax**: [*vrtnu*](## "netrc machine") VRT MAX (formerly VRT NU)
- **VTM**: (**Currently broken**)
- **VTM**
- **VTV**
- **VTVGo**
- **VTXTV**: [*vtxtv*](## "netrc machine")

View File

@@ -14,7 +14,6 @@ from yt_dlp.extractor import (
NRKTVIE,
PBSIE,
CeskaTelevizeIE,
ComedyCentralIE,
DailymotionIE,
DemocracynowIE,
LyndaIE,
@@ -279,23 +278,6 @@ class TestNPOSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['nl']), 'fc6435027572b63fb4ab143abd5ad3f4')
@is_download_test
@unittest.skip('IE broken')
class TestMTVSubtitles(BaseTestSubtitles):
url = 'http://www.cc.com/video-clips/p63lk0/adam-devine-s-house-party-chasing-white-swans'
IE = ComedyCentralIE
def getInfoDict(self):
return super().getInfoDict()['entries'][0]
def test_allsubtitles(self):
self.DL.params['writesubtitles'] = True
self.DL.params['allsubtitles'] = True
subtitles = self.getSubtitles()
self.assertEqual(set(subtitles.keys()), {'en'})
self.assertEqual(md5(subtitles['en']), '78206b8d8a0cfa9da64dc026eea48961')
@is_download_test
class TestNRKSubtitles(BaseTestSubtitles):
url = 'http://tv.nrk.no/serie/ikke-gjoer-dette-hjemme/DMPV73000411/sesong-2/episode-1'

View File

@@ -84,8 +84,9 @@ lock 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lock 2024.10.22 py2exe .+
lock 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lock 2024.10.22 (?!\w+_exe).+ Python 3\.8
lock 2024.10.22 zip Python 3\.8
lock 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lock 2025.08.11 darwin_legacy_exe .+
'''
TEST_LOCKFILE_V2_TMPL = r'''%s
@@ -94,20 +95,23 @@ lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp 2024.10.22 py2exe .+
lockV2 yt-dlp/yt-dlp 2024.10.22 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp 2024.10.22 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp 2024.10.22 zip Python 3\.8
lockV2 yt-dlp/yt-dlp 2024.10.22 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp 2025.08.11 darwin_legacy_exe .+
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 py2exe .+
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 zip Python 3\.8
lockV2 yt-dlp/yt-dlp-nightly-builds 2024.10.22.051025 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp-nightly-builds 2025.08.12.233030 darwin_legacy_exe .+
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.045052 py2exe .+
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 linux_(?:armv7l|aarch64)_exe .+-glibc2\.(?:[12]?\d|30)\b
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 (?!\w+_exe).+ Python 3\.8
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 zip Python 3\.8
lockV2 yt-dlp/yt-dlp-master-builds 2024.10.22.060347 win(?:_x86)?_exe Python 3\.[78].+ Windows-(?:7-|2008ServerR2)
lockV2 yt-dlp/yt-dlp-master-builds 2025.08.12.232447 darwin_legacy_exe .+
'''
TEST_LOCKFILE_V2 = TEST_LOCKFILE_V2_TMPL % TEST_LOCKFILE_COMMENT
@@ -217,6 +221,10 @@ class TestUpdate(unittest.TestCase):
test( # linux_aarch64_exe w/glibc2.3 should only update to glibc<2.31 lock
lockfile, 'linux_aarch64_exe Python 3.8.0 (CPython aarch64 64bit) - Linux-6.5.0-1025-azure-aarch64-with-glibc2.3 (OpenSSL',
'2025.01.01', '2024.10.22')
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.11', '2025.08.11')
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.11', '2025.08.11', exact=True)
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.12', '2025.08.11')
test(lockfile, 'darwin_legacy_exe Python 3.10.5', '2025.08.12', None, exact=True)
# Forks can block updates to non-numeric tags rather than lock
test(TEST_LOCKFILE_FORK, 'zip Python 3.6.3', 'pr0000', None, repo='fork/yt-dlp')

View File

@@ -71,6 +71,8 @@ from yt_dlp.utils import (
iri_to_uri,
is_html,
js_to_json,
jwt_decode_hs256,
jwt_encode,
limit_length,
locked_file,
lowercase_escape,
@@ -2180,6 +2182,41 @@ Line 1
assert int_or_none(v=10) == 10, 'keyword passed positional should call function'
assert int_or_none(scale=0.1)(10) == 100, 'call after partial application should call the function'
_JWT_KEY = '12345678'
_JWT_HEADERS_1 = {'a': 'b'}
_JWT_HEADERS_2 = {'typ': 'JWT', 'alg': 'HS256'}
_JWT_HEADERS_3 = {'typ': 'JWT', 'alg': 'RS256'}
_JWT_HEADERS_4 = {'c': 'd', 'alg': 'ES256'}
_JWT_DECODED = {
'foo': 'bar',
'qux': 'baz',
}
_JWT_SIMPLE = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.fKojvTWqnjNTbsdoDTmYNc4tgYAG3h_SWRzM77iLH0U'
_JWT_WITH_EXTRA_HEADERS = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImEiOiJiIn0.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.Ia91-B77yasfYM7jsB6iVKLew-3rO6ITjNmjWUVXCvQ'
_JWT_WITH_REORDERED_HEADERS = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.slg-7COta5VOfB36p3tqV4MGPV6TTA_ouGnD48UEVq4'
_JWT_WITH_REORDERED_HEADERS_AND_RS256_ALG = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.XWp496oVgQnoits0OOocutdjxoaQwn4GUWWxUsKENPM'
_JWT_WITH_EXTRA_HEADERS_AND_ES256_ALG = 'eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCIsImMiOiJkIn0.eyJmb28iOiJiYXIiLCJxdXgiOiJiYXoifQ.oM_tc7IkfrwkoRh43rFFE1wOi3J3mQGwx7_lMyKQqDg'
def test_jwt_encode(self):
def test(expected, headers={}):
self.assertEqual(jwt_encode(self._JWT_DECODED, self._JWT_KEY, headers=headers), expected)
test(self._JWT_SIMPLE)
test(self._JWT_WITH_EXTRA_HEADERS, headers=self._JWT_HEADERS_1)
test(self._JWT_WITH_REORDERED_HEADERS, headers=self._JWT_HEADERS_2)
test(self._JWT_WITH_REORDERED_HEADERS_AND_RS256_ALG, headers=self._JWT_HEADERS_3)
test(self._JWT_WITH_EXTRA_HEADERS_AND_ES256_ALG, headers=self._JWT_HEADERS_4)
def test_jwt_decode_hs256(self):
def test(inp):
self.assertEqual(jwt_decode_hs256(inp), self._JWT_DECODED)
test(self._JWT_SIMPLE)
test(self._JWT_WITH_EXTRA_HEADERS)
test(self._JWT_WITH_REORDERED_HEADERS)
test(self._JWT_WITH_REORDERED_HEADERS_AND_RS256_ALG)
test(self._JWT_WITH_EXTRA_HEADERS_AND_ES256_ALG)
if __name__ == '__main__':
unittest.main()
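
For reference, a minimal sketch of how the new helper pairs with the existing decoder, reusing the key and payload from the test vectors above (HS256 is the default algorithm):

    from yt_dlp.utils import jwt_decode_hs256, jwt_encode

    # Encode with the default HS256 algorithm, then decode back to the original payload
    token = jwt_encode({'foo': 'bar', 'qux': 'baz'}, '12345678')
    assert jwt_decode_hs256(token) == {'foo': 'bar', 'qux': 'baz'}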

View File

@@ -138,6 +138,16 @@ _SIG_TESTS = [
'gN7a-hudCuAuPH6fByOk1_GNXN0yNMHShjZXS2VOgsEItAJz0tipeavEOmNdYN-wUtcEqD3bCXjc0iyKfAyZxCBGgIARwsSdQfJ2CJtt',
'JC2JfQdSswRAIgGBCxZyAfKyi0cjXCb3DqEctUw-NYdNmOEvaepit0zJAtIEsgOV2SXZjhSHMNy0NXNG_1kOyBf6HPuAuCduh-a',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es5.vflset/en_US/base.js',
'gN7a-hudCuAuPH6fByOk1_GNXN0yNMHShjZXS2VOgsEItAJz0tipeavEOmNdYN-wUtcEqD3bCXjc0iyKfAyZxCBGgIARwsSdQfJ2CJtt',
'ttJC2JfQdSswRAIgGBCxZyAfKyi0cjXCb3DqEctUw-NYdNmOEvaepit2zJAsIEggOVaSXZjhSHMNy0NXNG_1kOyBf6HPuAuCduh-',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es6.vflset/en_US/base.js',
'gN7a-hudCuAuPH6fByOk1_GNXN0yNMHShjZXS2VOgsEItAJz0tipeavEOmNdYN-wUtcEqD3bCXjc0iyKfAyZxCBGgIARwsSdQfJ2CJtt',
'ttJC2JfQdSswRAIgGBCxZyAfKyi0cjXCb3DqEctUw-NYdNmOEvaepit2zJAsIEggOVaSXZjhSHMNy0NXNG_1kOyBf6HPuAuCduh-',
),
]
_NSIG_TESTS = [
@@ -377,6 +387,14 @@ _NSIG_TESTS = [
'https://www.youtube.com/s/player/ef259203/player_ias_tce.vflset/en_US/base.js',
'rPqBC01nJpqhhi2iA2U', 'hY7dbiKFT51UIA',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es5.vflset/en_US/base.js',
'0hlOAlqjFszVvF4Z', 'R-H23bZGAsRFTg',
),
(
'https://www.youtube.com/s/player/010fbc8d/player_es6.vflset/en_US/base.js',
'0hlOAlqjFszVvF4Z', 'R-H23bZGAsRFTg',
),
]

View File

@@ -599,7 +599,7 @@ class YoutubeDL:
_NUMERIC_FIELDS = {
'width', 'height', 'asr', 'audio_channels', 'fps',
'tbr', 'abr', 'vbr', 'filesize', 'filesize_approx',
'timestamp', 'release_timestamp',
'timestamp', 'release_timestamp', 'available_at',
'duration', 'view_count', 'like_count', 'dislike_count', 'repost_count',
'average_rating', 'comment_count', 'age_limit',
'start_time', 'end_time',
@@ -609,7 +609,7 @@ class YoutubeDL:
_format_fields = {
# NB: Keep in sync with the docstring of extractor/common.py
'url', 'manifest_url', 'manifest_stream_number', 'ext', 'format', 'format_id', 'format_note',
'url', 'manifest_url', 'manifest_stream_number', 'ext', 'format', 'format_id', 'format_note', 'available_at',
'width', 'height', 'aspect_ratio', 'resolution', 'dynamic_range', 'tbr', 'abr', 'acodec', 'asr', 'audio_channels',
'vbr', 'fps', 'vcodec', 'container', 'filesize', 'filesize_approx', 'rows', 'columns', 'hls_media_playlist_data',
'player_url', 'protocol', 'fragment_base_url', 'fragments', 'is_from_start', 'is_dash_periods', 'request_data',

View File

@@ -500,6 +500,14 @@ def validate_options(opts):
'To let yt-dlp download and merge the best available formats, simply do not pass any format selection',
'If you know what you are doing and want only the best pre-merged format, use "-f b" instead to suppress this warning')))
# Common mistake: -f mp4
if opts.format == 'mp4':
warnings.append('.\n '.join((
'"-f mp4" selects the best pre-merged mp4 format which is often not what\'s intended',
'Pre-merged mp4 formats are not available from all sites, or may only be available in lower quality',
'To prioritize the best h264 video and aac audio in an mp4 container, use "-t mp4" instead',
'If you know what you are doing and want a pre-merged mp4 format, use "-f b[ext=mp4]" instead to suppress this warning')))
# --(postprocessor/downloader)-args without name
def report_args_compat(name, value, key1, key2=None, where=None):
if key1 in value and key2 not in value:
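
The `-f mp4` warning above points at two replacements; as a usage sketch (URL is a placeholder):

    yt-dlp -t mp4 URL              # prioritize h264 video + aac audio in an mp4 container
    yt-dlp -f "b[ext=mp4]" URL     # best pre-merged mp4; suppresses the warning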

View File

@@ -455,14 +455,26 @@ class FileDownloader:
self._finish_multiline_status()
return True, False
sleep_note = ''
if subtitle:
sleep_interval = self.params.get('sleep_interval_subtitles') or 0
else:
min_sleep_interval = self.params.get('sleep_interval') or 0
max_sleep_interval = self.params.get('max_sleep_interval') or 0
if available_at := info_dict.get('available_at'):
forced_sleep_interval = available_at - int(time.time())
if forced_sleep_interval > min_sleep_interval:
sleep_note = 'as required by the site'
min_sleep_interval = forced_sleep_interval
if forced_sleep_interval > max_sleep_interval:
max_sleep_interval = forced_sleep_interval
sleep_interval = random.uniform(
min_sleep_interval, self.params.get('max_sleep_interval') or min_sleep_interval)
min_sleep_interval, max_sleep_interval or min_sleep_interval)
if sleep_interval > 0:
self.to_screen(f'[download] Sleeping {sleep_interval:.2f} seconds ...')
self.to_screen(f'[download] Sleeping {sleep_interval:.2f} seconds {sleep_note}...')
time.sleep(sleep_interval)
ret = self.real_download(filename, info_dict)
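
For context, `available_at` is a Unix timestamp on a format dict; the sleep logic above compares it against the current time and waits out the difference. A hypothetical format entry carrying the field might look like this (all values are placeholders):

    import time

    fmt = {
        'format_id': 'hls-1080p',                 # hypothetical format id
        'url': 'https://example.com/video.m3u8',  # placeholder URL
        'ext': 'mp4',
        # becomes downloadable roughly 6 seconds from now; the downloader
        # will sleep at least this long before starting the download
        'available_at': int(time.time()) + 6,
    }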

View File

@@ -58,13 +58,7 @@ from .adn import (
ADNSeasonIE,
)
from .adobeconnect import AdobeConnectIE
from .adobetv import (
AdobeTVChannelIE,
AdobeTVEmbedIE,
AdobeTVIE,
AdobeTVShowIE,
AdobeTVVideoIE,
)
from .adobetv import AdobeTVVideoIE
from .adultswim import AdultSwimIE
from .aenetworks import (
AENetworksCollectionIE,
@@ -152,7 +146,6 @@ from .ard import (
ARDBetaMediathekIE,
ARDMediathekCollectionIE,
)
from .arkena import ArkenaIE
from .arnes import ArnesIE
from .art19 import (
Art19IE,
@@ -405,16 +398,12 @@ from .cloudflarestream import CloudflareStreamIE
from .cloudycdn import CloudyCDNIE
from .clubic import ClubicIE
from .clyp import ClypIE
from .cmt import CMTIE
from .cnbc import CNBCVideoIE
from .cnn import (
CNNIE,
CNNIndonesiaIE,
)
from .comedycentral import (
ComedyCentralIE,
ComedyCentralTVIE,
)
from .comedycentral import ComedyCentralIE
from .commonmistakes import (
BlobIE,
CommonMistakesIE,
@@ -636,7 +625,10 @@ from .fancode import (
FancodeVodIE,
)
from .fathom import FathomIE
from .faulio import FaulioLiveIE
from .faulio import (
FaulioIE,
FaulioLiveIE,
)
from .faz import FazIE
from .fc2 import (
FC2IE,
@@ -1187,15 +1179,7 @@ from .moview import MoviewPlayIE
from .moviezine import MoviezineIE
from .movingimage import MovingImageIE
from .msn import MSNIE
from .mtv import (
MTVDEIE,
MTVIE,
MTVItaliaIE,
MTVItaliaProgrammaIE,
MTVJapanIE,
MTVServicesEmbeddedIE,
MTVVideoIE,
)
from .mtv import MTVIE
from .muenchentv import MuenchenTVIE
from .murrtube import (
MurrtubeIE,
@@ -1337,12 +1321,7 @@ from .nhk import (
NhkVodProgramIE,
)
from .nhl import NHLIE
from .nick import (
NickBrIE,
NickDeIE,
NickIE,
NickRuIE,
)
from .nick import NickIE
from .niconico import (
NiconicoHistoryIE,
NiconicoIE,
@@ -1548,7 +1527,6 @@ from .pixivsketch import (
PixivSketchIE,
PixivSketchUserIE,
)
from .pladform import PladformIE
from .planetmarathi import PlanetMarathiIE
from .platzi import (
PlatziCourseIE,
@@ -1931,12 +1909,13 @@ from .soundgasm import (
SoundgasmProfileIE,
)
from .southpark import (
SouthParkComBrIE,
SouthParkCoUkIE,
SouthParkDeIE,
SouthParkDkIE,
SouthParkEsIE,
SouthParkIE,
SouthParkLatIE,
SouthParkNlIE,
)
from .sovietscloset import (
SovietsClosetIE,
@@ -1947,10 +1926,6 @@ from .spankbang import (
SpankBangPlaylistIE,
)
from .spiegel import SpiegelIE
from .spike import (
BellatorIE,
ParamountNetworkIE,
)
from .sport5 import Sport5IE
from .sportbox import SportBoxIE
from .sportdeutschland import SportDeutschlandIE
@@ -2215,7 +2190,6 @@ from .tvc import (
from .tver import TVerIE
from .tvigle import TvigleIE
from .tviplayer import TVIPlayerIE
from .tvland import TVLandIE
from .tvn24 import TVN24IE
from .tvnoe import TVNoeIE
from .tvopengr import (
@@ -2313,10 +2287,6 @@ from .varzesh3 import Varzesh3IE
from .vbox7 import Vbox7IE
from .veo import VeoIE
from .vesti import VestiIE
from .vevo import (
VevoIE,
VevoPlaylistIE,
)
from .vgtv import (
VGTVIE,
BTArticleIE,

View File

@@ -1,297 +1,100 @@
import functools
import re
from .common import InfoExtractor
from ..utils import (
ISO639Utils,
OnDemandPagedList,
clean_html,
determine_ext,
float_or_none,
int_or_none,
join_nonempty,
parse_duration,
str_or_none,
str_to_int,
unified_strdate,
url_or_none,
)
from ..utils.traversal import traverse_obj
class AdobeTVBaseIE(InfoExtractor):
def _call_api(self, path, video_id, query, note=None):
return self._download_json(
'http://tv.adobe.com/api/v4/' + path,
video_id, note, query=query)['data']
def _parse_subtitles(self, video_data, url_key):
subtitles = {}
for translation in video_data.get('translations', []):
vtt_path = translation.get(url_key)
if not vtt_path:
continue
lang = translation.get('language_w3c') or ISO639Utils.long2short(translation['language_medium'])
subtitles.setdefault(lang, []).append({
'ext': 'vtt',
'url': vtt_path,
})
return subtitles
def _parse_video_data(self, video_data):
video_id = str(video_data['id'])
title = video_data['title']
s3_extracted = False
formats = []
for source in video_data.get('videos', []):
source_url = source.get('url')
if not source_url:
continue
f = {
'format_id': source.get('quality_level'),
'fps': int_or_none(source.get('frame_rate')),
'height': int_or_none(source.get('height')),
'tbr': int_or_none(source.get('video_data_rate')),
'width': int_or_none(source.get('width')),
'url': source_url,
}
original_filename = source.get('original_filename')
if original_filename:
if not (f.get('height') and f.get('width')):
mobj = re.search(r'_(\d+)x(\d+)', original_filename)
if mobj:
f.update({
'height': int(mobj.group(2)),
'width': int(mobj.group(1)),
})
if original_filename.startswith('s3://') and not s3_extracted:
formats.append({
'format_id': 'original',
'quality': 1,
'url': original_filename.replace('s3://', 'https://s3.amazonaws.com/'),
})
s3_extracted = True
formats.append(f)
return {
'id': video_id,
'title': title,
'description': video_data.get('description'),
'thumbnail': video_data.get('thumbnail'),
'upload_date': unified_strdate(video_data.get('start_date')),
'duration': parse_duration(video_data.get('duration')),
'view_count': str_to_int(video_data.get('playcount')),
'formats': formats,
'subtitles': self._parse_subtitles(video_data, 'vtt'),
}
class AdobeTVEmbedIE(AdobeTVBaseIE):
_WORKING = False
IE_NAME = 'adobetv:embed'
_VALID_URL = r'https?://tv\.adobe\.com/embed/\d+/(?P<id>\d+)'
_TESTS = [{
'url': 'https://tv.adobe.com/embed/22/4153',
'md5': 'c8c0461bf04d54574fc2b4d07ac6783a',
'info_dict': {
'id': '4153',
'ext': 'flv',
'title': 'Creating Graphics Optimized for BlackBerry',
'description': 'md5:eac6e8dced38bdaae51cd94447927459',
'thumbnail': r're:https?://.+\.jpg',
'upload_date': '20091109',
'duration': 377,
'view_count': int,
},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
video_data = self._call_api(
'episode/' + video_id, video_id, {'disclosure': 'standard'})[0]
return self._parse_video_data(video_data)
class AdobeTVIE(AdobeTVBaseIE):
_WORKING = False
class AdobeTVVideoIE(InfoExtractor):
IE_NAME = 'adobetv'
_VALID_URL = r'https?://tv\.adobe\.com/(?:(?P<language>fr|de|es|jp)/)?watch/(?P<show_urlname>[^/]+)/(?P<id>[^/]+)'
_TESTS = [{
'url': 'http://tv.adobe.com/watch/the-complete-picture-with-julieanne-kost/quick-tip-how-to-draw-a-circle-around-an-object-in-photoshop/',
'md5': '9bc5727bcdd55251f35ad311ca74fa1e',
'info_dict': {
'id': '10981',
'ext': 'mp4',
'title': 'Quick Tip - How to Draw a Circle Around an Object in Photoshop',
'description': 'md5:99ec318dc909d7ba2a1f2b038f7d2311',
'thumbnail': r're:https?://.+\.jpg',
'upload_date': '20110914',
'duration': 60,
'view_count': int,
},
}]
def _real_extract(self, url):
language, show_urlname, urlname = self._match_valid_url(url).groups()
if not language:
language = 'en'
video_data = self._call_api(
'episode/get', urlname, {
'disclosure': 'standard',
'language': language,
'show_urlname': show_urlname,
'urlname': urlname,
})[0]
return self._parse_video_data(video_data)
class AdobeTVPlaylistBaseIE(AdobeTVBaseIE):
_PAGE_SIZE = 25
def _fetch_page(self, display_id, query, page):
page += 1
query['page'] = page
for element_data in self._call_api(
self._RESOURCE, display_id, query, f'Download Page {page}'):
yield self._process_data(element_data)
def _extract_playlist_entries(self, display_id, query):
return OnDemandPagedList(functools.partial(
self._fetch_page, display_id, query), self._PAGE_SIZE)
class AdobeTVShowIE(AdobeTVPlaylistBaseIE):
_WORKING = False
IE_NAME = 'adobetv:show'
_VALID_URL = r'https?://tv\.adobe\.com/(?:(?P<language>fr|de|es|jp)/)?show/(?P<id>[^/]+)'
_TESTS = [{
'url': 'http://tv.adobe.com/show/the-complete-picture-with-julieanne-kost',
'info_dict': {
'id': '36',
'title': 'The Complete Picture with Julieanne Kost',
'description': 'md5:fa50867102dcd1aa0ddf2ab039311b27',
},
'playlist_mincount': 136,
}]
_RESOURCE = 'episode'
_process_data = AdobeTVBaseIE._parse_video_data
def _real_extract(self, url):
language, show_urlname = self._match_valid_url(url).groups()
if not language:
language = 'en'
query = {
'disclosure': 'standard',
'language': language,
'show_urlname': show_urlname,
}
show_data = self._call_api(
'show/get', show_urlname, query)[0]
return self.playlist_result(
self._extract_playlist_entries(show_urlname, query),
str_or_none(show_data.get('id')),
show_data.get('show_name'),
show_data.get('show_description'))
class AdobeTVChannelIE(AdobeTVPlaylistBaseIE):
_WORKING = False
IE_NAME = 'adobetv:channel'
_VALID_URL = r'https?://tv\.adobe\.com/(?:(?P<language>fr|de|es|jp)/)?channel/(?P<id>[^/]+)(?:/(?P<category_urlname>[^/]+))?'
_TESTS = [{
'url': 'http://tv.adobe.com/channel/development',
'info_dict': {
'id': 'development',
},
'playlist_mincount': 96,
}]
_RESOURCE = 'show'
def _process_data(self, show_data):
return self.url_result(
show_data['url'], 'AdobeTVShow', str_or_none(show_data.get('id')))
def _real_extract(self, url):
language, channel_urlname, category_urlname = self._match_valid_url(url).groups()
if not language:
language = 'en'
query = {
'channel_urlname': channel_urlname,
'language': language,
}
if category_urlname:
query['category_urlname'] = category_urlname
return self.playlist_result(
self._extract_playlist_entries(channel_urlname, query),
channel_urlname)
class AdobeTVVideoIE(AdobeTVBaseIE):
IE_NAME = 'adobetv:video'
_VALID_URL = r'https?://video\.tv\.adobe\.com/v/(?P<id>\d+)'
_EMBED_REGEX = [r'<iframe[^>]+src=[\'"](?P<url>(?:https?:)?//video\.tv\.adobe\.com/v/\d+[^"]+)[\'"]']
_EMBED_REGEX = [r'<iframe[^>]+src=["\'](?P<url>(?:https?:)?//video\.tv\.adobe\.com/v/\d+)']
_TESTS = [{
# From https://helpx.adobe.com/acrobat/how-to/new-experience-acrobat-dc.html?set=acrobat--get-started--essential-beginners
'url': 'https://video.tv.adobe.com/v/2456/',
'url': 'https://video.tv.adobe.com/v/2456',
'md5': '43662b577c018ad707a63766462b1e87',
'info_dict': {
'id': '2456',
'ext': 'mp4',
'title': 'New experience with Acrobat DC',
'description': 'New experience with Acrobat DC',
'duration': 248.667,
'duration': 248.522,
'thumbnail': r're:https?://images-tv\.adobe\.com/.+\.jpg',
},
}, {
'url': 'https://video.tv.adobe.com/v/3463980/adobe-acrobat',
'info_dict': {
'id': '3463980',
'ext': 'mp4',
'title': 'Adobe Acrobat: How to Customize the Toolbar for Faster PDF Editing',
'description': 'md5:94368ab95ae24f9c1bee0cb346e03dc3',
'duration': 97.514,
'thumbnail': r're:https?://images-tv\.adobe\.com/.+\.jpg',
},
}]
_WEBPAGE_TESTS = [{
# FIXME: Invalid extension
'url': 'https://www.adobe.com/learn/acrobat/web/customize-toolbar',
# https://video.tv.adobe.com/v/3442499
'url': 'https://business.adobe.com/dx-fragments/summit/2025/marquees/S335/ondemand.live.html',
'info_dict': {
'id': '3463980',
'ext': 'm3u8',
'title': 'Adobe Acrobat: How to Customize the Toolbar for Faster PDF Editing',
'description': 'md5:94368ab95ae24f9c1bee0cb346e03dc3',
'duration': 97.557,
'id': '3442499',
'ext': 'mp4',
'title': 'S335 - Beyond Personalization: Creating Intent-Based Experiences at Scale',
'description': 'Beyond Personalization: Creating Intent-Based Experiences at Scale',
'duration': 2906.8,
'thumbnail': r're:https?://images-tv\.adobe\.com/.+\.jpg',
},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
video_data = self._parse_json(self._search_regex(
r'var\s+bridge\s*=\s*([^;]+);', webpage, 'bridged data'), video_id)
title = video_data['title']
video_data = self._search_json(
r'var\s+bridge\s*=', webpage, 'bridged data', video_id)
formats = []
sources = video_data.get('sources') or []
for source in sources:
source_src = source.get('src')
if not source_src:
continue
formats.append({
'filesize': int_or_none(source.get('kilobytes') or None, invscale=1000),
'format_id': join_nonempty(source.get('format'), source.get('label')),
'height': int_or_none(source.get('height') or None),
'tbr': int_or_none(source.get('bitrate') or None),
'width': int_or_none(source.get('width') or None),
'url': source_src,
})
for source in traverse_obj(video_data, (
'sources', lambda _, v: v['format'] != 'playlist' and url_or_none(v['src']),
)):
source_url = self._proto_relative_url(source['src'])
if determine_ext(source_url) == 'm3u8':
fmts = self._extract_m3u8_formats(
source_url, video_id, 'mp4', m3u8_id='hls', fatal=False)
else:
fmts = [{'url': source_url}]
# For both metadata and downloaded files the duration varies among
# formats. I just pick the max one
duration = max(filter(None, [
float_or_none(source.get('duration'), scale=1000)
for source in sources]))
for fmt in fmts:
fmt.update(traverse_obj(source, {
'duration': ('duration', {float_or_none(scale=1000)}),
'filesize': ('kilobytes', {float_or_none(invscale=1000)}),
'format_id': (('format', 'label'), {str}, all, {lambda x: join_nonempty(*x)}),
'height': ('height', {int_or_none}),
'tbr': ('bitrate', {int_or_none}),
'width': ('width', {int_or_none}),
}))
formats.extend(fmts)
subtitles = {}
for translation in traverse_obj(video_data, (
'translations', lambda _, v: url_or_none(v['vttPath']),
)):
lang = translation.get('language_w3c') or ISO639Utils.long2short(translation.get('language_medium')) or 'und'
subtitles.setdefault(lang, []).append({
'ext': 'vtt',
'url': self._proto_relative_url(translation['vttPath']),
})
return {
'id': video_id,
'formats': formats,
'title': title,
'description': video_data.get('description'),
'thumbnail': video_data.get('video', {}).get('poster'),
'duration': duration,
'subtitles': self._parse_subtitles(video_data, 'vttPath'),
'subtitles': subtitles,
**traverse_obj(video_data, {
'title': ('title', {clean_html}),
'description': ('description', {clean_html}, filter),
'thumbnail': ('video', 'poster', {self._proto_relative_url}, {url_or_none}),
}),
}
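
The reworked sources loop relies on traverse_obj branching: a lambda in the path acts as a filter that keeps only the list items it returns a truthy value for, and a missing key simply drops the item rather than erroring. A small sketch against made-up source data (the sample dict is illustrative):

from yt_dlp.utils import url_or_none
from yt_dlp.utils.traversal import traverse_obj

video_data = {'sources': [
    {'format': 'playlist', 'src': 'https://example.com/master.m3u8'},
    {'format': 'video', 'label': '720', 'src': '//example.com/720.mp4'},
    {'format': 'video', 'label': '1080', 'src': None},
]}

# Keeps only non-playlist entries whose 'src' is a valid (possibly
# protocol-relative) URL; here, just the 720 item
print(traverse_obj(video_data, (
    'sources', lambda _, v: v['format'] != 'playlist' and url_or_none(v['src']))))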

View File

@@ -1,150 +0,0 @@
from .common import InfoExtractor
from ..utils import (
ExtractorError,
float_or_none,
int_or_none,
parse_iso8601,
parse_qs,
try_get,
)
class ArkenaIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://
(?:
video\.(?:arkena|qbrick)\.com/play2/embed/player\?|
play\.arkena\.com/(?:config|embed)/avp/v\d/player/media/(?P<id>[^/]+)/[^/]+/(?P<account_id>\d+)
)
'''
# See https://support.arkena.com/display/PLAY/Ways+to+embed+your+video
_EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//play\.arkena\.com/embed/avp/.+?)\1']
_TESTS = [{
'url': 'https://video.qbrick.com/play2/embed/player?accountId=1034090&mediaId=d8ab4607-00090107-aab86310',
'md5': '97f117754e5f3c020f5f26da4a44ebaf',
'info_dict': {
'id': 'd8ab4607-00090107-aab86310',
'ext': 'mp4',
'title': 'EM_HT20_117_roslund_v2.mp4',
'timestamp': 1608285912,
'upload_date': '20201218',
'duration': 1429.162667,
'subtitles': {
'sv': 'count:3',
},
},
}, {
'url': 'https://play.arkena.com/embed/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411',
'only_matching': True,
}, {
'url': 'https://play.arkena.com/config/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411/?callbackMethod=jQuery1111023664739129262213_1469227693893',
'only_matching': True,
}, {
'url': 'http://play.arkena.com/config/avp/v1/player/media/327336/darkmatter/131064/?callbackMethod=jQuery1111002221189684892677_1469227595972',
'only_matching': True,
}, {
'url': 'http://play.arkena.com/embed/avp/v1/player/media/327336/darkmatter/131064/',
'only_matching': True,
}, {
'url': 'http://video.arkena.com/play2/embed/player?accountId=472718&mediaId=35763b3b-00090078-bf604299&pageStyling=styled',
'only_matching': True,
}]
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('id')
account_id = mobj.group('account_id')
# Handle http://video.arkena.com/play2/embed/player URL
if not video_id:
qs = parse_qs(url)
video_id = qs.get('mediaId', [None])[0]
account_id = qs.get('accountId', [None])[0]
if not video_id or not account_id:
raise ExtractorError('Invalid URL', expected=True)
media = self._download_json(
f'https://video.qbrick.com/api/v1/public/accounts/{account_id}/medias/{video_id}',
video_id, query={
# https://video.qbrick.com/docs/api/examples/library-api.html
'fields': 'asset/resources/*/renditions/*(height,id,language,links/*(href,mimeType),type,size,videos/*(audios/*(codec,sampleRate),bitrate,codec,duration,height,width),width),created,metadata/*(title,description),tags',
})
metadata = media.get('metadata') or {}
title = metadata['title']
duration = None
formats = []
thumbnails = []
subtitles = {}
for resource in media['asset']['resources']:
for rendition in (resource.get('renditions') or []):
rendition_type = rendition.get('type')
for i, link in enumerate(rendition.get('links') or []):
href = link.get('href')
if not href:
continue
if rendition_type == 'image':
thumbnails.append({
'filesize': int_or_none(rendition.get('size')),
'height': int_or_none(rendition.get('height')),
'id': rendition.get('id'),
'url': href,
'width': int_or_none(rendition.get('width')),
})
elif rendition_type == 'subtitle':
subtitles.setdefault(rendition.get('language') or 'en', []).append({
'url': href,
})
elif rendition_type == 'video':
f = {
'filesize': int_or_none(rendition.get('size')),
'format_id': rendition.get('id'),
'url': href,
}
video = try_get(rendition, lambda x: x['videos'][i], dict)
if video:
if not duration:
duration = float_or_none(video.get('duration'))
f.update({
'height': int_or_none(video.get('height')),
'tbr': int_or_none(video.get('bitrate'), 1000),
'vcodec': video.get('codec'),
'width': int_or_none(video.get('width')),
})
audio = try_get(video, lambda x: x['audios'][0], dict)
if audio:
f.update({
'acodec': audio.get('codec'),
'asr': int_or_none(audio.get('sampleRate')),
})
formats.append(f)
elif rendition_type == 'index':
mime_type = link.get('mimeType')
if mime_type == 'application/smil+xml':
formats.extend(self._extract_smil_formats(
href, video_id, fatal=False))
elif mime_type == 'application/x-mpegURL':
formats.extend(self._extract_m3u8_formats(
href, video_id, 'mp4', 'm3u8_native',
m3u8_id='hls', fatal=False))
elif mime_type == 'application/hds+xml':
formats.extend(self._extract_f4m_formats(
href, video_id, f4m_id='hds', fatal=False))
elif mime_type == 'application/dash+xml':
formats.extend(self._extract_mpd_formats(
href, video_id, mpd_id='dash', fatal=False))
elif mime_type == 'application/vnd.ms-sstr+xml':
formats.extend(self._extract_ism_formats(
href, video_id, ism_id='mss', fatal=False))
return {
'id': video_id,
'title': title,
'description': metadata.get('description'),
'timestamp': parse_iso8601(media.get('created')),
'thumbnails': thumbnails,
'subtitles': subtitles,
'duration': duration,
'tags': media.get('tags'),
'formats': formats,
}

View File

@@ -4,7 +4,7 @@ from .common import InfoExtractor
from ..utils import (
ExtractorError,
float_or_none,
jwt_encode_hs256,
jwt_encode,
try_get,
)
@@ -83,11 +83,10 @@ class ATVAtIE(InfoExtractor):
'nbf': int(not_before.timestamp()),
'exp': int(expire.timestamp()),
}
jwt_token = jwt_encode_hs256(payload, self._ENCRYPTION_KEY, headers={'kid': self._ACCESS_ID})
videos = self._download_json(
'https://vas-v4.p7s1video.net/4.0/getsources',
content_id, 'Downloading videos JSON', query={
'token': jwt_token.decode('utf-8'),
'token': jwt_encode(payload, self._ENCRYPTION_KEY, headers={'kid': self._ACCESS_ID}),
})
video_id, videos_data = next(iter(videos['data'].items()))
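
For reference, the token sent here is a standard HS256 JWT: base64url-encoded header and payload joined by a dot and signed with HMAC-SHA256. A minimal sketch of that construction in plain Python (make_jwt_hs256 is an illustrative helper, not the signature of yt-dlp's jwt_encode):

import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b'=')

def make_jwt_hs256(payload, key, headers=None):
    header = {'alg': 'HS256', 'typ': 'JWT', **(headers or {})}
    signing_input = b'.'.join(
        b64url(json.dumps(part, separators=(',', ':')).encode())
        for part in (header, payload))
    signature = hmac.new(key.encode(), signing_input, hashlib.sha256).digest()
    return (signing_input + b'.' + b64url(signature)).decode()

# e.g. make_jwt_hs256({'exp': 1756000000}, 'secret', headers={'kid': 'some-access-id'})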

View File

@@ -1,79 +1,47 @@
from .mtv import MTVServicesInfoExtractor
from ..utils import unified_strdate
from .mtv import MTVServicesBaseIE
class BetIE(MTVServicesInfoExtractor):
_WORKING = False
_VALID_URL = r'https?://(?:www\.)?bet\.com/(?:[^/]+/)+(?P<id>.+?)\.html'
_TESTS = [
{
'url': 'http://www.bet.com/news/politics/2014/12/08/in-bet-exclusive-obama-talks-race-and-racism.html',
'info_dict': {
'id': '07e96bd3-8850-3051-b856-271b457f0ab8',
'display_id': 'in-bet-exclusive-obama-talks-race-and-racism',
'ext': 'flv',
'title': 'A Conversation With President Obama',
'description': 'President Obama urges persistence in confronting racism and bias.',
'duration': 1534,
'upload_date': '20141208',
'thumbnail': r're:(?i)^https?://.*\.jpg$',
'subtitles': {
'en': 'mincount:2',
},
},
'params': {
# rtmp download
'skip_download': True,
},
class BetIE(MTVServicesBaseIE):
_VALID_URL = r'https?://(?:www\.)?bet\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
_TESTS = [{
'url': 'https://www.bet.com/video-clips/w9mk7v',
'info_dict': {
'id': '3022d121-d191-43fd-b5fb-b2c26f335497',
'ext': 'mp4',
'display_id': 'w9mk7v',
'title': 'New Normal',
'description': 'md5:d7898c124713b4646cecad9d16ff01f3',
'duration': 30.08,
'series': 'Tyler Perry\'s Sistas',
'season': 'Season 0',
'season_number': 0,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1755269073,
'upload_date': '20250815',
},
{
'url': 'http://www.bet.com/video/news/national/2014/justice-for-ferguson-a-community-reacts.html',
'info_dict': {
'id': '9f516bf1-7543-39c4-8076-dd441b459ba9',
'display_id': 'justice-for-ferguson-a-community-reacts',
'ext': 'flv',
'title': 'Justice for Ferguson: A Community Reacts',
'description': 'A BET News special.',
'duration': 1696,
'upload_date': '20141125',
'thumbnail': r're:(?i)^https?://.*\.jpg$',
'subtitles': {
'en': 'mincount:2',
},
},
'params': {
# rtmp download
'skip_download': True,
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.bet.com/episodes/nmce72/tyler-perry-s-sistas-heavy-is-the-crown-season-9-ep-5',
'info_dict': {
'id': '6427562b-3029-11f0-b405-16fff45bc035',
'ext': 'mp4',
'display_id': 'nmce72',
'title': 'Heavy Is the Crown',
'description': 'md5:1ed345d3157a50572d2464afcc7a652a',
'channel': 'BET',
'duration': 2550.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref',
'series': 'Tyler Perry\'s Sistas',
'season': 'Season 9',
'season_number': 9,
'episode': 'Episode 5',
'episode_number': 5,
'timestamp': 1755165600,
'upload_date': '20250814',
'release_timestamp': 1755129600,
'release_date': '20250814',
},
]
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/bet-mrss-player'
def _get_feed_query(self, uri):
return {
'uuid': uri,
}
def _extract_mgid(self, webpage):
return self._search_regex(r'data-uri="([^"]+)', webpage, 'mgid')
def _real_extract(self, url):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
mgid = self._extract_mgid(webpage)
videos_info = self._get_videos_info(mgid)
info_dict = videos_info['entries'][0]
upload_date = unified_strdate(self._html_search_meta('date', webpage))
description = self._html_search_meta('description', webpage)
info_dict.update({
'display_id': display_id,
'description': description,
'upload_date': upload_date,
})
return info_dict
'params': {'skip_download': 'm3u8'},
'skip': 'Requires provider sign-in',
}]

View File

@@ -304,7 +304,7 @@ class BilibiliBaseIE(InfoExtractor):
class BiliBiliIE(BilibiliBaseIE):
_VALID_URL = r'https?://(?:www\.)?bilibili\.com/(?:video/|festival/[^/?#]+\?(?:[^#]*&)?bvid=)[aAbB][vV](?P<id>[^/?#&]+)'
_VALID_URL = r'https?://(?:www\.)?bilibili\.com/(?:video/|festival/[^/?#]+\?(?:[^#]*&)?bvid=)(?P<prefix>[aAbB][vV])(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'https://www.bilibili.com/video/BV13x41117TL',
@@ -563,7 +563,7 @@ class BiliBiliIE(BilibiliBaseIE):
},
}],
}, {
'note': '301 redirect to bangumi link',
'note': 'redirect from bvid to bangumi link via redirect_url',
'url': 'https://www.bilibili.com/video/BV1TE411f7f1',
'info_dict': {
'id': '288525',
@@ -580,7 +580,27 @@ class BiliBiliIE(BilibiliBaseIE):
'duration': 1183.957,
'timestamp': 1571648124,
'upload_date': '20191021',
'thumbnail': r're:^https?://.*\.(jpg|jpeg|png)$',
'thumbnail': r're:https?://.*\.(jpg|jpeg|png)$',
},
}, {
'note': 'redirect from aid to bangumi link via redirect_url',
'url': 'https://www.bilibili.com/video/av114868162141203',
'info_dict': {
'id': '1933368',
'title': 'PV 引爆变革的起点',
'ext': 'mp4',
'duration': 63.139,
'series': '时光代理人',
'series_id': '5183',
'season': '第三季',
'season_number': 4,
'season_id': '105212',
'episode': '引爆变革的起点',
'episode_number': 1,
'episode_id': '1933368',
'timestamp': 1752849001,
'upload_date': '20250718',
'thumbnail': r're:https?://.*\.(jpg|jpeg|png)$',
},
}, {
'note': 'video has subtitles, which requires login',
@@ -636,7 +656,7 @@ class BiliBiliIE(BilibiliBaseIE):
}]
def _real_extract(self, url):
video_id = self._match_id(url)
video_id, prefix = self._match_valid_url(url).group('id', 'prefix')
headers = self.geo_verification_headers()
webpage, urlh = self._download_webpage_handle(url, video_id, headers=headers)
if not self._match_valid_url(urlh.url):
@@ -644,7 +664,24 @@ class BiliBiliIE(BilibiliBaseIE):
headers['Referer'] = url
initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id)
initial_state = self._search_json(r'window\.__INITIAL_STATE__\s*=', webpage, 'initial state', video_id, default=None)
if not initial_state:
if self._search_json(r'\bwindow\._riskdata_\s*=', webpage, 'risk', video_id, default={}).get('v_voucher'):
raise ExtractorError('You have exceeded the rate limit. Try again later', expected=True)
query = {'platform': 'web'}
prefix = prefix.upper()
if prefix == 'BV':
query['bvid'] = prefix + video_id
elif prefix == 'AV':
query['aid'] = video_id
detail = self._download_json(
'https://api.bilibili.com/x/web-interface/wbi/view/detail', video_id,
note='Downloading redirection URL', errnote='Failed to download redirection URL',
query=self._sign_wbi(query, video_id), headers=headers)
new_url = traverse_obj(detail, ('data', 'View', 'redirect_url', {url_or_none}))
if new_url and BiliBiliBangumiIE.suitable(new_url):
return self.url_result(new_url, BiliBiliBangumiIE)
raise ExtractorError('Unable to extract initial state')
if traverse_obj(initial_state, ('error', 'trueCode')) == -403:
self.raise_login_required()

View File

@@ -1,55 +0,0 @@
from .mtv import MTVIE
# TODO: Remove - Reason: Outdated Site
class CMTIE(MTVIE): # XXX: Do not subclass from concrete IE
_WORKING = False
IE_NAME = 'cmt.com'
_VALID_URL = r'https?://(?:www\.)?cmt\.com/(?:videos|shows|(?:full-)?episodes|video-clips)/(?P<id>[^/]+)'
_TESTS = [{
'url': 'http://www.cmt.com/videos/garth-brooks/989124/the-call-featuring-trisha-yearwood.jhtml#artist=30061',
'md5': 'e6b7ef3c4c45bbfae88061799bbba6c2',
'info_dict': {
'id': '989124',
'ext': 'mp4',
'title': 'Garth Brooks - "The Call (featuring Trisha Yearwood)"',
'description': 'Blame It All On My Roots',
},
'skip': 'Video not available',
}, {
'url': 'http://www.cmt.com/videos/misc/1504699/still-the-king-ep-109-in-3-minutes.jhtml#id=1739908',
'md5': 'e61a801ca4a183a466c08bd98dccbb1c',
'info_dict': {
'id': '1504699',
'ext': 'mp4',
'title': 'Still The King Ep. 109 in 3 Minutes',
'description': 'Relive or catch up with Still The King by watching this recap of season 1, episode 9.',
'timestamp': 1469421000.0,
'upload_date': '20160725',
},
}, {
'url': 'http://www.cmt.com/shows/party-down-south/party-down-south-ep-407-gone-girl/1738172/playlist/#id=1738172',
'only_matching': True,
}, {
'url': 'http://www.cmt.com/full-episodes/537qb3/nashville-the-wayfaring-stranger-season-5-ep-501',
'only_matching': True,
}, {
'url': 'http://www.cmt.com/video-clips/t9e4ci/nashville-juliette-in-2-minutes',
'only_matching': True,
}]
def _extract_mgid(self, webpage, url):
mgid = self._search_regex(
r'MTVN\.VIDEO\.contentUri\s*=\s*([\'"])(?P<mgid>.+?)\1',
webpage, 'mgid', group='mgid', default=None)
if not mgid:
mgid = self._extract_triforce_mgid(webpage)
return mgid
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
mgid = self._extract_mgid(webpage, url)
return self.url_result(f'http://media.mtvnservices.com/embed/{mgid}')

View File

@@ -1,55 +1,27 @@
from .mtv import MTVServicesInfoExtractor
from .mtv import MTVServicesBaseIE
class ComedyCentralIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?cc\.com/(?:episodes|video(?:-clips)?|collection-playlist|movies)/(?P<id>[0-9a-z]{6})'
_FEED_URL = 'http://comedycentral.com/feeds/mrss/'
class ComedyCentralIE(MTVServicesBaseIE):
_VALID_URL = r'https?://(?:www\.)?cc\.com/video-clips/(?P<id>[\da-z]{6})'
_TESTS = [{
'url': 'http://www.cc.com/video-clips/5ke9v2/the-daily-show-with-trevor-noah-doc-rivers-and-steve-ballmer---the-nba-player-strike',
'md5': 'b8acb347177c680ff18a292aa2166f80',
'url': 'https://www.cc.com/video-clips/wl12cx',
'info_dict': {
'id': '89ccc86e-1b02-4f83-b0c9-1d9592ecd025',
'id': 'dec6953e-80c8-43b3-96cd-05e9230e704d',
'ext': 'mp4',
'title': 'The Daily Show with Trevor Noah|August 28, 2020|25|25149|Doc Rivers and Steve Ballmer - The NBA Player Strike',
'description': 'md5:5334307c433892b85f4f5e5ac9ef7498',
'timestamp': 1598670000,
'upload_date': '20200829',
'display_id': 'wl12cx',
'title': 'Alison Brie and Dave Franco -"Together"- Extended Interview',
'description': 'md5:ec68e38d3282f863de9cde0ce5cd231c',
'duration': 516.76,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'The Daily Show',
'season': 'Season 30',
'season_number': 30,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1753973314,
'upload_date': '20250731',
'release_timestamp': 1753977914,
'release_date': '20250731',
},
}, {
'url': 'http://www.cc.com/episodes/pnzzci/drawn-together--american-idol--parody-clip-show-season-3-ep-314',
'only_matching': True,
}, {
'url': 'https://www.cc.com/video/k3sdvm/the-daily-show-with-jon-stewart-exclusive-the-fourth-estate',
'only_matching': True,
}, {
'url': 'https://www.cc.com/collection-playlist/cosnej/stand-up-specials/t6vtjb',
'only_matching': True,
}, {
'url': 'https://www.cc.com/movies/tkp406/a-cluesterfuenke-christmas',
'only_matching': True,
'params': {'skip_download': 'm3u8'},
}]
class ComedyCentralTVIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?comedycentral\.tv/folgen/(?P<id>[0-9a-z]{6})'
_TESTS = [{
'url': 'https://www.comedycentral.tv/folgen/pxdpec/josh-investigates-klimawandel-staffel-1-ep-1',
'info_dict': {
'id': '15907dc3-ec3c-11e8-a442-0e40cf2fc285',
'ext': 'mp4',
'title': 'Josh Investigates',
'description': 'Steht uns das Ende der Welt bevor?',
},
}]
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
_GEO_COUNTRIES = ['DE']
def _get_feed_query(self, uri):
return {
'accountOverride': 'intl.mtvi.com',
'arcEp': 'web.cc.tv',
'ep': 'b9032c3a',
'imageEp': 'web.cc.tv',
'mgid': uri,
}

View File

@@ -263,6 +263,7 @@ class InfoExtractor:
* a string in the format of CLIENT[:OS]
* a list or a tuple of CLIENT[:OS] strings or ImpersonateTarget instances
* a boolean value; True means any impersonate target is sufficient
* available_at Unix timestamp of when a format will be available to download
* downloader_options A dictionary of downloader options
(For internal use only)
* http_chunk_size Chunk size for HTTP downloads
@@ -1527,11 +1528,11 @@ class InfoExtractor:
r'>\s*(?:18\s+U(?:\.S\.C\.|SC)\s+)?(?:§+\s*)?2257\b',
]
age_limit = 0
age_limit = None
for marker in AGE_LIMIT_MARKERS:
mobj = re.search(marker, html)
if mobj:
age_limit = max(age_limit, int(traverse_obj(mobj, 1, default=18)))
age_limit = max(age_limit or 0, int(traverse_obj(mobj, 1, default=18)))
return age_limit
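
A compressed illustration of the behavioural change above: a page with no age-limit marker now yields None instead of 0, while a match still produces 18 (marker list trimmed to the single pattern shown, capture-group handling simplified):

import re

AGE_LIMIT_MARKERS = [r'>\s*(?:18\s+U(?:\.S\.C\.|SC)\s+)?(?:§+\s*)?2257\b']

def rta_search(html):
    age_limit = None
    for marker in AGE_LIMIT_MARKERS:
        if re.search(marker, html):
            age_limit = max(age_limit or 0, 18)
    return age_limit

print(rta_search('<p>nothing relevant</p>'))       # None (previously 0)
print(rta_search('<p>18 U.S.C. 2257 notice</p>'))  # 18
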
def _media_rating_search(self, html):
@@ -2968,7 +2969,7 @@ class InfoExtractor:
else:
codecs = parse_codecs(codec_str)
if content_type not in ('video', 'audio', 'text'):
if mime_type == 'image/jpeg':
if mime_type in ('image/avif', 'image/jpeg'):
content_type = mime_type
elif codecs.get('vcodec', 'none') != 'none':
content_type = 'video'
@@ -3028,14 +3029,14 @@ class InfoExtractor:
'manifest_url': mpd_url,
'filesize': filesize,
}
elif content_type == 'image/jpeg':
elif content_type in ('image/avif', 'image/jpeg'):
# See test case in VikiIE
# https://www.viki.com/videos/1175236v-choosing-spouse-by-lottery-episode-1
f = {
'format_id': format_id,
'ext': 'mhtml',
'manifest_url': mpd_url,
'format_note': 'DASH storyboards (jpeg)',
'format_note': f'DASH storyboards ({mimetype2ext(mime_type)})',
'acodec': 'none',
'vcodec': 'none',
}
@@ -3107,7 +3108,6 @@ class InfoExtractor:
else:
# $Number*$ or $Time$ in media template with S list available
# Example $Number*$: http://www.svtplay.se/klipp/9023742/stopptid-om-bjorn-borg
# Example $Time$: https://play.arkena.com/embed/avp/v2/player/media/b41dda37-d8e7-4d3f-b1b5-9a9db578bdfe/1/129411
representation_ms_info['fragments'] = []
segment_time = 0
segment_d = None
@@ -3177,7 +3177,7 @@ class InfoExtractor:
'url': mpd_url or base_url,
'fragment_base_url': base_url,
'fragments': [],
'protocol': 'http_dash_segments' if mime_type != 'image/jpeg' else 'mhtml',
'protocol': 'mhtml' if mime_type in ('image/avif', 'image/jpeg') else 'http_dash_segments',
})
if 'initialization_url' in representation_ms_info:
initialization_url = representation_ms_info['initialization_url']
@@ -3192,7 +3192,7 @@ class InfoExtractor:
else:
# Assuming direct URL to unfragmented media.
f['url'] = base_url
if content_type in ('video', 'audio', 'image/jpeg'):
if content_type in ('video', 'audio', 'image/avif', 'image/jpeg'):
f['manifest_stream_number'] = stream_numbers[f['url']]
stream_numbers[f['url']] += 1
period_entry['formats'].append(f)

View File

@@ -171,17 +171,6 @@ class DailymotionIE(DailymotionBaseInfoExtractor):
'view_count': int,
},
'skip': 'video gone',
}, {
# Vevo video
'url': 'http://www.dailymotion.com/video/x149uew_katy-perry-roar-official_musi',
'info_dict': {
'title': 'Roar (Official)',
'id': 'USUV71301934',
'ext': 'mp4',
'uploader': 'Katy Perry',
'upload_date': '20130905',
},
'skip': 'Invalid URL',
}, {
# age-restricted video
'url': 'http://www.dailymotion.com/video/xyh2zz_leanna-decker-cyber-girl-of-the-year-desires-nude-playboy-plus_redband',

View File

@@ -90,10 +90,6 @@ class DisneyIE(InfoExtractor):
webpage, 'embed data'), video_id)
video_data = page_data['video']
for external in video_data.get('externals', []):
if external.get('source') == 'vevo':
return self.url_result('vevo:' + external['data_id'], 'Vevo')
video_id = video_data['id']
title = video_data['title']

View File

@@ -2,18 +2,152 @@ import re
import urllib.parse
from .common import InfoExtractor
from ..utils import js_to_json, url_or_none
from ..utils import int_or_none, js_to_json, url_or_none
from ..utils.traversal import traverse_obj
class FaulioLiveIE(InfoExtractor):
class FaulioBaseIE(InfoExtractor):
_DOMAINS = (
'aloula.sba.sa',
'bahry.com',
'maraya.sba.net.ae',
'sat7plus.org',
)
_VALID_URL = fr'https?://(?:{"|".join(map(re.escape, _DOMAINS))})/(?:(?:en|ar|fa)/)?live/(?P<id>[a-zA-Z0-9-]+)'
_LANGUAGES = ('ar', 'en', 'fa')
_BASE_URL_RE = fr'https?://(?:{"|".join(map(re.escape, _DOMAINS))})/(?:(?:{"|".join(_LANGUAGES)})/)?'
def _get_headers(self, url):
parsed_url = urllib.parse.urlparse(url)
return {
'Referer': url,
'Origin': f'{parsed_url.scheme}://{parsed_url.hostname}',
}
def _get_api_base(self, url, video_id):
webpage = self._download_webpage(url, video_id)
config_data = self._search_json(
r'window\.__NUXT__\.config=', webpage, 'config', video_id, transform_source=js_to_json)
return config_data['public']['TRANSLATIONS_API_URL']
class FaulioIE(FaulioBaseIE):
_VALID_URL = fr'{FaulioBaseIE._BASE_URL_RE}(?:episode|media)/(?P<id>[a-zA-Z0-9-]+)'
_TESTS = [{
'url': 'https://aloula.sba.sa/en/episode/29102',
'info_dict': {
'id': 'aloula.faulio.com_29102',
'ext': 'mp4',
'display_id': 'هذا-مكانك-03-004-v-29102',
'title': 'الحلقة 4',
'episode': 'الحلقة 4',
'description': '',
'series': 'هذا مكانك',
'season': 'Season 3',
'season_number': 3,
'episode_number': 4,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 4855,
'age_limit': 3,
},
}, {
'url': 'https://bahry.com/en/media/1191',
'info_dict': {
'id': 'bahry.faulio.com_1191',
'ext': 'mp4',
'display_id': 'Episode-4-1191',
'title': 'Episode 4',
'episode': 'Episode 4',
'description': '',
'series': 'Wild Water',
'season': 'Season 1',
'season_number': 1,
'episode_number': 4,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 1653,
'age_limit': 0,
},
}, {
'url': 'https://maraya.sba.net.ae/episode/127735',
'info_dict': {
'id': 'maraya.faulio.com_127735',
'ext': 'mp4',
'display_id': 'عبدالله-الهاجري---عبدالرحمن-المطروشي-127735',
'title': 'عبدالله الهاجري - عبدالرحمن المطروشي',
'episode': 'عبدالله الهاجري - عبدالرحمن المطروشي',
'description': 'md5:53de01face66d3d6303221e5a49388a0',
'series': 'أبناؤنا في الخارج',
'season': 'Season 3',
'season_number': 3,
'episode_number': 7,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 1316,
'age_limit': 0,
},
}, {
'url': 'https://sat7plus.org/episode/18165',
'info_dict': {
'id': 'sat7.faulio.com_18165',
'ext': 'mp4',
'display_id': 'ep-13-ADHD-18165',
'title': 'ADHD and creativity',
'episode': 'ADHD and creativity',
'description': '',
'series': 'ADHD Podcast',
'season': 'Season 1',
'season_number': 1,
'episode_number': 13,
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 2492,
'age_limit': 0,
},
}, {
'url': 'https://aloula.sba.sa/en/episode/0',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
api_base = self._get_api_base(url, video_id)
video_info = self._download_json(f'{api_base}/video/{video_id}', video_id, fatal=False)
player_info = self._download_json(f'{api_base}/video/{video_id}/player', video_id)
headers = self._get_headers(url)
formats = []
subtitles = {}
if hls_url := traverse_obj(player_info, ('settings', 'protocols', 'hls', {url_or_none})):
fmts, subs = self._extract_m3u8_formats_and_subtitles(
hls_url, video_id, 'mp4', m3u8_id='hls', fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
if mpd_url := traverse_obj(player_info, ('settings', 'protocols', 'dash', {url_or_none})):
fmts, subs = self._extract_mpd_formats_and_subtitles(
mpd_url, video_id, mpd_id='dash', fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
return {
'id': f'{urllib.parse.urlparse(api_base).hostname}_{video_id}',
**traverse_obj(traverse_obj(video_info, ('blocks', 0)), {
'display_id': ('slug', {str}),
'title': ('title', {str}),
'episode': ('title', {str}),
'description': ('description', {str}),
'series': ('program_title', {str}),
'season_number': ('season_number', {int_or_none}),
'episode_number': ('episode', {int_or_none}),
'thumbnail': ('image', {url_or_none}),
'duration': ('duration', 'total', {int_or_none}),
'age_limit': ('age_rating', {int_or_none}),
}),
'formats': formats,
'subtitles': subtitles,
'http_headers': headers,
}
class FaulioLiveIE(FaulioBaseIE):
_VALID_URL = fr'{FaulioBaseIE._BASE_URL_RE}live/(?P<id>[a-zA-Z0-9-]+)'
_TESTS = [{
'url': 'https://aloula.sba.sa/live/saudiatv',
'info_dict': {
@@ -69,27 +203,24 @@ class FaulioLiveIE(InfoExtractor):
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
config_data = self._search_json(
r'window\.__NUXT__\.config=', webpage, 'config', video_id, transform_source=js_to_json)
api_base = config_data['public']['TRANSLATIONS_API_URL']
api_base = self._get_api_base(url, video_id)
channel = traverse_obj(
self._download_json(f'{api_base}/channels', video_id),
(lambda k, v: v['url'] == video_id, any))
headers = self._get_headers(url)
formats = []
subtitles = {}
if hls_url := traverse_obj(channel, ('streams', 'hls', {url_or_none})):
fmts, subs = self._extract_m3u8_formats_and_subtitles(
hls_url, video_id, 'mp4', m3u8_id='hls', live=True, fatal=False)
hls_url, video_id, 'mp4', m3u8_id='hls', live=True, fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
if mpd_url := traverse_obj(channel, ('streams', 'mpd', {url_or_none})):
fmts, subs = self._extract_mpd_formats_and_subtitles(
mpd_url, video_id, mpd_id='dash', fatal=False)
mpd_url, video_id, mpd_id='dash', fatal=False, headers=headers)
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
@@ -101,5 +232,6 @@ class FaulioLiveIE(InfoExtractor):
}),
'formats': formats,
'subtitles': subtitles,
'http_headers': headers,
'is_live': True,
}

View File

@@ -363,13 +363,7 @@ class FranceTVSiteIE(FranceTVBaseInfoExtractor):
display_id = self._match_id(url)
webpage = self._download_webpage(url, display_id)
nextjs_data = self._search_nextjs_v13_data(webpage, display_id)
if get_first(nextjs_data, ('isLive', {bool})):
# For livestreams we need the id of the stream instead of the currently airing episode id
video_id = get_first(nextjs_data, ('options', 'id', {str}))
else:
video_id = get_first(nextjs_data, ('video', ('playerReplayId', 'siId'), {str}))
video_id = get_first(nextjs_data, ('options', 'id', {str}))
if not video_id:
raise ExtractorError('Unable to extract video ID')

View File

@@ -121,7 +121,6 @@ class GenericIE(InfoExtractor):
'id': 'cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867',
'ext': 'mp4',
'title': 'čauky lidi 70 finall',
'age_limit': 0,
'description': 'md5:47b2673a5b76780d9d329783e1fbf5aa',
'direct': True,
'duration': 318.0,
@@ -244,7 +243,6 @@ class GenericIE(InfoExtractor):
'id': 'paris-d-moll',
'ext': 'mp4',
'title': 'Paris d-moll',
'age_limit': 0,
'description': 'md5:319e37ea5542293db37e1e13072fe330',
'thumbnail': r're:https?://www\.filmarkivet\.se/wp-content/uploads/.+\.jpg',
},
@@ -255,7 +253,6 @@ class GenericIE(InfoExtractor):
'info_dict': {
'id': '60413035',
'title': 'Etter ett års planlegging, klaffet endelig alt: - Jeg måtte ta en liten dans',
'age_limit': 0,
'description': 'md5:bbb4e12e42e78609a74fd421b93b1239',
'thumbnail': r're:https?://www\.dagbladet\.no/images/.+',
},
@@ -267,7 +264,6 @@ class GenericIE(InfoExtractor):
'info_dict': {
'id': 'single_clip',
'title': 'Single Clip player examples',
'age_limit': 0,
},
'playlist_count': 3,
}, {
@@ -324,7 +320,6 @@ class GenericIE(InfoExtractor):
'id': 'videos-1',
'ext': 'mp4',
'title': 'Videos & Audio - King Machine (1)',
'age_limit': 0,
'description': 'Browse King Machine videos & audio for sweet media. Your eyes will thank you.',
'thumbnail': r're:https?://media\.indiedb\.com/cache/images/.+\.jpg',
'_old_archive_ids': ['generic videos'],
@@ -363,7 +358,6 @@ class GenericIE(InfoExtractor):
'id': '21217',
'ext': 'mp4',
'title': '40 ночей (2016) - BogMedia.org',
'age_limit': 0,
'description': 'md5:4e6d7d622636eb7948275432eb256dc3',
'display_id': '40-nochey-2016',
'thumbnail': r're:https?://bogmedia\.org/contents/videos_screenshots/.+\.jpg',
@@ -378,7 +372,6 @@ class GenericIE(InfoExtractor):
'id': '18485',
'ext': 'mp4',
'title': 'Клип: Ленинград - ЗОЖ скачать, смотреть онлайн | Youix.com',
'age_limit': 0,
'display_id': 'leningrad-zoj',
'thumbnail': r're:https?://youix\.com/contents/videos_screenshots/.+\.jpg',
},
@@ -419,7 +412,6 @@ class GenericIE(InfoExtractor):
'id': '105',
'ext': 'mp4',
'title': 'Kelis - 4th Of July / Embed Player',
'age_limit': 0,
'display_id': 'kelis-4th-of-july',
'thumbnail': r're:https?://www\.kvs-demo\.com/contents/videos_screenshots/.+\.jpg',
},
@@ -430,9 +422,8 @@ class GenericIE(InfoExtractor):
'info_dict': {
'id': 'beltzlaw-1',
'ext': 'mp4',
'title': 'Beltz Law Group | Dallas Traffic Ticket, Accident & Criminal Attorney (1)',
'age_limit': 0,
'description': 'md5:5bdf23fcb76801dc3b31e74cabf82147',
'title': str,
'description': str,
'thumbnail': r're:https?://beltzlaw\.com/wp-content/uploads/.+\.jpg',
'timestamp': int, # varies
'upload_date': str,
@@ -447,7 +438,6 @@ class GenericIE(InfoExtractor):
'id': 'cine-1',
'ext': 'webm',
'title': 'CINE.AR (1)',
'age_limit': 0,
'description': 'md5:a4e58f9e2291c940e485f34251898c4a',
'thumbnail': r're:https?://cine\.ar/img/.+\.png',
'_old_archive_ids': ['generic cine'],
@@ -461,7 +451,6 @@ class GenericIE(InfoExtractor):
'id': 'ipy2AcGL',
'ext': 'mp4',
'title': 'Hoe een bladvlo dit verwoestende Japanse onkruid moet vernietigen',
'age_limit': 0,
'description': 'md5:6a9d644bab0dc2dc06849c2505d8383d',
'duration': 111.0,
'thumbnail': r're:https?://images\.nu\.nl/.+\.jpg',
@@ -477,7 +466,6 @@ class GenericIE(InfoExtractor):
'id': 'porsche-911-gt3-rs-rij-impressie-2',
'ext': 'mp4',
'title': 'Test: Porsche 911 GT3 RS - AutoWeek',
'age_limit': 0,
'description': 'md5:a17b5bd84288448d8f11b838505718fc',
'direct': True,
'thumbnail': r're:https?://images\.autoweek\.nl/.+',
@@ -493,7 +481,6 @@ class GenericIE(InfoExtractor):
'id': 'k6gl2kt2eq',
'ext': 'mp4',
'title': 'Breezy HR\'s ATS helps you find & hire employees sooner',
'age_limit': 0,
'average_rating': 4.5,
'description': 'md5:eee75fdd3044c538003f3be327ba01e1',
'duration': 60.1,
@@ -509,7 +496,6 @@ class GenericIE(InfoExtractor):
'id': 'videojs_hls_test',
'ext': 'mp4',
'title': 'video',
'age_limit': 0,
'duration': 1800,
},
'params': {'skip_download': 'm3u8'},

View File

@@ -1,8 +1,8 @@
from .arkena import ArkenaIE
from .common import InfoExtractor
class LcpPlayIE(ArkenaIE): # XXX: Do not subclass from concrete IE
class LcpPlayIE(InfoExtractor):
_WORKING = False
_VALID_URL = r'https?://play\.lcp\.fr/embed/(?P<id>[^/]+)/(?P<account_id>[^/]+)/[^/]+/[^/]+'
_TESTS = [{
'url': 'http://play.lcp.fr/embed/327336/131064/darkmatter/0',
@@ -21,24 +21,9 @@ class LcpPlayIE(ArkenaIE): # XXX: Do not subclass from concrete IE
class LcpIE(InfoExtractor):
_WORKING = False
_VALID_URL = r'https?://(?:www\.)?lcp\.fr/(?:[^/]+/)*(?P<id>[^/]+)'
_TESTS = [{
# arkena embed
'url': 'http://www.lcp.fr/la-politique-en-video/schwartzenberg-prg-preconise-francois-hollande-de-participer-une-primaire',
'md5': 'b8bd9298542929c06c1c15788b1f277a',
'info_dict': {
'id': 'd56d03e9',
'ext': 'mp4',
'title': 'Schwartzenberg (PRG) préconise à François Hollande de participer à une primaire à gauche',
'description': 'md5:96ad55009548da9dea19f4120c6c16a8',
'timestamp': 1456488895,
'upload_date': '20160226',
},
'params': {
'skip_download': True,
},
}, {
# dailymotion live stream
'url': 'http://www.lcp.fr/le-direct',
'info_dict': {

View File

@@ -1,15 +1,73 @@
import re
from .common import InfoExtractor
from ..utils import (
clean_html,
determine_ext,
extract_attributes,
int_or_none,
mimetype2ext,
parse_iso8601,
parse_resolution,
str_or_none,
url_or_none,
)
from ..utils.traversal import find_elements, traverse_obj
class MedialaanIE(InfoExtractor):
class MedialaanBaseIE(InfoExtractor):
def _extract_from_mychannels_api(self, mychannels_id):
webpage = self._download_webpage(
f'https://mychannels.video/embed/{mychannels_id}', mychannels_id)
brand_config = self._search_json(
r'window\.mychannels\.brand_config\s*=', webpage, 'brand config', mychannels_id)
response = self._download_json(
f'https://api.mychannels.world/v1/embed/video/{mychannels_id}',
mychannels_id, headers={'X-Mychannels-Brand': brand_config['brand']})
formats = []
for stream in traverse_obj(response, (
'streams', lambda _, v: url_or_none(v['url']),
)):
source_url = stream['url']
ext = determine_ext(source_url)
if ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
source_url, mychannels_id, 'mp4', m3u8_id='hls', fatal=False))
else:
format_id = traverse_obj(stream, ('quality', {str}))
formats.append({
'ext': ext,
'format_id': format_id,
'url': source_url,
**parse_resolution(format_id),
})
return {
'id': mychannels_id,
'formats': formats,
**traverse_obj(response, {
'title': ('title', {clean_html}),
'description': ('description', {clean_html}, filter),
'duration': ('durationMs', {int_or_none(scale=1000)}, {lambda x: x if x >= 0 else None}),
'genres': ('genre', 'title', {str}, filter, all, filter),
'is_live': ('live', {bool}),
'release_timestamp': ('publicationTimestampMs', {int_or_none(scale=1000)}),
'tags': ('tags', ..., 'title', {str}, filter, all, filter),
'thumbnail': ('image', 'baseUrl', {url_or_none}),
}),
**traverse_obj(response, ('channel', {
'channel': ('title', {clean_html}),
'channel_id': ('id', {str_or_none}),
})),
**traverse_obj(response, ('organisation', {
'uploader': ('title', {clean_html}),
'uploader_id': ('id', {str_or_none}),
})),
**traverse_obj(response, ('show', {
'series': ('title', {clean_html}),
'series_id': ('id', {str_or_none}),
})),
}
class MedialaanIE(MedialaanBaseIE):
_VALID_URL = r'''(?x)
https?://
(?:
@@ -32,7 +90,7 @@ class MedialaanIE(InfoExtractor):
tubantia|
volkskrant
)\.nl
)/video/(?:[^/]+/)*[^/?&#]+~p
)/videos?/(?:[^/?#]+/)*[^/?&#]+(?:-|~p)
)
(?P<id>\d+)
'''
@@ -42,19 +100,83 @@ class MedialaanIE(InfoExtractor):
'id': '193993',
'ext': 'mp4',
'title': 'De terugkeer van Ally de Aap en wie vertrekt er nog bij NAC?',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+',
'timestamp': 1611663540,
'upload_date': '20210126',
'description': 'In een nieuwe Gegenpressing video bespreken Yadran Blanco en Dennis Kas het nieuws omrent NAC.',
'duration': 238,
},
'params': {
'skip_download': True,
'channel': 'BN DeStem',
'channel_id': '418',
'genres': ['Sports'],
'release_date': '20210126',
'release_timestamp': 1611663540,
'series': 'Korte Reportage',
'series_id': '972',
'tags': 'count:2',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'BN De Stem',
'uploader_id': '26',
},
}, {
'url': 'https://www.gelderlander.nl/video/kanalen/degelderlander~c320/series/snel-nieuws~s984/noodbevel-in-doetinchem-politie-stuurt-mensen-centrum-uit~p194093',
'only_matching': True,
'info_dict': {
'id': '194093',
'ext': 'mp4',
'title': 'Noodbevel in Doetinchem: politie stuurt mensen centrum uit',
'description': 'md5:77e85b2cb26cfff9dc1fe2b1db524001',
'duration': 44,
'channel': 'De Gelderlander',
'channel_id': '320',
'genres': ['News'],
'release_date': '20210126',
'release_timestamp': 1611690600,
'series': 'Snel Nieuws',
'series_id': '984',
'tags': 'count:1',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'De Gelderlander',
'uploader_id': '25',
},
}, {
'url': 'https://embed.mychannels.video/sdk/production/193993?options=TFTFF_default',
'url': 'https://www.7sur7.be/videos/production/lla-tendance-tiktok-qui-enflamme-lespagne-707650',
'info_dict': {
'id': '707650',
'ext': 'mp4',
'title': 'La tendance TikTok qui enflamme l’Espagne',
'description': 'md5:c7ec4cb733190f227fc8935899f533b5',
'duration': 70,
'channel': 'Lifestyle',
'channel_id': '770',
'genres': ['Beauty & Lifestyle'],
'release_date': '20240906',
'release_timestamp': 1725617330,
'series': 'Lifestyle',
'series_id': '1848',
'tags': 'count:1',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': '7sur7',
'uploader_id': '67',
},
}, {
'url': 'https://mychannels.video/embed/313117',
'info_dict': {
'id': '313117',
'ext': 'mp4',
'title': str,
'description': 'md5:255e2e52f6fe8a57103d06def438f016',
'channel': 'AD',
'channel_id': '238',
'genres': ['News'],
'live_status': 'is_live',
'release_date': '20241225',
'release_timestamp': 1735169425,
'series': 'Nieuws Update',
'series_id': '3337',
'tags': 'count:1',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'AD',
'uploader_id': '1',
},
'params': {'skip_download': 'Livestream'},
}, {
'url': 'https://embed.mychannels.video/sdk/production/193993',
'only_matching': True,
}, {
'url': 'https://embed.mychannels.video/script/production/193993',
@@ -62,9 +184,6 @@ class MedialaanIE(InfoExtractor):
}, {
'url': 'https://embed.mychannels.video/production/193993',
'only_matching': True,
}, {
'url': 'https://mychannels.video/embed/193993',
'only_matching': True,
}, {
'url': 'https://embed.mychannels.video/embed/193993',
'only_matching': True,
@@ -75,51 +194,31 @@ class MedialaanIE(InfoExtractor):
'id': '1576607',
'ext': 'mp4',
'title': 'Tom Waes blaastest',
'channel': 'De Morgen',
'channel_id': '352',
'description': 'Tom Waes werkt mee aan een alcoholcampagne op Werchter',
'duration': 62,
'genres': ['News'],
'release_date': '20250705',
'release_timestamp': 1751730795,
'series': 'Nieuwsvideo\'s',
'series_id': '1683',
'tags': 'count:1',
'thumbnail': r're:https?://video-images\.persgroep\.be/aws_generated.+\.jpg',
'timestamp': 1751730795,
'upload_date': '20250705',
'uploader': 'De Morgen',
'uploader_id': '17',
},
'params': {'extractor_args': {'generic': {'impersonate': ['chrome']}}},
}]
@classmethod
def _extract_embed_urls(cls, url, webpage):
entries = []
for element in re.findall(r'(<div[^>]+data-mychannels-type="video"[^>]*>)', webpage):
mychannels_id = extract_attributes(element).get('data-mychannels-id')
if mychannels_id:
entries.append('https://mychannels.video/embed/' + mychannels_id)
return entries
yield from traverse_obj(webpage, (
{find_elements(tag='div', attr='data-mychannels-type', value='video', html=True)},
..., {extract_attributes}, 'data-mychannels-id', {str}, filter,
{lambda x: f'https://mychannels.video/embed/{x}'}))
def _real_extract(self, url):
production_id = self._match_id(url)
production = self._download_json(
'https://embed.mychannels.video/sdk/production/' + production_id,
production_id, query={'options': 'UUUU_default'})['productions'][0]
title = production['title']
mychannels_id = self._match_id(url)
formats = []
for source in (production.get('sources') or []):
src = source.get('src')
if not src:
continue
ext = mimetype2ext(source.get('type'))
if ext == 'm3u8':
formats.extend(self._extract_m3u8_formats(
src, production_id, 'mp4', 'm3u8_native',
m3u8_id='hls', fatal=False))
else:
formats.append({
'ext': ext,
'url': src,
})
return {
'id': production_id,
'title': title,
'formats': formats,
'thumbnail': production.get('posterUrl'),
'timestamp': parse_iso8601(production.get('publicationDate'), ' '),
'duration': int_or_none(production.get('duration')) or None,
}
return self._extract_from_mychannels_api(mychannels_id)
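
Outside the extractor, the mychannels flow above boils down to two plain HTTP requests: read window.mychannels.brand_config from the embed page, then call the embed API with the brand in a header. A rough stdlib sketch (it assumes brand_config is a flat JSON object, whereas yt-dlp's _search_json handles nesting; only the fields referenced above are known to exist):

import json
import re
import urllib.request

def fetch_mychannels_video(mychannels_id):
    # 1) scrape the brand, as _extract_from_mychannels_api does via _search_json
    with urllib.request.urlopen(f'https://mychannels.video/embed/{mychannels_id}') as resp:
        embed = resp.read().decode()
    brand_config = json.loads(re.search(
        r'window\.mychannels\.brand_config\s*=\s*({[^{}]*})', embed).group(1))
    # 2) query the embed API with the brand header
    req = urllib.request.Request(
        f'https://api.mychannels.world/v1/embed/video/{mychannels_id}',
        headers={'X-Mychannels-Brand': brand_config['brand']})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

# The extractor then reads response['streams'][...]['url'] plus metadata such as
# title, channel, organisation and show.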

View File

@@ -1,652 +1,268 @@
import re
import xml.etree.ElementTree
import base64
import json
import time
import urllib.parse
from .common import InfoExtractor
from ..networking import HEADRequest, Request
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
RegexNotFoundError,
find_xpath_attr,
fix_xml_ampersands,
float_or_none,
int_or_none,
join_nonempty,
strip_or_none,
timeconvert,
try_get,
unescapeHTML,
js_to_json,
jwt_decode_hs256,
parse_iso8601,
parse_qs,
update_url,
update_url_query,
url_basename,
xpath_text,
url_or_none,
)
from ..utils.traversal import require, traverse_obj
def _media_xml_tag(tag):
return f'{{http://search.yahoo.com/mrss/}}{tag}'
class MTVServicesInfoExtractor(InfoExtractor):
_MOBILE_TEMPLATE = None
_LANG = None
class MTVServicesBaseIE(InfoExtractor):
_GEO_BYPASS = False
_GEO_COUNTRIES = ['US']
_CACHE_SECTION = 'mtvservices'
_ACCESS_TOKEN_KEY = 'access'
_REFRESH_TOKEN_KEY = 'refresh'
_MEDIA_TOKEN_KEY = 'media'
_token_cache = {}
@staticmethod
def _id_from_uri(uri):
return uri.split(':')[-1]
def _jwt_is_expired(token):
return jwt_decode_hs256(token)['exp'] - time.time() < 120
@staticmethod
def _remove_template_parameter(url):
# Remove the templates, like &device={device}
return re.sub(r'&[^=]*?={.*?}(?=(&|$))', '', url)
def _get_auth_suite_data(config):
return traverse_obj(config, {
'clientId': ('clientId', {str}),
'countryCode': ('countryCode', {str}),
})
def _get_feed_url(self, uri, url=None):
return self._FEED_URL
def _get_thumbnail_url(self, uri, itemdoc):
search_path = '{}/{}'.format(_media_xml_tag('group'), _media_xml_tag('thumbnail'))
thumb_node = itemdoc.find(search_path)
if thumb_node is None:
return None
return thumb_node.get('url') or thumb_node.text or None
def _extract_mobile_video_formats(self, mtvn_id):
webpage_url = self._MOBILE_TEMPLATE % mtvn_id
req = Request(webpage_url)
# Otherwise we get a webpage that would execute some javascript
req.headers['User-Agent'] = 'curl/7'
webpage = self._download_webpage(req, mtvn_id,
'Downloading mobile page')
metrics_url = unescapeHTML(self._search_regex(r'<a href="(http://metrics.+?)"', webpage, 'url'))
req = HEADRequest(metrics_url)
response = self._request_webpage(req, mtvn_id, 'Resolving url')
url = response.url
# Transform the url to get the best quality:
url = re.sub(r'.+pxE=mp4', 'http://mtvnmobile.vo.llnwd.net/kip0/_pxn=0+_pxK=18639+_pxE=mp4', url, count=1)
return [{'url': url, 'ext': 'mp4'}]
def _extract_video_formats(self, mdoc, mtvn_id, video_id):
if re.match(r'.*/(error_country_block\.swf|geoblock\.mp4|copyright_error\.flv(?:\?geo\b.+?)?)$', mdoc.find('.//src').text) is not None:
if mtvn_id is not None and self._MOBILE_TEMPLATE is not None:
self.to_screen('The normal version is not available from your '
'country, trying with the mobile version')
return self._extract_mobile_video_formats(mtvn_id)
raise ExtractorError('This video is not available from your country.',
expected=True)
formats = []
for rendition in mdoc.findall('.//rendition'):
if rendition.get('method') == 'hls':
hls_url = rendition.find('./src').text
formats.extend(self._extract_m3u8_formats(
hls_url, video_id, ext='mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
else:
# fms
try:
_, _, ext = rendition.attrib['type'].partition('/')
rtmp_video_url = rendition.find('./src').text
if 'error_not_available.swf' in rtmp_video_url:
raise ExtractorError(
f'{self.IE_NAME} said: video is not available',
expected=True)
if rtmp_video_url.endswith('siteunavail.png'):
continue
formats.extend([{
'ext': 'flv' if rtmp_video_url.startswith('rtmp') else ext,
'url': rtmp_video_url,
'format_id': join_nonempty(
'rtmp' if rtmp_video_url.startswith('rtmp') else None,
rendition.get('bitrate')),
'width': int(rendition.get('width')),
'height': int(rendition.get('height')),
}])
except (KeyError, TypeError):
raise ExtractorError('Invalid rendition field.')
return formats
def _extract_subtitles(self, mdoc, mtvn_id):
subtitles = {}
for transcript in mdoc.findall('.//transcript'):
if transcript.get('kind') != 'captions':
continue
lang = transcript.get('srclang')
for typographic in transcript.findall('./typographic'):
sub_src = typographic.get('src')
if not sub_src:
continue
ext = typographic.get('format')
if ext == 'cea-608':
ext = 'scc'
subtitles.setdefault(lang, []).append({
'url': str(sub_src),
'ext': ext,
})
return subtitles
def _get_video_info(self, itemdoc, use_hls=True):
uri = itemdoc.find('guid').text
video_id = self._id_from_uri(uri)
self.report_extraction(video_id)
content_el = itemdoc.find('{}/{}'.format(_media_xml_tag('group'), _media_xml_tag('content')))
mediagen_url = self._remove_template_parameter(content_el.attrib['url'])
mediagen_url = mediagen_url.replace('device={device}', '')
if 'acceptMethods' not in mediagen_url:
mediagen_url += '&' if '?' in mediagen_url else '?'
mediagen_url += 'acceptMethods='
mediagen_url += 'hls' if use_hls else 'fms'
mediagen_doc = self._download_xml(
mediagen_url, video_id, 'Downloading video urls', fatal=False)
if not isinstance(mediagen_doc, xml.etree.ElementTree.Element):
return None
item = mediagen_doc.find('./video/item')
if item is not None and item.get('type') == 'text':
message = f'{self.IE_NAME} returned error: '
if item.get('code') is not None:
message += '{} - '.format(item.get('code'))
message += item.text
raise ExtractorError(message, expected=True)
description = strip_or_none(xpath_text(itemdoc, 'description'))
timestamp = timeconvert(xpath_text(itemdoc, 'pubDate'))
title_el = None
if title_el is None:
title_el = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:video_title')
if title_el is None:
title_el = itemdoc.find('.//{http://search.yahoo.com/mrss/}title')
if title_el is None:
title_el = itemdoc.find('.//title')
if title_el.text is None:
title_el = None
title = title_el.text
if title is None:
raise ExtractorError('Could not find video title')
title = title.strip()
series = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:franchise')
season = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:seasonN')
episode = find_xpath_attr(
itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:episodeN')
series = series.text if series is not None else None
season = season.text if season is not None else None
episode = episode.text if episode is not None else None
if season and episode:
# episode number includes season, so remove it
episode = re.sub(rf'^{season}', '', episode)
# This is a short id that's used in the webpage urls
mtvn_id = None
mtvn_id_node = find_xpath_attr(itemdoc, './/{http://search.yahoo.com/mrss/}category',
'scheme', 'urn:mtvn:id')
if mtvn_id_node is not None:
mtvn_id = mtvn_id_node.text
formats = self._extract_video_formats(mediagen_doc, mtvn_id, video_id)
# Some parts of complete video may be missing (e.g. missing Act 3 in
# http://www.southpark.de/alle-episoden/s14e01-sexual-healing)
if not formats:
return None
return {
'title': title,
'formats': formats,
'subtitles': self._extract_subtitles(mediagen_doc, mtvn_id),
'id': video_id,
'thumbnail': self._get_thumbnail_url(uri, itemdoc),
'description': description,
'duration': float_or_none(content_el.attrib.get('duration')),
'timestamp': timestamp,
'series': series,
'season_number': int_or_none(season),
'episode_number': int_or_none(episode),
def _call_auth_api(self, path, config, display_id=None, note=None, data=None, headers=None, query=None):
headers = {
'Accept': 'application/json',
'Client-Description': 'deviceName=Chrome Windows;deviceType=desktop;system=Windows NT 10.0',
'Api-Version': '2025-07-09',
**(headers or {}),
}
if data is not None:
headers['Content-Type'] = 'application/json'
if isinstance(data, dict):
data = json.dumps(data, separators=(',', ':')).encode()
def _get_feed_query(self, uri):
data = {'uri': uri}
if self._LANG:
data['lang'] = self._LANG
return data
return self._download_json(
f'https://auth.mtvnservices.com/{path}', display_id,
note=note or 'Calling authentication API', data=data,
headers=headers, query={**self._get_auth_suite_data(config), **(query or {})})
def _get_videos_info(self, uri, use_hls=True, url=None):
video_id = self._id_from_uri(uri)
feed_url = self._get_feed_url(uri, url)
info_url = update_url_query(feed_url, self._get_feed_query(uri))
return self._get_videos_info_from_url(info_url, video_id, use_hls)
def _get_fresh_access_token(self, config, display_id=None, force_refresh=False):
resource_id = config['resourceId']
# resource_id should already be in _token_cache since _get_media_token is the caller
tokens = self._token_cache[resource_id]
def _get_videos_info_from_url(self, url, video_id, use_hls=True):
idoc = self._download_xml(
url, video_id,
'Downloading info', transform_source=fix_xml_ampersands)
access_token = tokens.get(self._ACCESS_TOKEN_KEY)
if not force_refresh and access_token and not self._jwt_is_expired(access_token):
return access_token
title = xpath_text(idoc, './channel/title')
description = xpath_text(idoc, './channel/description')
if self._REFRESH_TOKEN_KEY not in tokens:
response = self._call_auth_api(
'accessToken', config, display_id, 'Retrieving auth tokens', data=b'')
else:
response = self._call_auth_api(
'accessToken/refresh', config, display_id, 'Refreshing auth tokens',
data={'refreshToken': tokens[self._REFRESH_TOKEN_KEY]},
headers={'Authorization': f'Bearer {access_token}'})
entries = []
for item in idoc.findall('.//item'):
info = self._get_video_info(item, use_hls)
if info:
entries.append(info)
tokens[self._ACCESS_TOKEN_KEY] = response['applicationAccessToken']
tokens[self._REFRESH_TOKEN_KEY] = response['deviceRefreshToken']
self.cache.store(self._CACHE_SECTION, resource_id, tokens)
# TODO: should be multi-video
return self.playlist_result(
entries, playlist_title=title, playlist_description=description)
return tokens[self._ACCESS_TOKEN_KEY]
def _extract_triforce_mgid(self, webpage, data_zone=None, video_id=None):
triforce_feed = self._parse_json(self._search_regex(
r'triforceManifestFeed\s*=\s*({.+?})\s*;\s*\n', webpage,
'triforce feed', default='{}'), video_id, fatal=False)
def _get_media_token(self, video_config, config, display_id=None):
resource_id = config['resourceId']
if resource_id in self._token_cache:
tokens = self._token_cache[resource_id]
else:
tokens = self._token_cache[resource_id] = self.cache.load(self._CACHE_SECTION, resource_id) or {}
data_zone = self._search_regex(
r'data-zone=(["\'])(?P<zone>.+?_lc_promo.*?)\1', webpage,
'data zone', default=data_zone, group='zone')
media_token = tokens.get(self._MEDIA_TOKEN_KEY)
if media_token and not self._jwt_is_expired(media_token):
return media_token
feed_url = try_get(
triforce_feed, lambda x: x['manifest']['zones'][data_zone]['feed'],
str)
if not feed_url:
return
access_token = self._get_fresh_access_token(config, display_id)
if not jwt_decode_hs256(access_token).get('accessMethods'):
# MTVServices uses a custom AdobePass oauth flow which is incompatible with AdobePassIE
mso_id = self.get_param('ap_mso')
if not mso_id:
raise ExtractorError(
'This video is only available for users of participating TV providers. '
'Use --ap-mso to specify Adobe Pass Multiple-system operator Identifier and pass '
'cookies from a browser session where you are signed-in to your provider.', expected=True)
feed = self._download_json(feed_url, video_id, fatal=False)
if not feed:
return
auth_suite_data = json.dumps(
self._get_auth_suite_data(config), separators=(',', ':')).encode()
callback_url = update_url_query(config['callbackURL'], {
'authSuiteData': urllib.parse.quote(base64.b64encode(auth_suite_data).decode()),
'mvpdCode': mso_id,
})
auth_url = self._call_auth_api(
f'mvpd/{mso_id}/login', config, display_id,
'Retrieving provider authentication URL',
query={'callbackUrl': callback_url},
headers={'Authorization': f'Bearer {access_token}'})['authenticationUrl']
res = self._download_webpage_handle(auth_url, display_id, 'Downloading provider auth page')
# XXX: The following "provider-specific code" likely only works if mso_id == Comcast_SSO
# BEGIN provider-specific code
redirect_url = self._search_json(
r'initInterstitialRedirect\(', res[0], 'redirect JSON',
display_id, transform_source=js_to_json)['continue']
urlh = self._request_webpage(redirect_url, display_id, 'Requesting provider redirect page')
authorization_code = parse_qs(urlh.url)['authorizationCode'][-1]
# END provider-specific code
self._call_auth_api(
f'access/mvpd/{mso_id}', config, display_id,
'Submitting authorization code to MTVNServices',
query={'authorizationCode': authorization_code}, data=b'',
headers={'Authorization': f'Bearer {access_token}'})
access_token = self._get_fresh_access_token(config, display_id, force_refresh=True)
return try_get(feed, lambda x: x['result']['data']['id'], str)
tokens[self._MEDIA_TOKEN_KEY] = self._call_auth_api(
'mediaToken', config, display_id, 'Fetching media token', data={
'content': {('id' if k == 'videoId' else k): v for k, v in video_config.items()},
'resourceId': resource_id,
}, headers={'Authorization': f'Bearer {access_token}'})['mediaToken']
@staticmethod
def _extract_child_with_type(parent, t):
for c in parent['children']:
if c.get('type') == t:
return c
self.cache.store(self._CACHE_SECTION, resource_id, tokens)
return tokens[self._MEDIA_TOKEN_KEY]
def _real_extract(self, url):
display_id = self._match_id(url)
def _extract_mgid(self, webpage):
try:
# the url can be http://media.mtvnservices.com/fb/{mgid}.swf
# or http://media.mtvnservices.com/{mgid}
og_url = self._og_search_video_url(webpage)
mgid = url_basename(og_url)
if mgid.endswith('.swf'):
mgid = mgid[:-4]
except RegexNotFoundError:
mgid = None
data = self._download_json(
update_url(url, query=None), display_id,
query={'json': 'true'})
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 404 and not self.suitable(e.cause.response.url):
self.raise_geo_restricted(countries=self._GEO_COUNTRIES)
raise
if mgid is None or ':' not in mgid:
mgid = self._search_regex(
[r'data-mgid="(.*?)"', r'swfobject\.embedSWF\(".*?(mgid:.*?)"'],
webpage, 'mgid', default=None)
flex_wrapper = traverse_obj(data, (
'children', lambda _, v: v['type'] == 'MainContainer',
(None, ('children', lambda _, v: v['type'] == 'AviaWrapper')),
'children', lambda _, v: v['type'] == 'FlexWrapper', {dict}, any))
video_detail = traverse_obj(flex_wrapper, (
(None, ('children', lambda _, v: v['type'] == 'AuthSuiteWrapper')),
'children', lambda _, v: v['type'] == 'Player',
'props', 'videoDetail', {dict}, any))
if not video_detail:
video_detail = traverse_obj(data, (
'children', ..., ('handleTVEAuthRedirection', None),
'videoDetail', {dict}, any, {require('video detail')}))
if not mgid:
sm4_embed = self._html_search_meta(
'sm4:video:embed', webpage, 'sm4 embed', default='')
mgid = self._search_regex(
r'embed/(mgid:.+?)["\'&?/]', sm4_embed, 'mgid', default=None)
mgid = video_detail['mgid']
video_id = mgid.rpartition(':')[2]
service_url = traverse_obj(video_detail, ('videoServiceUrl', {url_or_none}, {update_url(query=None)}))
if not service_url:
raise ExtractorError('This content is no longer available', expected=True)
if not mgid:
mgid = self._extract_triforce_mgid(webpage)
headers = {}
if video_detail.get('authRequired'):
# The vast majority of provider-locked content has been moved to Paramount+
# BetIE is the only extractor that is currently known to reach this code path
video_config = traverse_obj(flex_wrapper, (
'children', lambda _, v: v['type'] == 'AuthSuiteWrapper',
'props', 'videoConfig', {dict}, any, {require('video config')}))
config = traverse_obj(data, (
'props', 'authSuiteConfig', {dict}, {require('auth suite config')}))
headers['X-VIA-TVE-MEDIATOKEN'] = self._get_media_token(video_config, config, display_id)
if not mgid:
data = self._parse_json(self._search_regex(
r'__DATA__\s*=\s*({.+?});', webpage, 'data'), None)
main_container = self._extract_child_with_type(data, 'MainContainer')
ab_testing = self._extract_child_with_type(main_container, 'ABTesting')
video_player = self._extract_child_with_type(ab_testing or main_container, 'VideoPlayer')
if video_player:
mgid = try_get(video_player, lambda x: x['props']['media']['video']['config']['uri'])
else:
flex_wrapper = self._extract_child_with_type(ab_testing or main_container, 'FlexWrapper')
auth_suite_wrapper = self._extract_child_with_type(flex_wrapper, 'AuthSuiteWrapper')
player = self._extract_child_with_type(auth_suite_wrapper or flex_wrapper, 'Player')
if player:
mgid = try_get(player, lambda x: x['props']['videoDetail']['mgid'])
stream_info = self._download_json(
service_url, video_id, 'Downloading API JSON', 'Unable to download API JSON',
query={'clientPlatform': 'desktop'}, headers=headers)['stitchedstream']
if not mgid:
raise ExtractorError('Could not extract mgid')
manifest_type = stream_info['manifesttype']
if manifest_type == 'hls':
formats, subtitles = self._extract_m3u8_formats_and_subtitles(
stream_info['source'], video_id, 'mp4', m3u8_id=manifest_type)
elif manifest_type == 'dash':
formats, subtitles = self._extract_mpd_formats_and_subtitles(
stream_info['source'], video_id, mpd_id=manifest_type)
else:
self.raise_no_formats(f'Unsupported manifest type "{manifest_type}"')
formats, subtitles = [], {}
return mgid
def _real_extract(self, url):
title = url_basename(url)
webpage = self._download_webpage(url, title)
mgid = self._extract_mgid(webpage)
return self._get_videos_info(mgid, url=url)
return {
**traverse_obj(video_detail, {
'title': ('title', {str}),
'channel': ('channel', 'name', {str}),
'thumbnails': ('images', ..., {'url': ('url', {url_or_none})}),
'description': (('fullDescription', 'description'), {str}, any),
'series': ('parentEntity', 'title', {str}),
'season_number': ('seasonNumber', {int_or_none}),
'episode_number': ('episodeAiringOrder', {int_or_none}),
'duration': ('duration', 'milliseconds', {float_or_none(scale=1000)}),
'timestamp': ((
('originalPublishDate', {parse_iso8601}),
('publishDate', 'timestamp', {int_or_none})), any),
'release_timestamp': ((
('originalAirDate', {parse_iso8601}),
('airDate', 'timestamp', {int_or_none})), any),
}),
'id': video_id,
'display_id': display_id,
'formats': formats,
'subtitles': subtitles,
}
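# For orientation, a rough sketch of the stream API exchange performed above; the CDN URL
# is a hypothetical placeholder, while the query parameter, header name and the
# 'stitchedstream' envelope are taken from the code itself:
#     GET <videoServiceUrl>?clientPlatform=desktop
#     (with X-VIA-TVE-MEDIATOKEN set when the video detail flags authRequired)
example_stream_response = {
    'stitchedstream': {
        'manifesttype': 'hls',  # or 'dash'; anything else triggers raise_no_formats
        'source': 'https://media.example.invalid/master.m3u8',  # fed to the HLS/DASH helpers
    },
}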
class MTVServicesEmbeddedIE(MTVServicesInfoExtractor):
IE_NAME = 'mtvservices:embedded'
_VALID_URL = r'https?://media\.mtvnservices\.com/embed/(?P<mgid>.+?)(\?|/|$)'
_EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//media\.mtvnservices\.com/embed/.+?)\1']
_TEST = {
# From http://www.thewrap.com/peter-dinklage-sums-up-game-of-thrones-in-45-seconds-video/
'url': 'http://media.mtvnservices.com/embed/mgid:uma:video:mtv.com:1043906/cp~vid%3D1043906%26uri%3Dmgid%3Auma%3Avideo%3Amtv.com%3A1043906',
'md5': 'cb349b21a7897164cede95bd7bf3fbb9',
'info_dict': {
'id': '1043906',
'ext': 'mp4',
'title': 'Peter Dinklage Sums Up \'Game Of Thrones\' In 45 Seconds',
'description': '"Sexy sexy sexy, stabby stabby stabby, beautiful language," says Peter Dinklage as he tries summarizing "Game of Thrones" in under a minute.',
'timestamp': 1400126400,
'upload_date': '20140515',
},
}
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge', video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
def _real_extract(self, url):
mobj = self._match_valid_url(url)
mgid = mobj.group('mgid')
return self._get_videos_info(mgid)
class MTVIE(MTVServicesInfoExtractor):
class MTVIE(MTVServicesBaseIE):
IE_NAME = 'mtv'
_VALID_URL = r'https?://(?:www\.)?mtv\.com/(?:video-clips|(?:full-)?episodes)/(?P<id>[^/?#.]+)'
_FEED_URL = 'http://www.mtv.com/feeds/mrss/'
_VALID_URL = r'https?://(?:www\.)?mtv\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
_TESTS = [{
'url': 'http://www.mtv.com/video-clips/vl8qof/unlocking-the-truth-trailer',
'md5': '1edbcdf1e7628e414a8c5dcebca3d32b',
'url': 'https://www.mtv.com/video-clips/syolsj',
'info_dict': {
'id': '5e14040d-18a4-47c4-a582-43ff602de88e',
'id': '213ea7f8-bac7-4a43-8cd5-8d8cb8c8160f',
'ext': 'mp4',
'title': 'Unlocking The Truth|July 18, 2016|1|101|Trailer',
'description': '"Unlocking the Truth" premieres August 17th at 11/10c.',
'timestamp': 1468846800,
'upload_date': '20160718',
'display_id': 'syolsj',
'title': 'The Challenge: Vets & New Threats',
'description': 'md5:c4d2e90a5fff6463740fbf96b2bb6a41',
'duration': 95.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref',
'series': 'The Challenge',
'season': 'Season 41',
'season_number': 41,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1753945200,
'upload_date': '20250731',
'release_timestamp': 1753945200,
'release_date': '20250731',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'http://www.mtv.com/full-episodes/94tujl/unlocking-the-truth-gates-of-hell-season-1-ep-101',
'only_matching': True,
}, {
'url': 'http://www.mtv.com/episodes/g8xu7q/teen-mom-2-breaking-the-wall-season-7-ep-713',
'only_matching': True,
'url': 'https://www.mtv.com/episodes/uzvigh',
'info_dict': {
'id': '364e8b9e-e415-11ef-b405-16fff45bc035',
'ext': 'mp4',
'display_id': 'uzvigh',
'title': 'CT Tamburello and Johnny Bananas',
'description': 'md5:364cea52001e9c13f92784e3365c6606',
'channel': 'MTV',
'duration': 1260.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref',
'series': 'Ridiculousness',
'season': 'Season 47',
'season_number': 47,
'episode': 'Episode 19',
'episode_number': 19,
'timestamp': 1753318800,
'upload_date': '20250724',
'release_timestamp': 1753318800,
'release_date': '20250724',
},
'params': {'skip_download': 'm3u8'},
}]
class MTVJapanIE(MTVServicesInfoExtractor):
IE_NAME = 'mtvjapan'
_VALID_URL = r'https?://(?:www\.)?mtvjapan\.com/videos/(?P<id>[0-9a-z]+)'
_TEST = {
'url': 'http://www.mtvjapan.com/videos/prayht/fresh-info-cadillac-escalade',
'info_dict': {
'id': 'bc01da03-6fe5-4284-8880-f291f4e368f5',
'ext': 'mp4',
'title': '【Fresh Info】Cadillac ESCALADE Sport Edition',
},
'params': {
'skip_download': True,
},
}
_GEO_COUNTRIES = ['JP']
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
def _get_feed_query(self, uri):
return {
'arcEp': 'mtvjapan.com',
'mgid': uri,
}
class MTVVideoIE(MTVServicesInfoExtractor):
IE_NAME = 'mtv:video'
_VALID_URL = r'''(?x)^https?://
(?:(?:www\.)?mtv\.com/videos/.+?/(?P<videoid>[0-9]+)/[^/]+$|
m\.mtv\.com/videos/video\.rbml\?.*?id=(?P<mgid>[^&]+))'''
_FEED_URL = 'http://www.mtv.com/player/embed/AS3/rss/'
_TESTS = [
{
'url': 'http://www.mtv.com/videos/misc/853555/ours-vh1-storytellers.jhtml',
'md5': '850f3f143316b1e71fa56a4edfd6e0f8',
'info_dict': {
'id': '853555',
'ext': 'mp4',
'title': 'Taylor Swift - "Ours (VH1 Storytellers)"',
'description': 'Album: Taylor Swift performs "Ours" for VH1 Storytellers at Harvey Mudd College.',
'timestamp': 1352610000,
'upload_date': '20121111',
},
},
]
def _get_thumbnail_url(self, uri, itemdoc):
return 'http://mtv.mtvnimages.com/uri/' + uri
def _real_extract(self, url):
mobj = self._match_valid_url(url)
video_id = mobj.group('videoid')
uri = mobj.groupdict().get('mgid')
if uri is None:
webpage = self._download_webpage(url, video_id)
# Some videos come from Vevo.com
m_vevo = re.search(
r'(?s)isVevoVideo = true;.*?vevoVideoId = "(.*?)";', webpage)
if m_vevo:
vevo_id = m_vevo.group(1)
self.to_screen(f'Vevo video detected: {vevo_id}')
return self.url_result(f'vevo:{vevo_id}', ie='Vevo')
uri = self._html_search_regex(r'/uri/(.*?)\?', webpage, 'uri')
return self._get_videos_info(uri)
class MTVDEIE(MTVServicesInfoExtractor):
_WORKING = False
IE_NAME = 'mtv.de'
_VALID_URL = r'https?://(?:www\.)?mtv\.de/(?:musik/videoclips|folgen|news)/(?P<id>[0-9a-z]+)'
_TESTS = [{
'url': 'http://www.mtv.de/musik/videoclips/2gpnv7/Traum',
'info_dict': {
'id': 'd5d472bc-f5b7-11e5-bffd-a4badb20dab5',
'ext': 'mp4',
'title': 'Traum',
'description': 'Traum',
},
'params': {
# rtmp download
'skip_download': True,
},
'skip': 'Blocked at Travis CI',
}, {
# mediagen URL without query (e.g. http://videos.mtvnn.com/mediagen/e865da714c166d18d6f80893195fcb97)
'url': 'http://www.mtv.de/folgen/6b1ylu/teen-mom-2-enthuellungen-S5-F1',
'info_dict': {
'id': '1e5a878b-31c5-11e7-a442-0e40cf2fc285',
'ext': 'mp4',
'title': 'Teen Mom 2',
'description': 'md5:dc65e357ef7e1085ed53e9e9d83146a7',
},
'params': {
# rtmp download
'skip_download': True,
},
'skip': 'Blocked at Travis CI',
}, {
'url': 'http://www.mtv.de/news/glolix/77491-mtv-movies-spotlight--pixels--teil-3',
'info_dict': {
'id': 'local_playlist-4e760566473c4c8c5344',
'ext': 'mp4',
'title': 'Article_mtv-movies-spotlight-pixels-teil-3_short-clips_part1',
'description': 'MTV Movies Supercut',
},
'params': {
# rtmp download
'skip_download': True,
},
'skip': 'Das Video kann zur Zeit nicht abgespielt werden.',
}]
_GEO_COUNTRIES = ['DE']
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
def _get_feed_query(self, uri):
return {
'arcEp': 'mtv.de',
'mgid': uri,
}
class MTVItaliaIE(MTVServicesInfoExtractor):
IE_NAME = 'mtv.it'
_VALID_URL = r'https?://(?:www\.)?mtv\.it/(?:episodi|video|musica)/(?P<id>[0-9a-z]+)'
_TESTS = [{
'url': 'http://www.mtv.it/episodi/24bqab/mario-una-serie-di-maccio-capatonda-cavoli-amario-episodio-completo-S1-E1',
'info_dict': {
'id': '0f0fc78e-45fc-4cce-8f24-971c25477530',
'ext': 'mp4',
'title': 'Cavoli amario (episodio completo)',
'description': 'md5:4962bccea8fed5b7c03b295ae1340660',
'series': 'Mario - Una Serie Di Maccio Capatonda',
'season_number': 1,
'episode_number': 1,
},
'params': {
'skip_download': True,
},
}]
_GEO_COUNTRIES = ['IT']
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
def _get_feed_query(self, uri):
return {
'arcEp': 'mtv.it',
'mgid': uri,
}
class MTVItaliaProgrammaIE(MTVItaliaIE): # XXX: Do not subclass from concrete IE
IE_NAME = 'mtv.it:programma'
_VALID_URL = r'https?://(?:www\.)?mtv\.it/(?:programmi|playlist)/(?P<id>[0-9a-z]+)'
_TESTS = [{
# program page: general
'url': 'http://www.mtv.it/programmi/s2rppv/mario-una-serie-di-maccio-capatonda',
'info_dict': {
'id': 'a6f155bc-8220-4640-aa43-9b95f64ffa3d',
'title': 'Mario - Una Serie Di Maccio Capatonda',
'description': 'md5:72fbffe1f77ccf4e90757dd4e3216153',
},
'playlist_count': 2,
'params': {
'skip_download': True,
},
}, {
# program page: specific season
'url': 'http://www.mtv.it/programmi/d9ncjf/mario-una-serie-di-maccio-capatonda-S2',
'info_dict': {
'id': '4deeb5d8-f272-490c-bde2-ff8d261c6dd1',
'title': 'Mario - Una Serie Di Maccio Capatonda - Stagione 2',
},
'playlist_count': 34,
'params': {
'skip_download': True,
},
}, {
# playlist page + redirect
'url': 'http://www.mtv.it/playlist/sexy-videos/ilctal',
'info_dict': {
'id': 'dee8f9ee-756d-493b-bf37-16d1d2783359',
'title': 'Sexy Videos',
},
'playlist_mincount': 145,
'params': {
'skip_download': True,
},
}]
_GEO_COUNTRIES = ['IT']
_FEED_URL = 'http://www.mtv.it/feeds/triforce/manifest/v8'
def _get_entries(self, title, url):
while True:
pg = self._search_regex(r'/(\d+)$', url, 'entries', '1')
entries = self._download_json(url, title, f'page {pg}')
url = try_get(
entries, lambda x: x['result']['nextPageURL'], str)
entries = try_get(
entries, (
lambda x: x['result']['data']['items'],
lambda x: x['result']['data']['seasons']),
list)
for entry in entries or []:
if entry.get('canonicalURL'):
yield self.url_result(entry['canonicalURL'])
if not url:
break
def _real_extract(self, url):
query = {'url': url}
info_url = update_url_query(self._FEED_URL, query)
video_id = self._match_id(url)
info = self._download_json(info_url, video_id).get('manifest')
redirect = try_get(
info, lambda x: x['newLocation']['url'], str)
if redirect:
return self.url_result(redirect)
title = info.get('title')
video_id = try_get(
info, lambda x: x['reporting']['itemId'], str)
parent_id = try_get(
info, lambda x: x['reporting']['parentId'], str)
playlist_url = current_url = None
for z in (info.get('zones') or {}).values():
if z.get('moduleName') in ('INTL_M304', 'INTL_M209'):
info_url = z.get('feed')
if z.get('moduleName') in ('INTL_M308', 'INTL_M317'):
playlist_url = playlist_url or z.get('feed')
if z.get('moduleName') in ('INTL_M300',):
current_url = current_url or z.get('feed')
if not info_url:
raise ExtractorError('No info found')
if video_id == parent_id:
video_id = self._search_regex(
r'([^\/]+)/[^\/]+$', info_url, 'video_id')
info = self._download_json(info_url, video_id, 'Show infos')
info = try_get(info, lambda x: x['result']['data'], dict)
title = title or try_get(
info, (
lambda x: x['title'],
lambda x: x['headline']),
str)
description = try_get(info, lambda x: x['content'], str)
if current_url:
season = try_get(
self._download_json(playlist_url, video_id, 'Seasons info'),
lambda x: x['result']['data'], dict)
current = try_get(
season, lambda x: x['currentSeason'], str)
seasons = try_get(
season, lambda x: x['seasons'], list) or []
if current in [s.get('eTitle') for s in seasons]:
playlist_url = current_url
title = re.sub(
r'[-|]\s*(?:mtv\s*italia|programma|playlist)',
'', title, flags=re.IGNORECASE).strip()
return self.playlist_result(
self._get_entries(title, playlist_url),
video_id, title, description)


@@ -111,12 +111,8 @@ class MySpaceIE(InfoExtractor):
search_data('stream-url'), search_data('hls-stream-url'),
search_data('http-stream-url'))
if not formats:
vevo_id = search_data('vevo-id')
youtube_id = search_data('youtube-id')
if vevo_id:
self.to_screen(f'Vevo video detected: {vevo_id}')
return self.url_result(f'vevo:{vevo_id}', ie='Vevo')
elif youtube_id:
if youtube_id:
self.to_screen(f'Youtube video detected: {youtube_id}')
return self.url_result(youtube_id, ie='Youtube')
else:


@@ -1,224 +1,48 @@
from .mtv import MTVServicesInfoExtractor
from ..utils import update_url_query
from .mtv import MTVServicesBaseIE
class NickIE(MTVServicesInfoExtractor):
class NickIE(MTVServicesBaseIE):
IE_NAME = 'nick.com'
_VALID_URL = r'https?://(?P<domain>(?:www\.)?nick(?:jr)?\.com)/(?:[^/]+/)?(?P<type>videos/clip|[^/]+/videos|episodes/[^/]+)/(?P<id>[^/?#.]+)'
_FEED_URL = 'http://udat.mtvnservices.com/service1/dispatch.htm'
_GEO_COUNTRIES = ['US']
_VALID_URL = r'https?://(?:www\.)?nick\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
_TESTS = [{
'url': 'https://www.nick.com/episodes/sq47rw/spongebob-squarepants-a-place-for-pets-lockdown-for-love-season-13-ep-1',
'url': 'https://www.nick.com/episodes/u3smw8/wylde-pak-best-summer-ever-season-1-ep-1',
'info_dict': {
'description': 'md5:0650a9eb88955609d5c1d1c79292e234',
'title': 'A Place for Pets/Lockdown for Love',
},
'playlist': [
{
'md5': 'cb8a2afeafb7ae154aca5a64815ec9d6',
'info_dict': {
'id': '85ee8177-d6ce-48f8-9eee-a65364f8a6df',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S1',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
{
'md5': '839a04f49900a1fcbf517020d94e0737',
'info_dict': {
'id': '2e2a9960-8fd4-411d-868b-28eb1beb7fae',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S2',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
{
'md5': 'f1145699f199770e2919ee8646955d46',
'info_dict': {
'id': 'dc91c304-6876-40f7-84a6-7aece7baa9d0',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S3',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
{
'md5': 'd463116875aee2585ee58de3b12caebd',
'info_dict': {
'id': '5d929486-cf4c-42a1-889a-6e0d183a101a',
'ext': 'mp4',
'title': 'SpongeBob SquarePants: "A Place for Pets/Lockdown for Love" S4',
'description': 'A Place for Pets/Lockdown for Love: When customers bring pets into the Krusty Krab, Mr. Krabs realizes pets are more profitable than owners. Plankton ruins another date with Karen, so she puts the Chum Bucket on lockdown until he proves his affection.',
},
},
],
}, {
'url': 'http://www.nickjr.com/blues-clues-and-you/videos/blues-clues-and-you-original-209-imagination-station/',
'info_dict': {
'id': '31631529-2fc5-430b-b2ef-6a74b4609abd',
'id': 'eb9d4db0-274a-11ef-a913-0e37995d42c9',
'ext': 'mp4',
'description': 'md5:9d65a66df38e02254852794b2809d1cf',
'title': 'Blue\'s Imagination Station',
'display_id': 'u3smw8',
'title': 'Best Summer Ever?',
'description': 'md5:c737a0ade3fbc09d569c3b3d029a7792',
'channel': 'Nickelodeon',
'duration': 1296.0,
'thumbnail': r're:https://assets\.nick\.com/uri/mgid:arc:imageassetref:',
'series': 'Wylde Pak',
'season': 'Season 1',
'season_number': 1,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 1746100800,
'upload_date': '20250501',
'release_timestamp': 1746100800,
'release_date': '20250501',
},
'skip': 'Not accessible?',
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.nick.com/video-clips/0p4706/spongebob-squarepants-spongebob-loving-the-krusty-krab-for-7-minutes',
'info_dict': {
'id': '4aac2228-5295-4076-b986-159513cf4ce4',
'ext': 'mp4',
'display_id': '0p4706',
'title': 'SpongeBob Loving the Krusty Krab for 7 Minutes!',
'description': 'md5:72bf59babdf4e6d642187502864e111d',
'duration': 423.423,
'thumbnail': r're:https://assets\.nick\.com/uri/mgid:arc:imageassetref:',
'series': 'SpongeBob SquarePants',
'season': 'Season 0',
'season_number': 0,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1663819200,
'upload_date': '20220922',
},
'params': {'skip_download': 'm3u8'},
}]
def _get_feed_query(self, uri):
return {
'feed': 'nick_arc_player_prime',
'mgid': uri,
}
def _real_extract(self, url):
domain, video_type, display_id = self._match_valid_url(url).groups()
if video_type.startswith('episodes'):
return super()._real_extract(url)
video_data = self._download_json(
f'http://{domain}/data/video.endLevel.json',
display_id, query={
'urlKey': display_id,
})
return self._get_videos_info(video_data['player'] + video_data['id'])
class NickBrIE(MTVServicesInfoExtractor):
IE_NAME = 'nickelodeon:br'
_VALID_URL = r'''(?x)
https?://
(?:
(?P<domain>(?:www\.)?nickjr|mundonick\.uol)\.com\.br|
(?:www\.)?nickjr\.[a-z]{2}|
(?:www\.)?nickelodeonjunior\.fr
)
/(?:programas/)?[^/]+/videos/(?:episodios/)?(?P<id>[^/?\#.]+)
'''
_TESTS = [{
'url': 'http://www.nickjr.com.br/patrulha-canina/videos/210-labirinto-de-pipoca/',
'only_matching': True,
}, {
'url': 'http://mundonick.uol.com.br/programas/the-loud-house/videos/muitas-irmas/7ljo9j',
'only_matching': True,
}, {
'url': 'http://www.nickjr.nl/paw-patrol/videos/311-ge-wol-dig-om-terug-te-zijn/',
'only_matching': True,
}, {
'url': 'http://www.nickjr.de/blaze-und-die-monster-maschinen/videos/f6caaf8f-e4e8-4cc1-b489-9380d6dcd059/',
'only_matching': True,
}, {
'url': 'http://www.nickelodeonjunior.fr/paw-patrol-la-pat-patrouille/videos/episode-401-entier-paw-patrol/',
'only_matching': True,
}]
def _real_extract(self, url):
domain, display_id = self._match_valid_url(url).groups()
webpage = self._download_webpage(url, display_id)
uri = self._search_regex(
r'data-(?:contenturi|mgid)="([^"]+)', webpage, 'mgid')
video_id = self._id_from_uri(uri)
config = self._download_json(
'http://media.mtvnservices.com/pmt/e1/access/index.html',
video_id, query={
'uri': uri,
'configtype': 'edge',
}, headers={
'Referer': url,
})
info_url = self._remove_template_parameter(config['feedWithQueryParams'])
if info_url == 'None':
if domain.startswith('www.'):
domain = domain[4:]
content_domain = {
'mundonick.uol': 'mundonick.com.br',
'nickjr': 'br.nickelodeonjunior.tv',
}[domain]
query = {
'mgid': uri,
'imageEp': content_domain,
'arcEp': content_domain,
}
if domain == 'nickjr.com.br':
query['ep'] = 'c4b16088'
info_url = update_url_query(
'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed', query)
return self._get_videos_info_from_url(info_url, video_id)
class NickDeIE(MTVServicesInfoExtractor):
IE_NAME = 'nick.de'
_VALID_URL = r'https?://(?:www\.)?(?P<host>nick\.(?:de|com\.pl|ch)|nickelodeon\.(?:nl|be|at|dk|no|se))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.nick.de/playlist/3773-top-videos/videos/episode/17306-zu-wasser-und-zu-land-rauchende-erdnusse',
'only_matching': True,
}, {
'url': 'http://www.nick.de/shows/342-icarly',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.nl/shows/474-spongebob/videos/17403-een-kijkje-in-de-keuken-met-sandy-van-binnenuit',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.at/playlist/3773-top-videos/videos/episode/77993-das-letzte-gefecht',
'only_matching': True,
}, {
'url': 'http://www.nick.com.pl/seriale/474-spongebob-kanciastoporty/wideo/17412-teatr-to-jest-to-rodeo-oszolom',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.no/program/2626-bulderhuset/videoer/90947-femteklasse-veronica-vs-vanzilla',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.dk/serier/2626-hojs-hus/videoer/761-tissepause',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.se/serier/2626-lugn-i-stormen/videos/998-',
'only_matching': True,
}, {
'url': 'http://www.nick.ch/shows/2304-adventure-time-abenteuerzeit-mit-finn-und-jake',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.be/afspeellijst/4530-top-videos/videos/episode/73917-inval-broodschapper-lariekoek-arie',
'only_matching': True,
}]
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge&ref={url}', video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
class NickRuIE(MTVServicesInfoExtractor):
IE_NAME = 'nickelodeonru'
_VALID_URL = r'https?://(?:www\.)nickelodeon\.(?:ru|fr|es|pt|ro|hu|com\.tr)/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.nickelodeon.ru/shows/henrydanger/videos/episodes/3-sezon-15-seriya-licenziya-na-polyot/pmomfb#playlist/7airc6',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.ru/videos/smotri-na-nickelodeon-v-iyule/g9hvh7',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.fr/programmes/bob-l-eponge/videos/le-marathon-de-booh-kini-bottom-mardi-31-octobre/nfn7z0',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.es/videos/nickelodeon-consejos-tortitas/f7w7xy',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.pt/series/spongebob-squarepants/videos/a-bolha-de-tinta-gigante/xutq1b',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.ro/emisiuni/shimmer-si-shine/video/nahal-din-bomboane/uw5u2k',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.hu/musorok/spongyabob-kockanadrag/videok/episodes/buborekfujas-az-elszakadt-nadrag/q57iob#playlist/k6te4y',
'only_matching': True,
}, {
'url': 'http://www.nickelodeon.com.tr/programlar/sunger-bob/videolar/kayip-yatak/mgqbjy',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
mgid = self._extract_mgid(webpage, url)
return self.url_result(f'http://media.mtvnservices.com/embed/{mgid}')


@@ -873,39 +873,67 @@ class NiconicoLiveIE(NiconicoBaseIE):
IE_DESC = 'ニコニコ生放送'
_VALID_URL = r'https?://(?:sp\.)?live2?\.nicovideo\.jp/(?:watch|gate)/(?P<id>lv\d+)'
_TESTS = [{
'note': 'this test case includes invisible characters for title, pasting them as-is',
'url': 'https://live.nicovideo.jp/watch/lv339533123',
'url': 'https://live.nicovideo.jp/watch/lv329299587',
'info_dict': {
'id': 'lv339533123',
'title': '激辛ペヤング食べます\u202a( ;ᯅ; )\u202c(歌枠オーディション参加中)',
'view_count': 1526,
'comment_count': 1772,
'description': '初めましてもかって言います❕\nのんびり自由に適当に暮らしてます',
'uploader': 'もか',
'channel': 'ゲストさんのコミュニティ',
'channel_id': 'co5776900',
'channel_url': 'https://com.nicovideo.jp/community/co5776900',
'timestamp': 1670677328,
'is_live': True,
'id': 'lv329299587',
'ext': 'mp4',
'title': str,
'channel': 'ニコニコエンタメチャンネル',
'channel_id': 'ch2640322',
'channel_url': 'https://ch.nicovideo.jp/channel/ch2640322',
'comment_count': int,
'description': 'md5:281edd7f00309e99ec46a87fb16d7033',
'live_status': 'is_live',
'thumbnail': r're:https?://.+',
'timestamp': 1608803400,
'upload_date': '20201224',
'uploader': '株式会社ドワンゴ',
'view_count': int,
},
'skip': 'livestream',
'params': {'skip_download': True},
}, {
'url': 'https://live2.nicovideo.jp/watch/lv339533123',
'only_matching': True,
}, {
'url': 'https://sp.live.nicovideo.jp/watch/lv339533123',
'only_matching': True,
}, {
'url': 'https://sp.live2.nicovideo.jp/watch/lv339533123',
'only_matching': True,
'url': 'https://live.nicovideo.jp/watch/lv331050399',
'info_dict': {
'id': 'lv331050399',
'ext': 'mp4',
'title': str,
'age_limit': 18,
'channel': 'みんなのおもちゃ REBOOT',
'channel_id': 'ch2642088',
'channel_url': 'https://ch.nicovideo.jp/channel/ch2642088',
'comment_count': int,
'description': 'md5:8d0bb5beaca73b911725478a1e7c7b91',
'live_status': 'is_live',
'thumbnail': r're:https?://.+',
'timestamp': 1617029400,
'upload_date': '20210329',
'uploader': '株式会社ドワンゴ',
'view_count': int,
},
'params': {'skip_download': True},
}]
def _real_extract(self, url):
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id, expected_status=404)
webpage, urlh = self._download_webpage_handle(url, video_id, expected_status=404)
if err_msg := traverse_obj(webpage, ({find_element(cls='message')}, {clean_html})):
raise ExtractorError(err_msg, expected=True)
age_limit = 18 if 'age_auth' in urlh.url else None
if age_limit:
if not self.is_logged_in:
self.raise_login_required('Login is required to access age-restricted content')
my = self._download_webpage('https://www.nicovideo.jp/my', None, 'Checking age verification')
if traverse_obj(my, (
{find_element(id='js-initial-userpage-data', html=True)}, {extract_attributes},
'data-environment', {json.loads}, 'allowSensitiveContents', {bool},
)):
self._set_cookie('.nicovideo.jp', 'age_auth', '1')
webpage = self._download_webpage(url, video_id)
else:
raise ExtractorError('Sensitive content setting must be enabled', expected=True)
embedded_data = traverse_obj(webpage, (
{find_element(tag='script', id='embedded-data', html=True)},
{extract_attributes}, 'data-props', {json.loads}))
@@ -1008,6 +1036,7 @@ class NiconicoLiveIE(NiconicoBaseIE):
return {
'id': video_id,
'title': title,
'age_limit': age_limit,
'downloader_options': {
'max_quality': traverse_obj(embedded_data, ('program', 'stream', 'maxQuality', {str})) or 'normal',
'ws': ws,


@@ -446,9 +446,10 @@ class NRKTVIE(InfoExtractor):
class NRKTVEpisodeIE(InfoExtractor):
_VALID_URL = r'https?://tv\.nrk\.no/serie/(?P<id>[^/]+/sesong/(?P<season_number>\d+)/episode/(?P<episode_number>\d+))'
_VALID_URL = r'https?://tv\.nrk\.no/serie/(?P<id>[^/?#]+/sesong/(?P<season_number>\d+)/episode/(?P<episode_number>\d+))'
_TESTS = [{
'url': 'https://tv.nrk.no/serie/hellums-kro/sesong/1/episode/2',
'add_ie': [NRKIE.ie_key()],
'info_dict': {
'id': 'MUHH36005220',
'ext': 'mp4',
@@ -485,26 +486,23 @@ class NRKTVEpisodeIE(InfoExtractor):
}]
def _real_extract(self, url):
display_id, season_number, episode_number = self._match_valid_url(url).groups()
display_id, season_number, episode_number = self._match_valid_url(url).group(
'id', 'season_number', 'episode_number')
webpage, urlh = self._download_webpage_handle(url, display_id)
webpage = self._download_webpage(url, display_id)
if NRKTVIE.suitable(urlh.url):
nrk_id = NRKTVIE._match_id(urlh.url)
else:
nrk_id = self._search_json(
r'<script\b[^>]+\bid="pageData"[^>]*>', webpage,
'page data', display_id)['initialState']['selectedEpisodePrfId']
if not re.fullmatch(NRKTVIE._EPISODE_RE, nrk_id):
raise ExtractorError('Unable to extract NRK ID')
info = self._search_json_ld(webpage, display_id, default={})
nrk_id = info.get('@id') or self._html_search_meta(
'nrk:program-id', webpage, default=None) or self._search_regex(
rf'data-program-id=["\']({NRKTVIE._EPISODE_RE})', webpage,
'nrk id')
assert re.match(NRKTVIE._EPISODE_RE, nrk_id)
info.update({
'_type': 'url',
'id': nrk_id,
'url': f'nrk:{nrk_id}',
'ie_key': NRKIE.ie_key(),
'season_number': int(season_number),
'episode_number': int(episode_number),
})
return info
return self.url_result(
f'nrk:{nrk_id}', NRKIE, nrk_id,
season_number=int(season_number),
episode_number=int(episode_number))
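# As a sketch, the fallback branch above expects the embedded pageData script to carry
# something like the following (the ID value is hypothetical but mirrors the test above):
page_data = {
    'initialState': {
        'selectedEpisodePrfId': 'MUHH36005220',  # must satisfy NRKTVIE._EPISODE_RE
    },
}
# The matched ID is then delegated to NRKIE via an 'nrk:MUHH36005220' url_result.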
class NRKTVSerieBaseIE(NRKBaseIE):


@@ -352,14 +352,6 @@ class OdnoklassnikiIE(InfoExtractor):
'subtitles': subtitles,
}
# pladform
if provider == 'OPEN_GRAPH':
info.update({
'_type': 'url_transparent',
'url': movie['contentId'],
})
return info
if provider == 'USER_YOUTUBE':
info.update({
'_type': 'url_transparent',


@@ -1,135 +0,0 @@
from .common import InfoExtractor
from ..utils import (
ExtractorError,
determine_ext,
int_or_none,
parse_qs,
qualities,
xpath_text,
)
class PladformIE(InfoExtractor):
_VALID_URL = r'''(?x)
https?://
(?:
(?:
out\.pladform\.ru/player|
static\.pladform\.ru/player\.swf
)
\?.*\bvideoid=|
video\.pladform\.ru/catalog/video/videoid/
)
(?P<id>\d+)
'''
_EMBED_REGEX = [r'<iframe[^>]+src=(["\'])(?P<url>(?:https?:)?//out\.pladform\.ru/player\?.+?)\1']
_TESTS = [{
'url': 'http://out.pladform.ru/player?pl=18079&type=html5&videoid=100231282',
'info_dict': {
'id': '6216d548e755edae6e8280667d774791',
'ext': 'mp4',
'timestamp': 1406117012,
'title': 'Гарик Мартиросян и Гарик Харламов - Кастинг на концерт ко Дню милиции',
'age_limit': 0,
'upload_date': '20140723',
'thumbnail': str,
'view_count': int,
'description': str,
'uploader_id': '12082',
'uploader': 'Comedy Club',
'duration': 367,
},
'expected_warnings': ['HTTP Error 404: Not Found'],
}, {
'url': 'https://out.pladform.ru/player?pl=64471&videoid=3777899&vk_puid15=0&vk_puid34=0',
'md5': '53362fac3a27352da20fa2803cc5cd6f',
'info_dict': {
'id': '3777899',
'ext': 'mp4',
'title': 'СТУДИЯ СОЮЗ • Шоу Студия Союз, 24 выпуск (01.02.2018) Нурлан Сабуров и Слава Комиссаренко',
'description': 'md5:05140e8bf1b7e2d46e7ba140be57fd95',
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 3190,
},
}, {
'url': 'http://static.pladform.ru/player.swf?pl=21469&videoid=100183293&vkcid=0',
'only_matching': True,
}, {
'url': 'http://video.pladform.ru/catalog/video/videoid/100183293/vkcid/0',
'only_matching': True,
}]
def _real_extract(self, url):
video_id = self._match_id(url)
qs = parse_qs(url)
pl = qs.get('pl', ['1'])[0]
video = self._download_xml(
'http://out.pladform.ru/getVideo', video_id, query={
'pl': pl,
'videoid': video_id,
}, fatal=False)
def fail(text):
raise ExtractorError(
f'{self.IE_NAME} returned error: {text}',
expected=True)
if not video:
target_url = self._request_webpage(url, video_id, note='Resolving final URL').url
if target_url == url:
raise ExtractorError('Can\'t parse page')
return self.url_result(target_url)
if video.tag == 'error':
fail(video.text)
quality = qualities(('ld', 'sd', 'hd'))
formats = []
for src in video.findall('./src'):
if src is None:
continue
format_url = src.text
if not format_url:
continue
if src.get('type') == 'hls' or determine_ext(format_url) == 'm3u8':
formats.extend(self._extract_m3u8_formats(
format_url, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
else:
formats.append({
'url': src.text,
'format_id': src.get('quality'),
'quality': quality(src.get('quality')),
})
if not formats:
error = xpath_text(video, './cap', 'error', default=None)
if error:
fail(error)
webpage = self._download_webpage(
f'http://video.pladform.ru/catalog/video/videoid/{video_id}',
video_id)
title = self._og_search_title(webpage, fatal=False) or xpath_text(
video, './/title', 'title', fatal=True)
description = self._search_regex(
r'</h3>\s*<p>([^<]+)</p>', webpage, 'description', fatal=False)
thumbnail = self._og_search_thumbnail(webpage) or xpath_text(
video, './/cover', 'cover')
duration = int_or_none(xpath_text(video, './/time', 'duration'))
age_limit = int_or_none(xpath_text(video, './/age18', 'age limit'))
return {
'id': video_id,
'title': title,
'description': description,
'thumbnail': thumbnail,
'duration': duration,
'age_limit': age_limit,
'formats': formats,
}


@@ -6,6 +6,7 @@ from ..utils import (
int_or_none,
parse_resolution,
str_or_none,
traverse_obj,
try_get,
unified_timestamp,
url_or_none,
@@ -18,20 +19,21 @@ class PuhuTVIE(InfoExtractor):
IE_NAME = 'puhutv'
_TESTS = [{
# film
'url': 'https://puhutv.com/sut-kardesler-izle',
'md5': 'a347470371d56e1585d1b2c8dab01c96',
'url': 'https://puhutv.com/bi-kucuk-eylul-meselesi-izle',
'md5': '4de98170ccb84c05779b1f046b3c86f8',
'info_dict': {
'id': '5085',
'display_id': 'sut-kardesler',
'id': '11909',
'display_id': 'bi-kucuk-eylul-meselesi',
'ext': 'mp4',
'title': 'Süt Kardeşler',
'description': 'md5:ca09da25b7e57cbb5a9280d6e48d17aa',
'title': 'Bi Küçük Eylül Meselesi',
'description': 'md5:c2ab964c6542b7b26acacca0773adce2',
'thumbnail': r're:^https?://.*\.jpg$',
'duration': 4832.44,
'creator': 'Arzu Film',
'timestamp': 1561062602,
'duration': 6176.96,
'creator': 'Ay Yapım',
'creators': ['Ay Yapım'],
'timestamp': 1561062749,
'upload_date': '20190620',
'release_year': 1976,
'release_year': 2014,
'view_count': int,
'tags': list,
},
@@ -181,7 +183,7 @@ class PuhuTVSerieIE(InfoExtractor):
'playlist_mincount': 205,
}, {
# a film detail page which is using same url with serie page
'url': 'https://puhutv.com/kaybedenler-kulubu-detay',
'url': 'https://puhutv.com/bizim-icin-sampiyon-detay',
'only_matching': True,
}]
@@ -194,24 +196,19 @@ class PuhuTVSerieIE(InfoExtractor):
has_more = True
while has_more is True:
season = self._download_json(
f'https://galadriel.puhutv.com/seasons/{season_id}',
f'https://appservice.puhutv.com/api/seasons/{season_id}/episodes?v=2',
season_id, f'Downloading page {page}', query={
'page': page,
'per': 40,
})
episodes = season.get('episodes')
if isinstance(episodes, list):
for ep in episodes:
slug_path = str_or_none(ep.get('slugPath'))
if not slug_path:
continue
video_id = str_or_none(int_or_none(ep.get('id')))
yield self.url_result(
f'https://puhutv.com/{slug_path}',
ie=PuhuTVIE.ie_key(), video_id=video_id,
video_title=ep.get('name') or ep.get('eventLabel'))
'per': 100,
})['data']
for episode in traverse_obj(season, ('episodes', lambda _, v: v.get('slug') or v['assets'][0]['slug'])):
slug = episode.get('slug') or episode['assets'][0]['slug']
yield self.url_result(
f'https://puhutv.com/{slug}', PuhuTVIE, episode.get('id'), episode.get('name'))
page += 1
has_more = season.get('hasMore')
has_more = traverse_obj(season, 'has_more')
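# For reference, a minimal sketch of the 'data' object this pagination loop consumes;
# the values are hypothetical, only the key names come from the traversal above:
example_season_data = {
    'episodes': [{
        'id': 117933,
        'name': 'Örnek Bölüm',
        'slug': 'ornek-dizi-1-bolum-izle',
        'assets': [{'slug': 'ornek-dizi-1-bolum-izle'}],  # fallback when 'slug' is absent
    }],
    'has_more': False,  # a falsy value ends the while loop
}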
def _real_extract(self, url):
playlist_id = self._match_id(url)
@@ -226,7 +223,10 @@ class PuhuTVSerieIE(InfoExtractor):
self._extract_entries(seasons), playlist_id, info.get('name'))
# For films, these are using same url with series
video_id = info.get('slug') or info['assets'][0]['slug']
video_id = info.get('slug')
if video_id:
video_id = video_id.removesuffix('-detay')
else:
video_id = info['assets'][0]['slug'].removesuffix('-izle')
return self.url_result(
f'https://puhutv.com/{video_id}-izle',
PuhuTVIE.ie_key(), video_id)
f'https://puhutv.com/{video_id}-izle', PuhuTVIE, video_id)


@@ -1,58 +1,94 @@
from .mtv import MTVServicesInfoExtractor
from .mtv import MTVServicesBaseIE
class SouthParkIE(MTVServicesInfoExtractor):
class SouthParkIE(MTVServicesBaseIE):
IE_NAME = 'southpark.cc.com'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark(?:\.cc|studios)\.com/((?:video-)?clips|(?:full-)?episodes|collections)/(?P<id>.+?)(\?|#|$))'
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
_VALID_URL = r'https?://(?:www\.)?southpark(?:\.cc|studios)\.com/(?:video-clips|episodes|collections)/(?P<id>[^?#]+)'
_TESTS = [{
'url': 'https://southpark.cc.com/video-clips/d7wr06/south-park-you-all-agreed-to-counseling',
'info_dict': {
'id': '31929ad5-8269-11eb-8774-70df2f866ace',
'ext': 'mp4',
'display_id': 'd7wr06/south-park-you-all-agreed-to-counseling',
'title': 'You All Agreed to Counseling',
'description': 'Kenny, Cartman, Stan, and Kyle visit Mr. Mackey and ask for his help getting Mrs. Nelson to come back. Mr. Mackey reveals the only way to get things back to normal is to get the teachers vaccinated.',
'description': 'md5:01f78fb306c7042f3f05f3c78edfc212',
'duration': 134.552,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 24',
'season_number': 24,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 1615352400,
'upload_date': '20210310',
'release_timestamp': 1615352400,
'release_date': '20210310',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'http://southpark.cc.com/collections/7758/fan-favorites/1',
'url': 'https://southpark.cc.com/episodes/940f8z/south-park-cartman-gets-an-anal-probe-season-1-ep-1',
'info_dict': {
'id': '5fb8887e-ecfd-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'display_id': '940f8z/south-park-cartman-gets-an-anal-probe-season-1-ep-1',
'title': 'Cartman Gets An Anal Probe',
'description': 'md5:964e1968c468545752feef102b140300',
'channel': 'Comedy Central',
'duration': 1319.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 1',
'season_number': 1,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 871473600,
'upload_date': '19970813',
'release_timestamp': 871473600,
'release_date': '19970813',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://southpark.cc.com/collections/dejukt/south-park-best-of-mr-mackey/tphx9j',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.com/episodes/h4o269/south-park-stunning-and-brave-season-19-ep-1',
'only_matching': True,
}]
def _get_feed_query(self, uri):
return {
'accountOverride': 'intl.mtvi.com',
'arcEp': 'shared.southpark.global',
'ep': '90877963',
'imageEp': 'shared.southpark.global',
'mgid': uri,
}
class SouthParkEsIE(SouthParkIE): # XXX: Do not subclass from concrete IE
class SouthParkEsIE(MTVServicesBaseIE):
IE_NAME = 'southpark.cc.com:español'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark\.cc\.com/es/episodios/(?P<id>.+?)(\?|#|$))'
_LANG = 'es'
_VALID_URL = r'https?://(?:www\.)?southpark\.cc\.com/es/episodios/(?P<id>[^?#]+)'
_TESTS = [{
'url': 'http://southpark.cc.com/es/episodios/s01e01-cartman-consigue-una-sonda-anal#source=351c1323-0b96-402d-a8b9-40d01b2e9bde&position=1&sort=!airdate',
'url': 'https://southpark.cc.com/es/episodios/er4a32/south-park-aumento-de-peso-4000-temporada-1-ep-2',
'info_dict': {
'title': 'Cartman Consigue Una Sonda Anal',
'description': 'Cartman Consigue Una Sonda Anal',
'id': '5fb94f0c-ecfd-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'display_id': 'er4a32/south-park-aumento-de-peso-4000-temporada-1-ep-2',
'title': 'Aumento de peso 4000',
'description': 'md5:a939b4819ea74c245a0cde180de418c0',
'channel': 'Comedy Central',
'duration': 1320.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 1',
'season_number': 1,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 872078400,
'upload_date': '19970820',
'release_timestamp': 872078400,
'release_date': '19970820',
},
'playlist_count': 4,
'skip': 'Geo-restricted',
'params': {'skip_download': 'm3u8'},
}]
class SouthParkDeIE(SouthParkIE): # XXX: Do not subclass from concrete IE
class SouthParkDeIE(MTVServicesBaseIE):
IE_NAME = 'southpark.de'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark\.de/(?:(en/(videoclip|collections|episodes|video-clips))|(videoclip|collections|folgen))/(?P<id>(?P<unique_id>.+?)/.+?)(?:\?|#|$))'
_VALID_URL = r'https?://(?:www\.)?southpark\.de/(?:en/)?(?:videoclip|collections|episodes|video-clips|folgen)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['DE']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southpark.de/videoclip/rsribv/south-park-rueckzug-zum-gummibonbon-wald',
'only_matching': True,
@@ -66,123 +102,253 @@ class SouthParkDeIE(SouthParkIE): # XXX: Do not subclass from concrete IE
# clip
'url': 'https://www.southpark.de/en/video-clips/ct46op/south-park-tooth-fairy-cartman',
'info_dict': {
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'display_id': 'ct46op/south-park-tooth-fairy-cartman',
'title': 'Tooth Fairy Cartman',
'description': 'md5:db02e23818b4dc9cb5f0c5a7e8833a68',
'description': 'Cartman steals Butters\' tooth and gets four dollars for it.',
'duration': 93.26,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 4',
'season_number': 4,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 954990360,
'upload_date': '20000406',
'release_timestamp': 954990360,
'release_date': '20000406',
},
'params': {'skip_download': 'm3u8'},
}, {
# episode
'url': 'https://www.southpark.de/en/episodes/yy0vjs/south-park-the-pandemic-special-season-24-ep-1',
'info_dict': {
'id': 'f5fbd823-04bc-11eb-9b1b-0e40cf2fc285',
'ext': 'mp4',
'title': 'South Park',
'id': '230a4f02-f583-11ea-834d-70df2f866ace',
'display_id': 'yy0vjs/south-park-the-pandemic-special-season-24-ep-1',
'title': 'The Pandemic Special',
'description': 'md5:ae0d875eff169dcbed16b21531857ac1',
'channel': 'Comedy Central',
'duration': 2724.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 24',
'season_number': 24,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 1601932260,
'upload_date': '20201005',
'release_timestamp': 1601932270,
'release_date': '20201005',
},
'params': {'skip_download': 'm3u8'},
}, {
# clip
'url': 'https://www.southpark.de/videoclip/ct46op/south-park-zahnfee-cartman',
'info_dict': {
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'display_id': 'ct46op/south-park-zahnfee-cartman',
'title': 'Zahnfee Cartman',
'description': 'md5:b917eec991d388811d911fd1377671ac',
'duration': 93.26,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 4',
'season_number': 4,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 954990360,
'upload_date': '20000406',
'release_timestamp': 954990360,
'release_date': '20000406',
},
'params': {'skip_download': 'm3u8'},
}, {
# episode
'url': 'https://www.southpark.de/folgen/242csn/south-park-her-mit-dem-hirn-staffel-1-ep-7',
'url': 'https://www.southpark.de/folgen/4r4367/south-park-katerstimmung-staffel-12-ep-3',
'info_dict': {
'id': '607115f3-496f-40c3-8647-2b0bcff486c0',
'ext': 'mp4',
'title': 'md5:South Park | Pink Eye | E 0107 | HDSS0107X deu | Version: 634312 | Comedy Central S1',
'id': '68c79aa4-ecfd-11e0-aca6-0026b9414f30',
'display_id': '4r4367/south-park-katerstimmung-staffel-12-ep-3',
'title': 'Katerstimmung',
'description': 'md5:94e0e2cd568ffa635e0725518bb4b180',
'channel': 'Comedy Central',
'duration': 1320.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 12',
'season_number': 12,
'episode': 'Episode 3',
'episode_number': 3,
'timestamp': 1206504000,
'upload_date': '20080326',
'release_timestamp': 1206504000,
'release_date': '20080326',
},
'params': {'skip_download': 'm3u8'},
}]
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge&ref={url}', video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
def _get_feed_query(self, uri):
return
class SouthParkLatIE(SouthParkIE): # XXX: Do not subclass from concrete IE
class SouthParkLatIE(MTVServicesBaseIE):
IE_NAME = 'southpark.lat'
_VALID_URL = r'https?://(?:www\.)?southpark\.lat/(?:en/)?(?:video-?clips?|collections|episod(?:e|io)s)/(?P<id>[^/?#&]+)'
_VALID_URL = r'https?://(?:www\.)?southpark\.lat/(?:en/)?(?:video-?clips?|collections|episod(?:e|io)s)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['MX']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southpark.lat/en/video-clips/ct46op/south-park-tooth-fairy-cartman',
'only_matching': True,
'info_dict': {
'ext': 'mp4',
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'display_id': 'ct46op/south-park-tooth-fairy-cartman',
'title': 'Tooth Fairy Cartman',
'description': 'Cartman steals Butters\' tooth and gets four dollars for it.',
'duration': 93.26,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 4',
'season_number': 4,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 954990360,
'upload_date': '20000406',
'release_timestamp': 954990360,
'release_date': '20000406',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southpark.lat/episodios/9h0qbg/south-park-orgia-gatuna-temporada-3-ep-7',
'only_matching': True,
'info_dict': {
'ext': 'mp4',
'id': '600d273a-ecfd-11e0-aca6-0026b9414f30',
'display_id': '9h0qbg/south-park-orgia-gatuna-temporada-3-ep-7',
'title': 'Orgía Gatuna ',
'description': 'md5:73c6648413f5977026abb792a25c65d5',
'channel': 'Comedy Central',
'duration': 1319.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 3',
'season_number': 3,
'episode': 'Episode 7',
'episode_number': 7,
'timestamp': 931924800,
'upload_date': '19990714',
'release_timestamp': 931924800,
'release_date': '19990714',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southpark.lat/en/collections/29ve08/south-park-heating-up/lydbrc',
'only_matching': True,
}, {
# clip
'url': 'https://www.southpark.lat/en/video-clips/ct46op/south-park-tooth-fairy-cartman',
'info_dict': {
'id': 'e99d45ea-ed00-11e0-aca6-0026b9414f30',
'ext': 'mp4',
'title': 'Tooth Fairy Cartman',
'description': 'md5:db02e23818b4dc9cb5f0c5a7e8833a68',
},
}, {
# episode
'url': 'https://www.southpark.lat/episodios/9h0qbg/south-park-orgia-gatuna-temporada-3-ep-7',
'info_dict': {
'id': 'f5fbd823-04bc-11eb-9b1b-0e40cf2fc285',
'ext': 'mp4',
'title': 'South Park',
'description': 'md5:ae0d875eff169dcbed16b21531857ac1',
},
}]
def _get_feed_url(self, uri, url=None):
video_id = self._id_from_uri(uri)
config = self._download_json(
f'http://media.mtvnservices.com/pmt/e1/access/index.html?uri={uri}&configtype=edge&ref={url}',
video_id)
return self._remove_template_parameter(config['feedWithQueryParams'])
def _get_feed_query(self, uri):
return
class SouthParkNlIE(SouthParkIE): # XXX: Do not subclass from concrete IE
IE_NAME = 'southpark.nl'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southpark\.nl/(?:clips|(?:full-)?episodes|collections)/(?P<id>.+?)(\?|#|$))'
_FEED_URL = 'http://www.southpark.nl/feeds/video-player/mrss/'
_TESTS = [{
'url': 'http://www.southpark.nl/full-episodes/s18e06-freemium-isnt-free',
'info_dict': {
'title': 'Freemium Isn\'t Free',
'description': 'Stan is addicted to the new Terrance and Phillip mobile game.',
},
'playlist_mincount': 3,
}]
class SouthParkDkIE(SouthParkIE): # XXX: Do not subclass from concrete IE
IE_NAME = 'southparkstudios.dk'
_VALID_URL = r'https?://(?:www\.)?(?P<url>southparkstudios\.(?:dk|nu)/(?:clips|full-episodes|collections)/(?P<id>.+?)(\?|#|$))'
_FEED_URL = 'http://www.southparkstudios.dk/feeds/video-player/mrss/'
class SouthParkDkIE(MTVServicesBaseIE):
IE_NAME = 'southparkstudios.nu'
_VALID_URL = r'https?://(?:www\.)?southparkstudios\.nu/(?:video-clips|episodes|collections)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['DK']
_GEO_BYPASS = True
_TESTS = [{
'url': 'http://www.southparkstudios.dk/full-episodes/s18e07-grounded-vindaloop',
'url': 'https://www.southparkstudios.nu/episodes/y3uvvc/south-park-grounded-vindaloop-season-18-ep-7',
'info_dict': {
'ext': 'mp4',
'id': 'f60690a7-21a7-4ee7-8834-d7099a8707ab',
'display_id': 'y3uvvc/south-park-grounded-vindaloop-season-18-ep-7',
'title': 'Grounded Vindaloop',
'description': 'Butters is convinced he\'s living in a virtual reality.',
'channel': 'Comedy Central',
'duration': 1319.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 18',
'season_number': 18,
'episode': 'Episode 7',
'episode_number': 7,
'timestamp': 1415847600,
'upload_date': '20141113',
'release_timestamp': 1415768400,
'release_date': '20141112',
},
'playlist_mincount': 3,
'params': {'skip_download': 'm3u8'},
}, {
'url': 'http://www.southparkstudios.dk/collections/2476/superhero-showdown/1',
'url': 'https://www.southparkstudios.nu/collections/8dk7kr/south-park-best-of-south-park/sd5ean',
'only_matching': True,
}, {
'url': 'http://www.southparkstudios.nu/collections/2476/superhero-showdown/1',
'url': 'https://www.southparkstudios.nu/video-clips/k42mrf/south-park-kick-the-baby',
'only_matching': True,
}]
class SouthParkComBrIE(MTVServicesBaseIE):
IE_NAME = 'southparkstudios.com.br'
_VALID_URL = r'https?://(?:www\.)?southparkstudios\.com\.br/(?:en/)?(?:video-clips|episodios|collections|episodes)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['BR']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southparkstudios.com.br/video-clips/3vifo0/south-park-welcome-to-mar-a-lago7',
'info_dict': {
'ext': 'mp4',
'id': 'ccc3e952-7352-11f0-b405-16fff45bc035',
'display_id': '3vifo0/south-park-welcome-to-mar-a-lago7',
'title': 'Welcome to Mar-a-Lago',
'description': 'The President welcomes Mr. Mackey to Mar-a-Lago, a magical place where anything can happen.',
'duration': 139.223,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 27',
'season_number': 27,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 1754546400,
'upload_date': '20250807',
'release_timestamp': 1754546400,
'release_date': '20250807',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southparkstudios.com.br/episodios/940f8z/south-park-cartman-ganha-uma-sonda-anal-temporada-1-ep-1',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.com.br/collections/8dk7kr/south-park-best-of-south-park/sd5ean',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.com.br/en/episodes/5v0oap/south-park-south-park-the-25th-anniversary-concert-ep-1',
'only_matching': True,
}]
class SouthParkCoUkIE(MTVServicesBaseIE):
IE_NAME = 'southparkstudios.co.uk'
_VALID_URL = r'https?://(?:www\.)?southparkstudios\.co\.uk/(?:video-clips|collections|episodes)/(?P<id>[^?#]+)'
_GEO_COUNTRIES = ['UK']
_GEO_BYPASS = True
_TESTS = [{
'url': 'https://www.southparkstudios.co.uk/video-clips/8kabfr/south-park-respectclydesauthority',
'info_dict': {
'ext': 'mp4',
'id': 'f6d9af23-734e-11f0-b405-16fff45bc035',
'display_id': '8kabfr/south-park-respectclydesauthority',
'title': '#RespectClydesAuthority',
'description': 'After learning about Clyde\'s Podcast, Cartman needs to see it for himself.',
'duration': 45.045,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'South Park',
'season': 'Season 27',
'season_number': 27,
'episode': 'Episode 2',
'episode_number': 2,
'timestamp': 1754546400,
'upload_date': '20250807',
'release_timestamp': 1754546400,
'release_date': '20250807',
},
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.southparkstudios.co.uk/episodes/e1yoxn/south-park-imaginationland-season-11-ep-10',
'only_matching': True,
}, {
'url': 'https://www.southparkstudios.co.uk/collections/8dk7kr/south-park-best-of-south-park/sd5ean',
'only_matching': True,
}]


@@ -1,46 +0,0 @@
from .mtv import MTVServicesInfoExtractor
class BellatorIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?bellator\.com/[^/]+/[\da-z]{6}(?:[/?#&]|$)'
_TESTS = [{
'url': 'http://www.bellator.com/fight/atwr7k/bellator-158-michael-page-vs-evangelista-cyborg',
'info_dict': {
'title': 'Michael Page vs. Evangelista Cyborg',
'description': 'md5:0d917fc00ffd72dd92814963fc6cbb05',
},
'playlist_count': 3,
}, {
'url': 'http://www.bellator.com/video-clips/bw6k7n/bellator-158-foundations-michael-venom-page',
'only_matching': True,
}]
_FEED_URL = 'http://www.bellator.com/feeds/mrss/'
_GEO_COUNTRIES = ['US']
class ParamountNetworkIE(MTVServicesInfoExtractor):
_VALID_URL = r'https?://(?:www\.)?paramountnetwork\.com/[^/]+/[\da-z]{6}(?:[/?#&]|$)'
_TESTS = [{
'url': 'http://www.paramountnetwork.com/episodes/j830qm/lip-sync-battle-joel-mchale-vs-jim-rash-season-2-ep-13',
'info_dict': {
'id': '37ace3a8-1df6-48be-85b8-38df8229e241',
'ext': 'mp4',
'title': 'Lip Sync Battle|April 28, 2016|2|209|Joel McHale Vs. Jim Rash|Act 1',
'description': 'md5:a739ca8f978a7802f67f8016d27ce114',
},
'params': {
# m3u8 download
'skip_download': True,
},
}]
_FEED_URL = 'http://feeds.mtvnservices.com/od/feed/intl-mrss-player-feed'
_GEO_COUNTRIES = ['US']
def _get_feed_query(self, uri):
return {
'arcEp': 'paramountnetwork.com',
'imageEp': 'paramountnetwork.com',
'mgid': uri,
}


@@ -1,11 +1,19 @@
import json
import re
from .common import InfoExtractor
from ..utils import (
ExtractorError,
clean_html,
extract_attributes,
get_element_by_class,
str_or_none,
url_or_none,
)
from ..utils.traversal import (
find_element,
find_elements,
traverse_obj,
trim_str,
)
@@ -19,44 +27,22 @@ class SteamIE(InfoExtractor):
|
https?://(?:www\.)?steamcommunity\.com/sharedfiles/filedetails/\?id=(?P<fileID>[0-9]+)
'''
_VIDEO_PAGE_TEMPLATE = 'http://store.steampowered.com/video/%s/'
_AGECHECK_TEMPLATE = 'http://store.steampowered.com/agecheck/video/%s/?snr=1_agecheck_agecheck__age-gate&ageDay=1&ageMonth=January&ageYear=1970'
_VIDEO_PAGE_TEMPLATE = 'https://store.steampowered.com/video/%s/'
_AGECHECK_TEMPLATE = 'https://store.steampowered.com/agecheck/video/%s/?snr=1_agecheck_agecheck__age-gate&ageDay=1&ageMonth=January&ageYear=1970'
_TESTS = [{
'url': 'http://store.steampowered.com/video/105600/',
'playlist': [
{
'md5': '695242613303ffa2a4c44c9374ddc067',
'info_dict': {
'id': '256785003',
'ext': 'mp4',
'title': 'Terraria video 256785003',
'thumbnail': r're:^https://cdn\.[^\.]+\.steamstatic\.com',
},
},
{
'md5': '6a294ee0c4b1f47f5bb76a65e31e3592',
'info_dict': {
'id': '2040428',
'ext': 'mp4',
'title': 'Terraria video 2040428',
'thumbnail': r're:^https://cdn\.[^\.]+\.steamstatic\.com',
},
},
],
'url': 'https://store.steampowered.com/video/105600/',
'info_dict': {
'id': '105600',
'title': 'Terraria',
},
'params': {
'playlistend': 2,
},
'playlist_mincount': 3,
}, {
'url': 'https://store.steampowered.com/app/271590/Grand_Theft_Auto_V/',
'info_dict': {
'id': '271590',
'title': 'Grand Theft Auto V',
'title': 'Grand Theft Auto V Legacy',
},
'playlist_count': 23,
'playlist_mincount': 26,
}]
def _real_extract(self, url):
@@ -81,32 +67,29 @@ class SteamIE(InfoExtractor):
self.report_age_confirmation()
webpage = self._download_webpage(video_url, playlist_id)
videos = re.findall(r'(<div[^>]+id=[\'"]highlight_movie_(\d+)[\'"][^>]+>)', webpage)
app_name = traverse_obj(webpage, ({find_element(cls='apphub_AppName')}, {clean_html}))
entries = []
playlist_title = get_element_by_class('apphub_AppName', webpage)
for movie, movie_id in videos:
if not movie:
continue
movie = extract_attributes(movie)
if not movie_id:
continue
entry = {
'id': movie_id,
'title': f'{playlist_title} video {movie_id}',
}
for data_prop in traverse_obj(webpage, (
{find_elements(cls='highlight_player_item highlight_movie', html=True)},
..., {extract_attributes}, 'data-props', {json.loads}, {dict},
)):
formats = []
if movie:
entry['thumbnail'] = movie.get('data-poster')
for quality in ('', '-hd'):
for ext in ('webm', 'mp4'):
video_url = movie.get(f'data-{ext}{quality}-source')
if video_url:
formats.append({
'format_id': ext + quality,
'url': video_url,
})
entry['formats'] = formats
entries.append(entry)
if hls_manifest := traverse_obj(data_prop, ('hlsManifest', {url_or_none})):
formats.extend(self._extract_m3u8_formats(
hls_manifest, playlist_id, 'mp4', m3u8_id='hls', fatal=False))
for dash_manifest in traverse_obj(data_prop, ('dashManifests', ..., {url_or_none})):
formats.extend(self._extract_mpd_formats(
dash_manifest, playlist_id, mpd_id='dash', fatal=False))
movie_id = traverse_obj(data_prop, ('id', {trim_str(start='highlight_movie_')}))
entries.append({
'id': movie_id,
'title': f'{app_name} video {movie_id}',
'formats': formats,
'thumbnail': traverse_obj(data_prop, ('screenshot', {url_or_none})),
})
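# Hypothetical shape of one data-props JSON blob consumed above -- the field names
# (id/hlsManifest/dashManifests/screenshot) are taken from the traversal paths; every
# value is invented for illustration:
example_data_prop = {
    'id': 'highlight_movie_256785003',
    'hlsManifest': 'https://video.example/steam/256785003/master.m3u8',
    'dashManifests': ['https://video.example/steam/256785003/manifest.mpd'],
    'screenshot': 'https://video.example/steam/256785003/poster.jpg',
}
# trim_str(start='highlight_movie_') strips the prefix, leaving '256785003' as the entry ID.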
embedded_videos = re.findall(r'(<iframe[^>]+>)', webpage)
for evideos in embedded_videos:
evideos = extract_attributes(evideos).get('src')
@@ -121,7 +104,7 @@ class SteamIE(InfoExtractor):
if not entries:
raise ExtractorError('Could not find any videos')
return self.playlist_result(entries, playlist_id, playlist_title)
return self.playlist_result(entries, playlist_id, app_name)
class SteamCommunityBroadcastIE(InfoExtractor):

View File

@@ -88,7 +88,7 @@ class SVTBaseIE(InfoExtractor):
'id': video_id,
'title': title,
'formats': formats,
'subtitles': subtitles,
'subtitles': self._fixup_subtitles(subtitles),
'duration': duration,
'timestamp': timestamp,
'age_limit': age_limit,
@@ -99,6 +99,16 @@ class SVTBaseIE(InfoExtractor):
'is_live': is_live,
}
@staticmethod
def _fixup_subtitles(subtitles):
# See: https://github.com/yt-dlp/yt-dlp/issues/14020
fixed_subtitles = {}
for lang, subs in subtitles.items():
for sub in subs:
fixed_lang = f'{lang}-forced' if 'text-open' in sub['url'] else lang
fixed_subtitles.setdefault(fixed_lang, []).append(sub)
return fixed_subtitles
class SVTPlayIE(SVTBaseIE):
IE_NAME = 'svt:play'
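# Illustrative check of the _fixup_subtitles helper above (hedged sketch: the subtitle
# URLs are invented; only the 'text-open' marker is real). Standalone use would need
# `from yt_dlp.extractor.svt import SVTBaseIE`.
_subs_in = {'sv': [
    {'url': 'https://example.svt.se/subs/text-open/ep1.vtt', 'ext': 'vtt'},
    {'url': 'https://example.svt.se/subs/text/ep1.vtt', 'ext': 'vtt'},
]}
assert SVTBaseIE._fixup_subtitles(_subs_in) == {
    'sv-forced': [{'url': 'https://example.svt.se/subs/text-open/ep1.vtt', 'ext': 'vtt'}],
    'sv': [{'url': 'https://example.svt.se/subs/text/ep1.vtt', 'ext': 'vtt'}],
}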
@@ -115,6 +125,26 @@ class SVTPlayIE(SVTBaseIE):
)
'''
_TESTS = [{
'url': 'https://www.svtplay.se/video/eXYgwZb/sverige-och-kriget/1-utbrottet',
'md5': '2382036fd6f8c994856c323fe51c426e',
'info_dict': {
'id': 'ePBvGRq',
'ext': 'mp4',
'title': '1. Utbrottet',
'description': 'md5:02291cc3159dbc9aa95d564e77a8a92b',
'series': 'Sverige och kriget',
'episode': '1. Utbrottet',
'timestamp': 1746921600,
'upload_date': '20250511',
'duration': 3585,
'thumbnail': r're:^https?://(?:.*[\.-]jpg|www.svtstatic.se/image/.*)$',
'age_limit': 0,
'subtitles': {'sv': 'count:3', 'sv-forced': 'count:3'},
},
'params': {
'skip_download': 'm3u8',
},
}, {
'url': 'https://www.svtplay.se/video/30479064',
'md5': '2382036fd6f8c994856c323fe51c426e',
'info_dict': {

View File

@@ -65,7 +65,7 @@ class TikTokBaseIE(InfoExtractor):
@functools.cached_property
def _DEVICE_ID(self):
return self._KNOWN_DEVICE_ID or str(random.randint(7250000000000000000, 7351147085025500000))
return self._KNOWN_DEVICE_ID or str(random.randint(7250000000000000000, 7325099899999994577))
@functools.cached_property
def _API_HOSTNAME(self):
@@ -921,6 +921,8 @@ class TikTokUserIE(TikTokBaseIE):
'id': 'MS4wLjABAAAAepiJKgwWhulvCpSuUVsp7sgVVsFJbbNaLeQ6OQ0oAJERGDUIXhb2yxxHZedsItgT',
'title': 'corgibobaa',
},
'expected_warnings': ['TikTok API keeps sending the same page'],
'params': {'extractor_retries': 10},
}, {
'url': 'https://www.tiktok.com/@6820838815978423302',
'playlist_mincount': 5,
@@ -928,6 +930,8 @@ class TikTokUserIE(TikTokBaseIE):
'id': 'MS4wLjABAAAA0tF1nBwQVVMyrGu3CqttkNgM68Do1OXUFuCY0CRQk8fEtSVDj89HqoqvbSTmUP2W',
'title': '6820838815978423302',
},
'expected_warnings': ['TikTok API keeps sending the same page'],
'params': {'extractor_retries': 10},
}, {
'url': 'https://www.tiktok.com/@meme',
'playlist_mincount': 593,
@@ -935,14 +939,17 @@ class TikTokUserIE(TikTokBaseIE):
'id': 'MS4wLjABAAAAiKfaDWeCsT3IHwY77zqWGtVRIy9v4ws1HbVi7auP1Vx7dJysU_hc5yRiGywojRD6',
'title': 'meme',
},
'expected_warnings': ['TikTok API keeps sending the same page'],
'params': {'extractor_retries': 10},
}, {
'url': 'tiktokuser:MS4wLjABAAAAM3R2BtjzVT-uAtstkl2iugMzC6AtnpkojJbjiOdDDrdsTiTR75-8lyWJCY5VvDrZ',
'playlist_mincount': 31,
'info_dict': {
'id': 'MS4wLjABAAAAM3R2BtjzVT-uAtstkl2iugMzC6AtnpkojJbjiOdDDrdsTiTR75-8lyWJCY5VvDrZ',
},
'expected_warnings': ['TikTok API keeps sending the same page'],
'params': {'extractor_retries': 10},
}]
_USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0'
_API_BASE_URL = 'https://www.tiktok.com/api/creator/item_list/'
def _build_web_query(self, sec_uid, cursor):
@@ -980,15 +987,29 @@ class TikTokUserIE(TikTokBaseIE):
'webcast_language': 'en',
}
def _entries(self, sec_uid, user_name):
def _entries(self, sec_uid, user_name, fail_early=False):
display_id = user_name or sec_uid
seen_ids = set()
cursor = int(time.time() * 1E3)
for page in itertools.count(1):
response = self._download_json(
self._API_BASE_URL, display_id, f'Downloading page {page}',
query=self._build_web_query(sec_uid, cursor), headers={'User-Agent': self._USER_AGENT})
for retry in self.RetryManager():
response = self._download_json(
self._API_BASE_URL, display_id, f'Downloading page {page}',
query=self._build_web_query(sec_uid, cursor))
# Avoid infinite loop caused by bad device_id
# See: https://github.com/yt-dlp/yt-dlp/issues/14031
current_batch = sorted(traverse_obj(response, ('itemList', ..., 'id', {str})))
if current_batch and current_batch == sorted(seen_ids):
message = 'TikTok API keeps sending the same page'
if self._KNOWN_DEVICE_ID:
raise ExtractorError(
f'{message}. Try again with a different device_id', expected=True)
# The user didn't pass a device_id so we can reset it and retry
del self._DEVICE_ID
retry.error = ExtractorError(
f'{message}. Taking measures to avoid an infinite loop', expected=True)
for video in traverse_obj(response, ('itemList', lambda _, v: v['id'])):
video_id = video['id']
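# Aside on the `del self._DEVICE_ID` above -- a stand-alone sketch, not extractor code:
# functools.cached_property stores its value in the instance __dict__, so deleting the
# attribute clears the cache and the next access generates a fresh random ID.
import functools
import random

class _CachedIdDemo:
    @functools.cached_property
    def device_id(self):
        return str(random.getrandbits(62))

demo = _CachedIdDemo()
first = demo.device_id
del demo.device_id        # same mechanism as `del self._DEVICE_ID`
second = demo.device_id   # recomputed; almost certainly differs from `first`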
@@ -1008,42 +1029,60 @@ class TikTokUserIE(TikTokBaseIE):
cursor = old_cursor - 7 * 86_400_000
# In case 'hasMorePrevious' is wrong, break if we have gone back before TikTok existed
if cursor < 1472706000000 or not traverse_obj(response, 'hasMorePrevious'):
break
return
def _get_sec_uid(self, user_url, user_name, msg):
# This code path is ideally only reached when one of the following is true:
# 1. TikTok profile is private and webpage detection was bypassed due to a tiktokuser:sec_uid URL
# 2. TikTok profile is *not* private but all of their videos are private
if fail_early and not seen_ids:
self.raise_login_required(
'This user\'s account is likely either private or all of their videos are private. '
'Log into an account that has access')
def _extract_sec_uid_from_embed(self, user_name):
webpage = self._download_webpage(
user_url, user_name, fatal=False, headers={'User-Agent': 'Mozilla/5.0'},
note=f'Downloading {msg} webpage', errnote=f'Unable to download {msg} webpage') or ''
return (traverse_obj(self._get_universal_data(webpage, user_name),
('webapp.user-detail', 'userInfo', 'user', 'secUid', {str}))
or traverse_obj(self._get_sigi_state(webpage, user_name),
('LiveRoom', 'liveRoomUserInfo', 'user', 'secUid', {str}),
('UserModule', 'users', ..., 'secUid', {str}, any)))
f'https://www.tiktok.com/embed/@{user_name}', user_name,
'Downloading user embed page', errnote=False, fatal=False)
if not webpage:
self.report_warning('This user\'s account is either private or has embedding disabled')
return None
data = traverse_obj(self._search_json(
r'<script[^>]+\bid=[\'"]__FRONTITY_CONNECT_STATE__[\'"][^>]*>',
webpage, 'data', user_name, default={}),
('source', 'data', f'/embed/@{user_name}', {dict}))
for aweme_id in traverse_obj(data, ('videoList', ..., 'id', {str})):
webpage_url = self._create_url(user_name, aweme_id)
video_data, _ = self._extract_web_data_and_status(webpage_url, aweme_id, fatal=False)
sec_uid = self._parse_aweme_video_web(
video_data, webpage_url, aweme_id, extract_flat=True).get('channel_id')
if sec_uid:
return sec_uid
return None
def _real_extract(self, url):
user_name, sec_uid = self._match_id(url), None
if mobj := re.fullmatch(r'MS4wLjABAAAA[\w-]{64}', user_name):
user_name, sec_uid = None, mobj.group(0)
if re.fullmatch(r'MS4wLjABAAAA[\w-]{64}', user_name):
user_name, sec_uid = None, user_name
fail_early = True
else:
sec_uid = (self._get_sec_uid(self._UPLOADER_URL_FORMAT % user_name, user_name, 'user')
or self._get_sec_uid(self._UPLOADER_URL_FORMAT % f'{user_name}/live', user_name, 'live'))
if not sec_uid:
webpage = self._download_webpage(
f'https://www.tiktok.com/embed/@{user_name}', user_name,
note='Downloading user embed page', fatal=False) or ''
data = traverse_obj(self._search_json(
r'<script[^>]+\bid=[\'"]__FRONTITY_CONNECT_STATE__[\'"][^>]*>',
webpage, 'data', user_name, default={}),
('source', 'data', f'/embed/@{user_name}', {dict}))
for aweme_id in traverse_obj(data, ('videoList', ..., 'id', {str})):
webpage_url = self._create_url(user_name, aweme_id)
video_data, _ = self._extract_web_data_and_status(webpage_url, aweme_id, fatal=False)
sec_uid = self._parse_aweme_video_web(
video_data, webpage_url, aweme_id, extract_flat=True).get('channel_id')
if sec_uid:
break
self._UPLOADER_URL_FORMAT % user_name, user_name,
'Downloading user webpage', 'Unable to download user webpage',
fatal=False, headers={'User-Agent': 'Mozilla/5.0'}) or ''
detail = traverse_obj(
self._get_universal_data(webpage, user_name), ('webapp.user-detail', {dict})) or {}
if detail.get('statusCode') == 10222:
self.raise_login_required(
'This user\'s account is private. Log into an account that has access')
sec_uid = traverse_obj(detail, ('userInfo', 'user', 'secUid', {str}))
if sec_uid:
fail_early = not traverse_obj(detail, ('userInfo', 'itemList', ...))
else:
sec_uid = self._extract_sec_uid_from_embed(user_name)
fail_early = False
if not sec_uid:
raise ExtractorError(
@@ -1051,7 +1090,7 @@ class TikTokUserIE(TikTokBaseIE):
'from a video posted by this user, try using "tiktokuser:channel_id" as the '
'input URL (replacing `channel_id` with its actual value)', expected=True)
return self.playlist_result(self._entries(sec_uid, user_name), sec_uid, user_name)
return self.playlist_result(self._entries(sec_uid, user_name, fail_early), sec_uid, user_name)
class TikTokBaseListIE(TikTokBaseIE): # XXX: Conventionally, base classes should end with BaseIE/InfoExtractor

View File

@@ -1,37 +0,0 @@
from .mtv import MTVServicesInfoExtractor
# TODO: Remove - Reason: not used anymore; service moved to YouTube
class TVLandIE(MTVServicesInfoExtractor):
IE_NAME = 'tvland.com'
_VALID_URL = r'https?://(?:www\.)?tvland\.com/(?:video-clips|(?:full-)?episodes)/(?P<id>[^/?#.]+)'
_FEED_URL = 'http://www.tvland.com/feeds/mrss/'
_TESTS = [{
# Geo-restricted. Without a proxy metadata are still there. With a
# proxy it redirects to http://m.tvland.com/app/
'url': 'https://www.tvland.com/episodes/s04pzf/everybody-loves-raymond-the-dog-season-1-ep-19',
'info_dict': {
'description': 'md5:84928e7a8ad6649371fbf5da5e1ad75a',
'title': 'The Dog',
},
'playlist_mincount': 5,
'skip': '404 Not found',
}, {
'url': 'https://www.tvland.com/video-clips/4n87f2/younger-a-first-look-at-younger-season-6',
'md5': 'e2c6389401cf485df26c79c247b08713',
'info_dict': {
'id': '891f7d3c-5b5b-4753-b879-b7ba1a601757',
'ext': 'mp4',
'title': 'Younger|April 30, 2019|6|NO-EPISODE#|A First Look at Younger Season 6',
'description': 'md5:595ea74578d3a888ae878dfd1c7d4ab2',
'upload_date': '20190430',
'timestamp': 1556658000,
},
'params': {
'skip_download': True,
},
}, {
'url': 'http://www.tvland.com/full-episodes/iu0hz6/younger-a-kiss-is-just-a-kiss-season-3-ep-301',
'only_matching': True,
}]

View File

@@ -1,352 +0,0 @@
import json
import re
from .common import InfoExtractor
from ..networking.exceptions import HTTPError
from ..utils import (
ExtractorError,
int_or_none,
parse_iso8601,
parse_qs,
)
class VevoBaseIE(InfoExtractor):
def _extract_json(self, webpage, video_id):
return self._parse_json(
self._search_regex(
r'window\.__INITIAL_STORE__\s*=\s*({.+?});\s*</script>',
webpage, 'initial store'),
video_id)
class VevoIE(VevoBaseIE):
"""
Accepts urls from vevo.com or in the format 'vevo:{id}'
(currently used by MTVIE and MySpaceIE)
"""
_VALID_URL = r'''(?x)
(?:https?://(?:www\.)?vevo\.com/watch/(?!playlist|genre)(?:[^/]+/(?:[^/]+/)?)?|
https?://cache\.vevo\.com/m/html/embed\.html\?video=|
https?://videoplayer\.vevo\.com/embed/embedded\?videoId=|
https?://embed\.vevo\.com/.*?[?&]isrc=|
https?://tv\.vevo\.com/watch/artist/(?:[^/]+)/|
vevo:)
(?P<id>[^&?#]+)'''
_EMBED_REGEX = [r'<iframe[^>]+?src=(["\'])(?P<url>(?:https?:)?//(?:cache\.)?vevo\.com/.+?)\1']
_TESTS = [{
'url': 'http://www.vevo.com/watch/hurts/somebody-to-die-for/GB1101300280',
'md5': '95ee28ee45e70130e3ab02b0f579ae23',
'info_dict': {
'id': 'GB1101300280',
'ext': 'mp4',
'title': 'Hurts - Somebody to Die For',
'timestamp': 1372057200,
'upload_date': '20130624',
'uploader': 'Hurts',
'track': 'Somebody to Die For',
'artist': 'Hurts',
'genre': 'Pop',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'v3 SMIL format',
'url': 'http://www.vevo.com/watch/cassadee-pope/i-wish-i-could-break-your-heart/USUV71302923',
'md5': 'f6ab09b034f8c22969020b042e5ac7fc',
'info_dict': {
'id': 'USUV71302923',
'ext': 'mp4',
'title': 'Cassadee Pope - I Wish I Could Break Your Heart',
'timestamp': 1392796919,
'upload_date': '20140219',
'uploader': 'Cassadee Pope',
'track': 'I Wish I Could Break Your Heart',
'artist': 'Cassadee Pope',
'genre': 'Country',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'Age-limited video',
'url': 'https://www.vevo.com/watch/justin-timberlake/tunnel-vision-explicit/USRV81300282',
'info_dict': {
'id': 'USRV81300282',
'ext': 'mp4',
'title': 'Justin Timberlake - Tunnel Vision (Explicit)',
'age_limit': 18,
'timestamp': 1372888800,
'upload_date': '20130703',
'uploader': 'Justin Timberlake',
'track': 'Tunnel Vision (Explicit)',
'artist': 'Justin Timberlake',
'genre': 'Pop',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'No video_info',
'url': 'http://www.vevo.com/watch/k-camp-1/Till-I-Die/USUV71503000',
'md5': '8b83cc492d72fc9cf74a02acee7dc1b0',
'info_dict': {
'id': 'USUV71503000',
'ext': 'mp4',
'title': 'K Camp ft. T.I. - Till I Die',
'age_limit': 18,
'timestamp': 1449468000,
'upload_date': '20151207',
'uploader': 'K Camp',
'track': 'Till I Die',
'artist': 'K Camp',
'genre': 'Hip-Hop',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'Featured test',
'url': 'https://www.vevo.com/watch/lemaitre/Wait/USUV71402190',
'md5': 'd28675e5e8805035d949dc5cf161071d',
'info_dict': {
'id': 'USUV71402190',
'ext': 'mp4',
'title': 'Lemaitre ft. LoLo - Wait',
'age_limit': 0,
'timestamp': 1413432000,
'upload_date': '20141016',
'uploader': 'Lemaitre',
'track': 'Wait',
'artist': 'Lemaitre',
'genre': 'Electronic',
},
'expected_warnings': ['Unable to download SMIL file', 'Unable to download info'],
}, {
'note': 'Only available via webpage',
'url': 'http://www.vevo.com/watch/GBUV71600656',
'md5': '67e79210613865b66a47c33baa5e37fe',
'info_dict': {
'id': 'GBUV71600656',
'ext': 'mp4',
'title': 'ABC - Viva Love',
'age_limit': 0,
'timestamp': 1461830400,
'upload_date': '20160428',
'uploader': 'ABC',
'track': 'Viva Love',
'artist': 'ABC',
'genre': 'Pop',
},
'expected_warnings': ['Failed to download video versions info'],
}, {
# no genres available
'url': 'http://www.vevo.com/watch/INS171400764',
'only_matching': True,
}, {
# Another case available only via the webpage; using streams/streamsV3 formats
# Geo-restricted to Netherlands/Germany
'url': 'http://www.vevo.com/watch/boostee/pop-corn-clip-officiel/FR1A91600909',
'only_matching': True,
}, {
'url': 'https://embed.vevo.com/?isrc=USH5V1923499&partnerId=4d61b777-8023-4191-9ede-497ed6c24647&partnerAdCode=',
'only_matching': True,
}, {
'url': 'https://tv.vevo.com/watch/artist/janet-jackson/US0450100550',
'only_matching': True,
}]
_VERSIONS = {
0: 'youtube', # only in AuthenticateVideo videoVersions
1: 'level3',
2: 'akamai',
3: 'level3',
4: 'amazon',
}
def _initialize_api(self, video_id):
webpage = self._download_webpage(
'https://accounts.vevo.com/token', None,
note='Retrieving oauth token',
errnote='Unable to retrieve oauth token',
data=json.dumps({
'client_id': 'SPupX1tvqFEopQ1YS6SS',
'grant_type': 'urn:vevo:params:oauth:grant-type:anonymous',
}).encode(),
headers={
'Content-Type': 'application/json',
})
if re.search(r'(?i)THIS PAGE IS CURRENTLY UNAVAILABLE IN YOUR REGION', webpage):
self.raise_geo_restricted(
f'{self.IE_NAME} said: This page is currently unavailable in your region')
auth_info = self._parse_json(webpage, video_id)
self._api_url_template = self.http_scheme() + '//apiv2.vevo.com/%s?token=' + auth_info['legacy_token']
def _call_api(self, path, *args, **kwargs):
try:
data = self._download_json(self._api_url_template % path, *args, **kwargs)
except ExtractorError as e:
if isinstance(e.cause, HTTPError):
errors = self._parse_json(e.cause.response.read().decode(), None)['errors']
error_message = ', '.join([error['message'] for error in errors])
raise ExtractorError(f'{self.IE_NAME} said: {error_message}', expected=True)
raise
return data
def _real_extract(self, url):
video_id = self._match_id(url)
self._initialize_api(video_id)
video_info = self._call_api(
f'video/{video_id}', video_id, 'Downloading api video info',
'Failed to download video info')
video_versions = self._call_api(
f'video/{video_id}/streams', video_id,
'Downloading video versions info',
'Failed to download video versions info',
fatal=False)
# Some videos are only available via webpage (e.g.
# https://github.com/ytdl-org/youtube-dl/issues/9366)
if not video_versions:
webpage = self._download_webpage(url, video_id)
json_data = self._extract_json(webpage, video_id)
if 'streams' in json_data.get('default', {}):
video_versions = json_data['default']['streams'][video_id][0]
else:
video_versions = [
value
for key, value in json_data['apollo']['data'].items()
if key.startswith(f'{video_id}.streams')]
uploader = None
artist = None
featured_artist = None
artists = video_info.get('artists')
for curr_artist in artists:
if curr_artist.get('role') == 'Featured':
featured_artist = curr_artist['name']
else:
artist = uploader = curr_artist['name']
formats = []
for video_version in video_versions:
version = self._VERSIONS.get(video_version.get('version'), 'generic')
version_url = video_version.get('url')
if not version_url:
continue
if '.ism' in version_url:
continue
elif '.mpd' in version_url:
formats.extend(self._extract_mpd_formats(
version_url, video_id, mpd_id=f'dash-{version}',
note=f'Downloading {version} MPD information',
errnote=f'Failed to download {version} MPD information',
fatal=False))
elif '.m3u8' in version_url:
formats.extend(self._extract_m3u8_formats(
version_url, video_id, 'mp4', 'm3u8_native',
m3u8_id=f'hls-{version}',
note=f'Downloading {version} m3u8 information',
errnote=f'Failed to download {version} m3u8 information',
fatal=False))
else:
m = re.search(r'''(?xi)
_(?P<quality>[a-z0-9]+)
_(?P<width>[0-9]+)x(?P<height>[0-9]+)
_(?P<vcodec>[a-z0-9]+)
_(?P<vbr>[0-9]+)
_(?P<acodec>[a-z0-9]+)
_(?P<abr>[0-9]+)
\.(?P<ext>[a-z0-9]+)''', version_url)
if not m:
continue
formats.append({
'url': version_url,
'format_id': f'http-{version}-{video_version.get("quality") or m.group("quality")}',
'vcodec': m.group('vcodec'),
'acodec': m.group('acodec'),
'vbr': int(m.group('vbr')),
'abr': int(m.group('abr')),
'ext': m.group('ext'),
'width': int(m.group('width')),
'height': int(m.group('height')),
})
track = video_info['title']
if featured_artist:
artist = f'{artist} ft. {featured_artist}'
title = f'{artist} - {track}' if artist else track
genres = video_info.get('genres')
genre = (
genres[0] if genres and isinstance(genres, list)
and isinstance(genres[0], str) else None)
is_explicit = video_info.get('isExplicit')
if is_explicit is True:
age_limit = 18
elif is_explicit is False:
age_limit = 0
else:
age_limit = None
return {
'id': video_id,
'title': title,
'formats': formats,
'thumbnail': video_info.get('imageUrl') or video_info.get('thumbnailUrl'),
'timestamp': parse_iso8601(video_info.get('releaseDate')),
'uploader': uploader,
'duration': int_or_none(video_info.get('duration')),
'view_count': int_or_none(video_info.get('views', {}).get('total')),
'age_limit': age_limit,
'track': track,
'artist': uploader,
'genre': genre,
}
class VevoPlaylistIE(VevoBaseIE):
_VALID_URL = r'https?://(?:www\.)?vevo\.com/watch/(?P<kind>playlist|genre)/(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.vevo.com/watch/genre/rock',
'info_dict': {
'id': 'rock',
'title': 'Rock',
},
'playlist_count': 20,
}, {
'url': 'http://www.vevo.com/watch/genre/rock?index=0',
'only_matching': True,
}]
def _real_extract(self, url):
mobj = self._match_valid_url(url)
playlist_id = mobj.group('id')
playlist_kind = mobj.group('kind')
webpage = self._download_webpage(url, playlist_id)
qs = parse_qs(url)
index = qs.get('index', [None])[0]
if index:
video_id = self._search_regex(
r'<meta[^>]+content=(["\'])vevo://video/(?P<id>.+?)\1[^>]*>',
webpage, 'video id', default=None, group='id')
if video_id:
return self.url_result(f'vevo:{video_id}', VevoIE.ie_key())
playlists = self._extract_json(webpage, playlist_id)['default'][f'{playlist_kind}s']
playlist = (next(iter(playlists.values()))
if playlist_kind == 'playlist' else playlists[playlist_id])
entries = [
self.url_result(f'vevo:{src}', VevoIE.ie_key())
for src in playlist['isrcs']]
return self.playlist_result(
entries, playlist.get('playlistId') or playlist_id,
playlist.get('name'), playlist.get('description'))

View File

@@ -1,33 +1,50 @@
from .mtv import MTVServicesInfoExtractor
# TODO: Remove - Reason: Outdated Site
from .mtv import MTVServicesBaseIE
class VH1IE(MTVServicesInfoExtractor):
class VH1IE(MTVServicesBaseIE):
IE_NAME = 'vh1.com'
_FEED_URL = 'http://www.vh1.com/feeds/mrss/'
_VALID_URL = r'https?://(?:www\.)?vh1\.com/(?:video-clips|episodes)/(?P<id>[\da-z]{6})'
_TESTS = [{
'url': 'https://www.vh1.com/episodes/0aqivv/nick-cannon-presents-wild-n-out-foushee-season-16-ep-12',
'url': 'https://www.vh1.com/episodes/d06ta1/barely-famous-barely-famous-season-1-ep-1',
'info_dict': {
'title': 'Fousheé',
'description': 'Fousheé joins Team Evolutions fight against Nick and Team Revolution in Baby Daddy, Baby Mama; Kick Em Out the Classroom; Backseat of My Ride and Wildstyle; and Fousheé performs.',
},
'playlist_mincount': 4,
'skip': '404 Not found',
}, {
# Clip
'url': 'https://www.vh1.com/video-clips/e0sja0/nick-cannon-presents-wild-n-out-foushee-clap-for-him',
'info_dict': {
'id': 'a07563f7-a37b-4e7f-af68-85855c2c7cc3',
'id': '4af4cf2c-a854-11e4-9596-0026b9414f30',
'ext': 'mp4',
'title': 'Fousheé - "clap for him"',
'description': 'Singer Fousheé hits the Wild N Out: In the Dark stage with a performance of the tongue-in-cheek track "clap for him" from her 2021 album "time machine."',
'upload_date': '20210826',
'display_id': 'd06ta1',
'title': 'Barely Famous',
'description': 'md5:6da5c9d88012eba0a80fc731c99b5fed',
'channel': 'VH1',
'duration': 1280.0,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'Barely Famous',
'season': 'Season 1',
'season_number': 1,
'episode': 'Episode 1',
'episode_number': 1,
'timestamp': 1426680000,
'upload_date': '20150318',
'release_timestamp': 1426680000,
'release_date': '20150318',
},
'params': {
# m3u8 download
'skip_download': True,
'params': {'skip_download': 'm3u8'},
}, {
'url': 'https://www.vh1.com/video-clips/ryzt2n/love-hip-hop-miami-love-hip-hop-miami-season-5-recap',
'info_dict': {
'id': '59e62974-4a5c-4417-91c3-5044cb2f4ce2',
'ext': 'mp4',
'display_id': 'ryzt2n',
'title': 'Love & Hip Hop Miami - Season 5 Recap',
'description': 'md5:4e49c65d0007bfc8d06db555a6b76ef0',
'duration': 792.083,
'thumbnail': r're:https://images\.paramount\.tech/uri/mgid:arc:imageassetref:',
'series': 'Love & Hip Hop Miami',
'season': 'Season 6',
'season_number': 6,
'episode': 'Episode 0',
'episode_number': 0,
'timestamp': 1732597200,
'upload_date': '20241126',
'release_timestamp': 1732597200,
'release_date': '20241126',
},
'params': {'skip_download': 'm3u8'},
}]
_VALID_URL = r'https?://(?:www\.)?vh1\.com/(?:video-clips|episodes)/(?P<id>[^/?#.]+)'

View File

@@ -28,7 +28,6 @@ from ..utils import (
qualities,
smuggle_url,
str_or_none,
traverse_obj,
try_call,
try_get,
unified_timestamp,
@@ -39,6 +38,7 @@ from ..utils import (
urlhandle_detect_ext,
urljoin,
)
from ..utils.traversal import require, traverse_obj
class VimeoBaseInfoExtractor(InfoExtractor):
@@ -117,13 +117,13 @@ class VimeoBaseInfoExtractor(InfoExtractor):
def _jwt_is_expired(self, token):
return jwt_decode_hs256(token)['exp'] - time.time() < 120
def _fetch_viewer_info(self, display_id=None, fatal=True):
def _fetch_viewer_info(self, display_id=None):
if self._viewer_info and not self._jwt_is_expired(self._viewer_info['jwt']):
return self._viewer_info
self._viewer_info = self._download_json(
'https://vimeo.com/_next/viewer', display_id, 'Downloading web token info',
'Failed to download web token info', fatal=fatal, headers={'Accept': 'application/json'})
'Failed to download web token info', headers={'Accept': 'application/json'})
return self._viewer_info
@@ -502,6 +502,43 @@ class VimeoBaseInfoExtractor(InfoExtractor):
'quality': 1,
}
@staticmethod
def _get_embed_params(is_embed, referer):
return {
'is_embed': 'true' if is_embed else 'false',
'referrer': urllib.parse.urlparse(referer).hostname if referer and is_embed else '',
}
def _get_album_data_and_hashed_pass(self, album_id, is_embed, referer):
viewer = self._fetch_viewer_info(album_id)
jwt = viewer['jwt']
album = self._download_json(
'https://api.vimeo.com/albums/' + album_id,
album_id, headers={'Authorization': 'jwt ' + jwt, 'Accept': 'application/json'},
query={**self._get_embed_params(is_embed, referer), 'fields': 'description,name,privacy'})
hashed_pass = None
if traverse_obj(album, ('privacy', 'view')) == 'password':
password = self.get_param('videopassword')
if not password:
raise ExtractorError(
'This album is protected by a password, use the --video-password option',
expected=True)
try:
hashed_pass = self._download_json(
f'https://vimeo.com/showcase/{album_id}/auth',
album_id, 'Verifying the password', data=urlencode_postdata({
'password': password,
'token': viewer['xsrft'],
}), headers={
'X-Requested-With': 'XMLHttpRequest',
})['hashed_pass']
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
raise ExtractorError('Wrong password', expected=True)
raise
return album, hashed_pass
class VimeoIE(VimeoBaseInfoExtractor):
"""Information extractor for vimeo.com."""
@@ -1188,42 +1225,6 @@ class VimeoIE(VimeoBaseInfoExtractor):
info[k + '_count'] = int_or_none(try_get(connections, lambda x: x[k + 's']['total']))
return info
def _try_album_password(self, url):
album_id = self._search_regex(
r'vimeo\.com/(?:album|showcase)/([^/]+)', url, 'album id', default=None)
if not album_id:
return
viewer = self._fetch_viewer_info(album_id, fatal=False)
if not viewer:
webpage = self._download_webpage(url, album_id)
viewer = self._parse_json(self._search_regex(
r'bootstrap_data\s*=\s*({.+?})</script>',
webpage, 'bootstrap data'), album_id)['viewer']
jwt = viewer['jwt']
album = self._download_json(
'https://api.vimeo.com/albums/' + album_id,
album_id, headers={'Authorization': 'jwt ' + jwt, 'Accept': 'application/json'},
query={'fields': 'description,name,privacy'})
if try_get(album, lambda x: x['privacy']['view']) == 'password':
password = self.get_param('videopassword')
if not password:
raise ExtractorError(
'This album is protected by a password, use the --video-password option',
expected=True)
try:
self._download_json(
f'https://vimeo.com/showcase/{album_id}/auth',
album_id, 'Verifying the password', data=urlencode_postdata({
'password': password,
'token': viewer['xsrft'],
}), headers={
'X-Requested-With': 'XMLHttpRequest',
})
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
raise ExtractorError('Wrong password', expected=True)
raise
def _real_extract(self, url):
url, data, headers = self._unsmuggle_headers(url)
if 'Referer' not in headers:
@@ -1238,8 +1239,14 @@ class VimeoIE(VimeoBaseInfoExtractor):
if any(p in url for p in ('play_redirect_hls', 'moogaloop.swf')):
url = 'https://vimeo.com/' + video_id
self._try_album_password(url)
is_secure = urllib.parse.urlparse(url).scheme == 'https'
album_id = self._search_regex(
r'vimeo\.com/(?:album|showcase)/([0-9]+)/', url, 'album id', default=None)
if album_id:
# Detect password-protected showcase video => POST album password => set cookies
self._get_album_data_and_hashed_pass(album_id, False, None)
parsed_url = urllib.parse.urlparse(url)
is_secure = parsed_url.scheme == 'https'
try:
# Retrieve video webpage to extract further information
webpage, urlh = self._download_webpage_handle(
@@ -1265,7 +1272,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
f'{self._downloader._format_err("compromising your security/cookies", "light red")}, '
f'try replacing "https:" with "http:" in the input URL. {dcip_msg}.', expected=True)
if '://player.vimeo.com/video/' in url:
if parsed_url.hostname == 'player.vimeo.com':
config = self._search_json(
r'\b(?:playerC|c)onfig\s*=', webpage, 'info section', video_id)
if config.get('view') == 4:
@@ -1293,7 +1300,7 @@ class VimeoIE(VimeoBaseInfoExtractor):
config_url = None
channel_id = self._search_regex(
r'vimeo\.com/channels/([^/]+)', url, 'channel id', default=None)
r'vimeo\.com/channels/([^/?#]+)', url, 'channel id', default=None)
if channel_id:
config_url = self._extract_config_url(webpage, default=None)
video_description = clean_html(get_element_by_class('description', webpage))
@@ -1531,7 +1538,7 @@ class VimeoUserIE(VimeoChannelIE): # XXX: Do not subclass from concrete IE
class VimeoAlbumIE(VimeoBaseInfoExtractor):
IE_NAME = 'vimeo:album'
_VALID_URL = r'https://vimeo\.com/(?:album|showcase)/(?P<id>\d+)(?:$|[?#]|/(?!video))'
_VALID_URL = r'https://vimeo\.com/(?:album|showcase)/(?P<id>[^/?#]+)(?:$|[?#]|(?P<is_embed>/embed))'
_TITLE_RE = r'<header id="page_header">\n\s*<h1>(.*?)</h1>'
_TESTS = [{
'url': 'https://vimeo.com/album/2632481',
@@ -1549,12 +1556,63 @@ class VimeoAlbumIE(VimeoBaseInfoExtractor):
},
'playlist_count': 1,
'params': {'videopassword': 'youtube-dl'},
}, {
'note': 'embedded album that requires "referrer" in query (smuggled)',
'url': 'https://vimeo.com/showcase/10677689/embed#__youtubedl_smuggle=%7B%22referer%22%3A+%22https%3A%2F%2Fwww.riccardomutimusic.com%2F%22%7D',
'info_dict': {
'title': 'La Traviata - la serie completa',
'id': '10677689',
},
'playlist': [{
'url': 'https://player.vimeo.com/video/505682113#__youtubedl_smuggle=%7B%22referer%22%3A+%22https%3A%2F%2Fwww.riccardomutimusic.com%2F%22%7D',
'info_dict': {
'id': '505682113',
'ext': 'mp4',
'title': 'La Traviata - Episodio 7',
'uploader': 'RMMusic',
'uploader_id': 'user62556494',
'uploader_url': 'https://vimeo.com/user62556494',
'duration': 3202,
'thumbnail': r're:https?://i\.vimeocdn\.com/video/.+',
},
}],
'params': {
'playlist_items': '1',
'skip_download': 'm3u8',
},
'expected_warnings': ['Failed to parse XML: not well-formed'],
}, {
'note': 'embedded album that requires "referrer" in query (passed as param)',
'url': 'https://vimeo.com/showcase/10677689/embed',
'info_dict': {
'title': 'La Traviata - la serie completa',
'id': '10677689',
},
'playlist_mincount': 9,
'params': {'http_headers': {'Referer': 'https://www.riccardomutimusic.com/'}},
}, {
'url': 'https://vimeo.com/showcase/11803104/embed2',
'info_dict': {
'title': 'Romans Video Ministry',
'id': '11803104',
},
'playlist_mincount': 41,
}, {
'note': 'non-numeric slug, need to fetch numeric album ID',
'url': 'https://vimeo.com/showcase/BethelTally-Homegoing-Services',
'info_dict': {
'title': 'BethelTally Homegoing Services',
'id': '11547429',
'description': 'Bethel Missionary Baptist Church\nTallahassee, FL',
},
'playlist_mincount': 8,
}]
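# How the smuggled referrer in the embed tests above is encoded -- a minimal sketch
# assuming it mirrors yt_dlp.utils.smuggle_url (stdlib only; not the real helper):
import json
import urllib.parse

def smuggle_referrer_sketch(url, referer):
    payload = urllib.parse.urlencode({'__youtubedl_smuggle': json.dumps({'referer': referer})})
    return f'{url}#{payload}'

# smuggle_referrer_sketch('https://player.vimeo.com/video/505682113',
#                         'https://www.riccardomutimusic.com/')
# == 'https://player.vimeo.com/video/505682113#__youtubedl_smuggle=%7B%22referer%22%3A+%22https%3A%2F%2Fwww.riccardomutimusic.com%2F%22%7D'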
_PAGE_SIZE = 100
def _fetch_page(self, album_id, authorization, hashed_pass, page):
def _fetch_page(self, album_id, hashed_pass, is_embed, referer, page):
api_page = page + 1
query = {
**self._get_embed_params(is_embed, referer),
'fields': 'link,uri',
'page': api_page,
'per_page': self._PAGE_SIZE,
@@ -1565,7 +1623,7 @@ class VimeoAlbumIE(VimeoBaseInfoExtractor):
videos = self._download_json(
f'https://api.vimeo.com/albums/{album_id}/videos',
album_id, f'Downloading page {api_page}', query=query, headers={
'Authorization': 'jwt ' + authorization,
'Authorization': 'jwt ' + self._fetch_viewer_info(album_id)['jwt'],
'Accept': 'application/json',
})['data']
except ExtractorError as e:
@@ -1577,44 +1635,37 @@ class VimeoAlbumIE(VimeoBaseInfoExtractor):
if not link:
continue
uri = video.get('uri')
video_id = self._search_regex(r'/videos/(\d+)', uri, 'video_id', default=None) if uri else None
video_id = self._search_regex(r'/videos/(\d+)', uri, 'id', default=None) if uri else None
if is_embed:
if not video_id:
self.report_warning(f'Skipping due to missing video ID: {link}')
continue
link = f'https://player.vimeo.com/video/{video_id}'
if referer:
link = self._smuggle_referrer(link, referer)
yield self.url_result(link, VimeoIE.ie_key(), video_id)
def _real_extract(self, url):
album_id = self._match_id(url)
viewer = self._fetch_viewer_info(album_id, fatal=False)
if not viewer:
webpage = self._download_webpage(url, album_id)
viewer = self._parse_json(self._search_regex(
r'bootstrap_data\s*=\s*({.+?})</script>',
webpage, 'bootstrap data'), album_id)['viewer']
jwt = viewer['jwt']
album = self._download_json(
'https://api.vimeo.com/albums/' + album_id,
album_id, headers={'Authorization': 'jwt ' + jwt, 'Accept': 'application/json'},
query={'fields': 'description,name,privacy'})
hashed_pass = None
if try_get(album, lambda x: x['privacy']['view']) == 'password':
password = self.get_param('videopassword')
if not password:
raise ExtractorError(
'This album is protected by a password, use the --video-password option',
expected=True)
try:
hashed_pass = self._download_json(
f'https://vimeo.com/showcase/{album_id}/auth',
album_id, 'Verifying the password', data=urlencode_postdata({
'password': password,
'token': viewer['xsrft'],
}), headers={
'X-Requested-With': 'XMLHttpRequest',
})['hashed_pass']
except ExtractorError as e:
if isinstance(e.cause, HTTPError) and e.cause.status == 401:
raise ExtractorError('Wrong password', expected=True)
raise
url, _, http_headers = self._unsmuggle_headers(url)
album_id, is_embed = self._match_valid_url(url).group('id', 'is_embed')
referer = http_headers.get('Referer')
if not re.fullmatch(r'[0-9]+', album_id):
auth_info = self._download_json(
f'https://vimeo.com/showcase/{album_id}/auth', album_id, 'Downloading album info',
headers={'X-Requested-With': 'XMLHttpRequest'}, expected_status=(401, 403))
album_id = traverse_obj(auth_info, (
'metadata', 'id', {int}, {str_or_none}, {require('album ID')}))
try:
album, hashed_pass = self._get_album_data_and_hashed_pass(album_id, is_embed, referer)
except ExtractorError as e:
if is_embed and not referer and isinstance(e.cause, HTTPError) and e.cause.status == 403:
raise ExtractorError(self._REFERER_HINT, expected=True)
raise
entries = OnDemandPagedList(functools.partial(
self._fetch_page, album_id, jwt, hashed_pass), self._PAGE_SIZE)
self._fetch_page, album_id, hashed_pass, is_embed, referer), self._PAGE_SIZE)
return self.playlist_result(
entries, album_id, album.get('name'), album.get('description'))
@@ -1910,10 +1961,10 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
'like_count': int,
'duration': 9810,
'thumbnail': r're:https?://i\.vimeocdn\.com/video/.+',
'timestamp': 1747502974,
'upload_date': '20250517',
'release_timestamp': 1747502998,
'release_date': '20250517',
'timestamp': 1746627359,
'upload_date': '20250507',
'release_timestamp': 1746627359,
'release_date': '20250507',
'live_status': 'was_live',
},
'params': {'skip_download': 'm3u8'},
@@ -1970,7 +2021,7 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
# "24/7" livestream
'url': 'https://vimeo.com/event/4768062',
'info_dict': {
'id': '1097650937',
'id': '1108792268',
'ext': 'mp4',
'display_id': '4768062',
'title': r're:GRACELAND CAM \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
@@ -1978,8 +2029,8 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
'uploader': 'Elvis Presley\'s Graceland',
'uploader_id': 'visitgraceland',
'uploader_url': 'https://vimeo.com/visitgraceland',
'release_timestamp': 1751396691,
'release_date': '20250701',
'release_timestamp': 1754812109,
'release_date': '20250810',
'live_status': 'is_live',
},
'params': {'skip_download': 'livestream'},
@@ -2023,10 +2074,10 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
'like_count': int,
'duration': 4466,
'thumbnail': r're:https?://i\.vimeocdn\.com/video/.+',
'timestamp': 1612228466,
'upload_date': '20210202',
'release_timestamp': 1612228538,
'release_date': '20210202',
'timestamp': 1610059450,
'upload_date': '20210107',
'release_timestamp': 1610059450,
'release_date': '20210107',
'live_status': 'was_live',
},
'params': {'skip_download': 'm3u8'},
@@ -2214,7 +2265,7 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
video_filter = lambda _, v: self._extract_video_id_and_unlisted_hash(v)[0] == video_id
else:
video_filter = lambda _, v: v['live']['status'] in ('streaming', 'done')
video_filter = lambda _, v: traverse_obj(v, ('live', 'status')) != 'unavailable'
for page in itertools.count(1):
videos_data = self._call_events_api(
@@ -2226,15 +2277,18 @@ class VimeoEventIE(VimeoBaseInfoExtractor):
if video or not traverse_obj(videos_data, ('paging', 'next', {str})):
break
if not video: # requested video_id is unavailable or no videos are available
raise ExtractorError('This event video is unavailable', expected=True)
live_status = {
'streaming': 'is_live',
'done': 'was_live',
None: 'was_live',
}.get(traverse_obj(video, ('live', 'status', {str})))
if not live_status: # requested video_id is unavailable or no videos are available
raise ExtractorError('This event video is unavailable', expected=True)
elif live_status == 'was_live':
if live_status == 'was_live':
return self._vimeo_url_result(*self._extract_video_id_and_unlisted_hash(video), event_id)
config_url = video['config_url']
if config_url: # view_policy == 'embed_only' or live_status == 'is_live'

View File

@@ -5,7 +5,6 @@ import re
from .common import InfoExtractor
from .dailymotion import DailymotionIE
from .odnoklassniki import OdnoklassnikiIE
from .pladform import PladformIE
from .sibnet import SibnetEmbedIE
from .vimeo import VimeoIE
from .youtube import YoutubeIE
@@ -334,11 +333,6 @@ class VKIE(VKBaseIE):
'url': 'https://vk.com/video205387401_164765225',
'only_matching': True,
},
{
# pladform embed
'url': 'https://vk.com/video-76116461_171554880',
'only_matching': True,
},
{
'url': 'http://new.vk.com/video205387401_165548505',
'only_matching': True,
@@ -456,10 +450,6 @@ class VKIE(VKBaseIE):
if vimeo_url is not None:
return self.url_result(vimeo_url, VimeoIE.ie_key())
pladform_url = PladformIE._extract_url(info_page)
if pladform_url:
return self.url_result(pladform_url, PladformIE.ie_key())
m_rutube = re.search(
r'\ssrc="((?:https?:)?//rutube\.ru\\?/(?:video|play)\\?/embed(?:.*?))\\?"', info_page)
if m_rutube is not None:

View File

@@ -14,7 +14,7 @@ from ..utils import (
get_element_html_by_class,
int_or_none,
jwt_decode_hs256,
jwt_encode_hs256,
jwt_encode,
make_archive_id,
merge_dicts,
parse_age_limit,
@@ -98,9 +98,9 @@ class VRTBaseIE(InfoExtractor):
'Content-Type': 'application/json',
}, data=json.dumps({
'identityToken': id_token or '',
'playerInfo': jwt_encode_hs256(player_info, self._JWT_SIGNING_KEY, headers={
'playerInfo': jwt_encode(player_info, self._JWT_SIGNING_KEY, headers={
'kid': self._JWT_KEY_ID,
}).decode(),
}),
}, separators=(',', ':')).encode())['vrtPlayerToken']
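# Hedged sketch of what the jwt_encode call above produces -- an HS256 token with an
# extra 'kid' header. Illustration only (the HS256 default is an assumption); see
# yt_dlp.utils.jwt_encode for the real helper. Names below are local to this sketch:
import base64
import hashlib
import hmac
import json

def _b64url(data):
    return base64.urlsafe_b64encode(data).decode().rstrip('=')

def jwt_encode_sketch(payload, key, headers=None):
    header = {'alg': 'HS256', 'typ': 'JWT', **(headers or {})}
    signing_input = '.'.join(
        _b64url(json.dumps(part, separators=(',', ':')).encode())
        for part in (header, payload))
    signature = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f'{signing_input}.{_b64url(signature)}'

# e.g. jwt_encode_sketch({'exp': 1750000000}, b'secret', headers={'kid': 'example-kid'})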
return self._download_json(

View File

@@ -1,60 +1,42 @@
from .common import InfoExtractor
from ..utils import (
int_or_none,
parse_iso8601,
try_get,
)
from .medialaan import MedialaanBaseIE
from ..utils import str_or_none
from ..utils.traversal import require, traverse_obj
class VTMIE(InfoExtractor):
_WORKING = False
_VALID_URL = r'https?://(?:www\.)?vtm\.be/([^/?&#]+)~v(?P<id>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})'
_TEST = {
class VTMIE(MedialaanBaseIE):
_VALID_URL = r'https?://(?:www\.)?vtm\.be/[^/?#]+~v(?P<id>[\da-f]{8}(?:-[\da-f]{4}){3}-[\da-f]{12})'
_TESTS = [{
'url': 'https://vtm.be/gast-vernielt-genkse-hotelkamer~ve7534523-279f-4b4d-a5c9-a33ffdbe23e1',
'md5': '37dca85fbc3a33f2de28ceb834b071f8',
'info_dict': {
'id': '192445',
'ext': 'mp4',
'title': 'Gast vernielt Genkse hotelkamer',
'timestamp': 1611060180,
'upload_date': '20210119',
'channel': 'VTM',
'channel_id': '867',
'description': 'md5:75fce957d219646ff1b65ba449ab97b5',
'duration': 74,
# TODO: fix url _type result processing
# 'series': 'Op Interventie',
'genres': ['Documentaries'],
'release_date': '20210119',
'release_timestamp': 1611060180,
'series': 'Op Interventie',
'series_id': '2658',
'tags': 'count:2',
'thumbnail': r're:https?://images\.mychannels\.video/imgix/.+\.(?:jpe?g|png)',
'uploader': 'VTM',
'uploader_id': '74',
},
}
}]
def _real_initialize(self):
if not self._get_cookies('https://vtm.be/').get('authId'):
self.raise_login_required()
def _real_extract(self, url):
uuid = self._match_id(url)
video = self._download_json(
'https://omc4vm23offuhaxx6hekxtzspi.appsync-api.eu-west-1.amazonaws.com/graphql',
uuid, query={
'query': '''{
getComponent(type: Video, uuid: "%s") {
... on Video {
description
duration
myChannelsVideo
program {
title
}
publishedAt
title
}
}
}''' % uuid, # noqa: UP031
}, headers={
'x-api-key': 'da2-lz2cab4tfnah3mve6wiye4n77e',
})['data']['getComponent']
video_id = self._match_id(url)
webpage = self._download_webpage(url, video_id)
apollo_state = self._search_json(
r'window\.__APOLLO_STATE__\s*=', webpage, 'apollo state', video_id)
mychannels_id = traverse_obj(apollo_state, (
f'Video:{{"uuid":"{video_id}"}}', 'myChannelsVideo', {str_or_none}, {require('mychannels ID')}))
return {
'_type': 'url',
'id': uuid,
'title': video.get('title'),
'url': 'http://mychannels.video/embed/%d' % video['myChannelsVideo'],
'description': video.get('description'),
'timestamp': parse_iso8601(video.get('publishedAt')),
'duration': int_or_none(video.get('duration')),
'series': try_get(video, lambda x: x['program']['title']),
'ie_key': 'Medialaan',
}
return self._extract_from_mychannels_api(mychannels_id)
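# Hypothetical __APOLLO_STATE__ fragment matching the lookup above -- the key format and
# the 'myChannelsVideo' field come from the traversal; the uuid and numeric ID are taken
# from the test at the top of this file, and '__typename' is only an assumed filler field:
example_apollo_state = {
    'Video:{"uuid":"e7534523-279f-4b4d-a5c9-a33ffdbe23e1"}': {
        '__typename': 'Video',        # assumed field, typical of Apollo caches
        'myChannelsVideo': 192445,
    },
}
# -> mychannels_id == '192445', which _extract_from_mychannels_api then resolves.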

View File

@@ -8,6 +8,7 @@ from ..utils import (
int_or_none,
make_archive_id,
mimetype2ext,
parse_qs,
parse_resolution,
str_or_none,
strip_jsonp,
@@ -52,13 +53,16 @@ class WeiboBaseIE(InfoExtractor):
'_rand': random.random(),
})
def _weibo_download_json(self, url, video_id, *args, fatal=True, note='Downloading JSON metadata', **kwargs):
# XXX: Always fatal; _download_webpage_handle only returns False (not a tuple) on error
webpage, urlh = self._download_webpage_handle(url, video_id, *args, fatal=fatal, note=note, **kwargs)
def _weibo_download_json(self, url, video_id, note='Downloading JSON metadata', data=None, headers=None, query=None):
headers = {
'Referer': 'https://weibo.com/',
**(headers or {}),
}
webpage, urlh = self._download_webpage_handle(url, video_id, note=note, data=data, headers=headers, query=query)
if urllib.parse.urlparse(urlh.url).netloc == 'passport.weibo.com':
self._update_visitor_cookies(urlh.url, video_id)
webpage = self._download_webpage(url, video_id, *args, fatal=fatal, note=note, **kwargs)
return self._parse_json(webpage, video_id, fatal=fatal)
webpage = self._download_webpage(url, video_id, note=note, data=data, headers=headers, query=query)
return self._parse_json(webpage, video_id)
def _extract_formats(self, video_info):
media_info = traverse_obj(video_info, ('page_info', 'media_info'))
@@ -189,7 +193,8 @@ class WeiboIE(WeiboBaseIE):
def _real_extract(self, url):
video_id = self._match_id(url)
meta = self._weibo_download_json(f'https://weibo.com/ajax/statuses/show?id={video_id}', video_id)
meta = self._weibo_download_json(
'https://weibo.com/ajax/statuses/show', video_id, query={'id': video_id})
mix_media_info = traverse_obj(meta, ('mix_media_info', 'items', ...))
if not mix_media_info:
return self._parse_video_info(meta)
@@ -205,7 +210,11 @@ class WeiboIE(WeiboBaseIE):
class WeiboVideoIE(WeiboBaseIE):
_VALID_URL = r'https?://(?:www\.)?weibo\.com/tv/show/(?P<id>\d+:\d+)'
_VIDEO_ID_RE = r'\d+:(?:[\da-f]{32}|\d{16,})'
_VALID_URL = [
fr'https?://(?:www\.)?weibo\.com/tv/show/(?P<id>{_VIDEO_ID_RE})',
fr'https?://video\.weibo\.com/show/?\?(?:[^#]+&)?fid=(?P<id>{_VIDEO_ID_RE})',
]
_TESTS = [{
'url': 'https://weibo.com/tv/show/1034:4797699866951785?from=old_pc_videoshow',
'info_dict': {
@@ -227,6 +236,49 @@ class WeiboVideoIE(WeiboBaseIE):
'repost_count': int,
'_old_archive_ids': ['weibomobile 4797700463137878'],
},
}, {
'url': 'https://weibo.com/tv/show/1034:633c288cc043d0ca7808030f1157da64',
'info_dict': {
'id': '4189191225395228',
'ext': 'mp4',
'display_id': 'FBqgOmDxO',
'title': '柴犬柴犬的秒拍视频',
'alt_title': '柴犬柴犬的秒拍视频',
'description': '午睡当然是要甜甜蜜蜜的啦![坏笑] Instagramshibainu.gaku http://t.cn/RHbmjzW \u200B\u200B\u200B',
'uploader': '柴犬柴犬',
'uploader_id': '5926682210',
'uploader_url': 'https://weibo.com/u/5926682210',
'view_count': int,
'like_count': int,
'repost_count': int,
'duration': 53,
'thumbnail': 'https://wx1.sinaimg.cn/large/006t5KMygy1fmu31fsqbej30hs0hstav.jpg',
'timestamp': 1514264429,
'upload_date': '20171226',
'_old_archive_ids': ['weibomobile 4189191225395228'],
},
}, {
'url': 'https://video.weibo.com/show?fid=1034:4967272104787984',
'info_dict': {
'id': '4967273022359838',
'ext': 'mp4',
'display_id': 'Nse4S9TTU',
'title': '#张婧仪[超话]#📸#婧仪的相册集#  早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'alt_title': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'description': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手] http://t.cn/A6WTpbEu \u200B\u200B\u200B',
'uploader': '张婧仪工作室',
'uploader_id': '7610808848',
'uploader_url': 'https://weibo.com/u/7610808848',
'view_count': int,
'like_count': int,
'repost_count': int,
'duration': 85,
'thumbnail': 'https://wx2.sinaimg.cn/orj480/008j4b3qly1hjsce01gnqj30u00gvwf8.jpg',
'tags': ['婧仪的相册集'],
'timestamp': 1699773545,
'upload_date': '20231112',
'_old_archive_ids': ['weibomobile 4967273022359838'],
},
}]
def _real_extract(self, url):
@@ -234,8 +286,8 @@ class WeiboVideoIE(WeiboBaseIE):
post_data = f'data={{"Component_Play_Playinfo":{{"oid":"{video_id}"}}}}'.encode()
video_info = self._weibo_download_json(
f'https://weibo.com/tv/api/component?page=%2Ftv%2Fshow%2F{video_id.replace(":", "%3A")}',
video_id, headers={'Referer': url}, data=post_data)['data']['Component_Play_Playinfo']
'https://weibo.com/tv/api/component', video_id, data=post_data, headers={'Referer': url},
query={'page': f'/tv/show/{video_id}'})['data']['Component_Play_Playinfo']
return self.url_result(f'https://weibo.com/0/{video_info["mid"]}', WeiboIE)
@@ -250,6 +302,38 @@ class WeiboUserIE(WeiboBaseIE):
'uploader': '萧影殿下',
},
'playlist_mincount': 195,
}, {
'url': 'https://weibo.com/u/7610808848?tabtype=newVideo&layerid=4967273022359838',
'info_dict': {
'id': '7610808848',
'title': '张婧仪工作室的视频',
'description': '张婧仪工作室的全部视频',
'uploader': '张婧仪工作室',
},
'playlist_mincount': 61,
}, {
'url': 'https://weibo.com/u/7610808848?tabtype=newVideo&layerid=4967273022359838',
'info_dict': {
'id': '4967273022359838',
'ext': 'mp4',
'display_id': 'Nse4S9TTU',
'title': '#张婧仪[超话]#📸#婧仪的相册集#  早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'alt_title': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手]',
'description': '#张婧仪[超话]#📸#婧仪的相册集# \n早收工的一天,小张@张婧仪 变身可可爱爱小导游来次说走就走的泉州City Walk[举手] http://t.cn/A6WTpbEu \u200B\u200B\u200B',
'uploader': '张婧仪工作室',
'uploader_id': '7610808848',
'uploader_url': 'https://weibo.com/u/7610808848',
'view_count': int,
'like_count': int,
'repost_count': int,
'duration': 85,
'thumbnail': 'https://wx2.sinaimg.cn/orj480/008j4b3qly1hjsce01gnqj30u00gvwf8.jpg',
'tags': ['婧仪的相册集'],
'timestamp': 1699773545,
'upload_date': '20231112',
'_old_archive_ids': ['weibomobile 4967273022359838'],
},
'params': {'noplaylist': True},
}]
def _fetch_page(self, uid, cursor=0, page=1):
@@ -270,6 +354,11 @@ class WeiboUserIE(WeiboBaseIE):
def _real_extract(self, url):
uid = self._match_id(url)
params = {k: v[-1] for k, v in parse_qs(url).items()}
video_id = params.get('layerid') if params.get('tabtype') == 'newVideo' else None
if not self._yes_playlist(uid, video_id):
return self.url_result(f'https://weibo.com/{uid}/{video_id}', WeiboIE, video_id)
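# Stand-alone illustration of the query handling above (stdlib only, using the layerid
# test URL from this file):
from urllib.parse import parse_qs, urlparse

_url = 'https://weibo.com/u/7610808848?tabtype=newVideo&layerid=4967273022359838'
_params = {k: v[-1] for k, v in parse_qs(urlparse(_url).query).items()}
assert _params == {'tabtype': 'newVideo', 'layerid': '4967273022359838'}
# => with --no-playlist, only https://weibo.com/7610808848/4967273022359838 is extracted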
first_page = self._fetch_page(uid)
uploader = traverse_obj(first_page, ('list', ..., 'user', 'screen_name', {str}), get_all=False)
metainfo = {

View File

@@ -105,7 +105,6 @@ INNERTUBE_CLIENTS = {
'INNERTUBE_CONTEXT_CLIENT_NAME': 1,
'SUPPORTS_COOKIES': True,
**WEB_PO_TOKEN_POLICIES,
'PLAYER_PARAMS': '8AEB2AMB',
},
# Safari UA returns pre-merged video+audio 144p/240p/360p/720p/1080p HLS formats
'web_safari': {
@@ -119,7 +118,6 @@ INNERTUBE_CLIENTS = {
'INNERTUBE_CONTEXT_CLIENT_NAME': 1,
'SUPPORTS_COOKIES': True,
**WEB_PO_TOKEN_POLICIES,
'PLAYER_PARAMS': '8AEB2AMB',
},
'web_embedded': {
'INNERTUBE_CONTEXT': {
@@ -282,7 +280,6 @@ INNERTUBE_CLIENTS = {
'userAgent': 'Mozilla/5.0 (iPad; CPU OS 16_7_10 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1,gzip(gfe)',
},
},
'PLAYER_PARAMS': '8AEB2AMB',
'INNERTUBE_CONTEXT_CLIENT_NAME': 2,
'GVS_PO_TOKEN_POLICY': {
StreamingProtocol.HTTPS: GvsPoTokenPolicy(
@@ -314,7 +311,6 @@ INNERTUBE_CLIENTS = {
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 7,
'SUPPORTS_COOKIES': True,
'PLAYER_PARAMS': '8AEB2AMB',
},
'tv_simply': {
'INNERTUBE_CONTEXT': {

View File

@@ -566,6 +566,7 @@ class YoutubeTabBaseInfoExtractor(YoutubeBaseInfoExtractor):
'gridContinuation': (self._grid_entries, None),
'itemSectionContinuation': (self._post_thread_continuation_entries, None),
'sectionListContinuation': (extract_entries, None), # for feeds
'lockupViewModel': (self._grid_entries, 'items'), # for playlists tab
}
continuation_items = traverse_obj(response, (
@@ -1026,7 +1027,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'info_dict': {
'id': 'UCqj7Cz7revf5maW9g5pgNcg',
'title': 'Igor Kleiner - Playlists',
'description': 'md5:15d7dd9e333cb987907fcb0d604b233a',
'description': r're:(?s)Добро пожаловать на мой канал! Здесь вы найдете видео .{504}/a1/50b/10a$',
'uploader': 'Igor Kleiner ',
'uploader_id': '@IgorDataScience',
'uploader_url': 'https://www.youtube.com/@IgorDataScience',
@@ -1043,7 +1044,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'info_dict': {
'id': 'UCqj7Cz7revf5maW9g5pgNcg',
'title': 'Igor Kleiner - Playlists',
'description': 'md5:15d7dd9e333cb987907fcb0d604b233a',
'description': r're:(?s)Добро пожаловать на мой канал! Здесь вы найдете видео .{504}/a1/50b/10a$',
'uploader': 'Igor Kleiner ',
'uploader_id': '@IgorDataScience',
'uploader_url': 'https://www.youtube.com/@IgorDataScience',
@@ -1093,7 +1094,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'url': 'https://www.youtube.com/c/ChristophLaimer/playlists',
'only_matching': True,
}, {
# TODO: fix availability extraction
# TODO: fix availability and view_count extraction
'note': 'basic, single video playlist',
'url': 'https://www.youtube.com/playlist?list=PLt5yu3-wZAlSLRHmI1qNm0wjyVNWw1pCU',
'info_dict': {
@@ -1215,6 +1216,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'uploader': 'lex will',
},
'playlist_mincount': 18,
'skip': 'This Community isn\'t available',
}, {
# TODO: fix channel_is_verified extraction
'note': 'Search tab',
@@ -1399,7 +1401,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
}, {
'url': 'https://www.youtube.com/channel/UCoMdktPbSTixAyNGwb-UYkQ/live',
'info_dict': {
'id': 'YDvsBbKfLPA', # This will keep changing
'id': 'VFGoUmo74wE', # This will keep changing
'ext': 'mp4',
'title': str,
'upload_date': r're:\d{8}',
@@ -1578,6 +1580,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'playlist_count': 50,
'expected_warnings': ['YouTube Music is not directly supported'],
}, {
# TODO: fix test suite, 208163447408c78673b08c172beafe5c310fb167 broke this test
'note': 'unlisted single video playlist',
'url': 'https://www.youtube.com/playlist?list=PLt5yu3-wZAlQLfIN0MMgp0wVV6MP3bM4_',
'info_dict': {
@@ -1597,19 +1600,19 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
},
'playlist': [{
'info_dict': {
'title': 'youtube-dl test video "\'/\\ä↭𝕐',
'id': 'BaW_jenozKc',
'title': 'Big Buck Bunny 60fps 4K - Official Blender Foundation Short Film',
'id': 'aqz-KE-bpKQ',
'_type': 'url',
'ie_key': 'Youtube',
'duration': 10,
'channel_id': 'UCLqxVugv74EIW3VWh2NOa3Q',
'channel_url': 'https://www.youtube.com/channel/UCLqxVugv74EIW3VWh2NOa3Q',
'duration': 635,
'channel_id': 'UCSMOQeBJ2RAnuFungnQOxLg',
'channel_url': 'https://www.youtube.com/channel/UCSMOQeBJ2RAnuFungnQOxLg',
'view_count': int,
'url': 'https://www.youtube.com/watch?v=BaW_jenozKc',
'channel': 'Philipp Hagemeister',
'uploader_id': '@PhilippHagemeister',
'uploader_url': 'https://www.youtube.com/@PhilippHagemeister',
'uploader': 'Philipp Hagemeister',
'url': 'https://www.youtube.com/watch?v=aqz-KE-bpKQ',
'channel': 'Blender',
'uploader_id': '@BlenderOfficial',
'uploader_url': 'https://www.youtube.com/@BlenderOfficial',
'uploader': 'Blender',
},
}],
'playlist_count': 1,
@@ -1675,7 +1678,6 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'url': 'https://www.youtube.com/channel/UCwVVpHQ2Cs9iGJfpdFngePQ',
'only_matching': True,
}, {
# TODO: fix metadata extraction
'note': 'collaborative playlist (uploader name in the form "by <uploader> and x other(s)")',
'url': 'https://www.youtube.com/playlist?list=PLx-_-Kk4c89oOHEDQAojOXzEzemXxoqx6',
'info_dict': {
@@ -1694,6 +1696,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'uploader': 'pukkandan',
},
'playlist_mincount': 2,
'skip': 'https://github.com/yt-dlp/yt-dlp/issues/13690',
}, {
'note': 'translated tab name',
'url': 'https://www.youtube.com/channel/UCiu-3thuViMebBjw_5nWYrA/playlists',
@@ -1801,7 +1804,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'title': 'Not Just Bikes - Shorts',
'tags': 'count:10',
'channel_url': 'https://www.youtube.com/channel/UC0intLFzLaudFG-xAvUEO-A',
'description': 'md5:1d9fc1bad7f13a487299d1fe1712e031',
'description': 'md5:295758591d0d43d8594277be54584da7',
'channel_follower_count': int,
'channel_id': 'UC0intLFzLaudFG-xAvUEO-A',
'channel': 'Not Just Bikes',
@@ -1822,7 +1825,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'channel_url': 'https://www.youtube.com/channel/UC3eYAvjCVwNHgkaGbXX3sig',
'channel': '中村悠一',
'channel_follower_count': int,
'description': 'md5:e8fd705073a594f27d6d6d020da560dc',
'description': 'md5:76b312b48a26c3b0e4d90e2dfc1b417d',
'uploader_url': 'https://www.youtube.com/@Yuichi-Nakamura',
'uploader_id': '@Yuichi-Nakamura',
'uploader': '中村悠一',
@@ -1865,12 +1868,13 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'tags': [],
},
'playlist_mincount': 30,
'skip': 'The channel/playlist does not exist and the URL redirected to youtube.com home page',
}, {
# Trending Gaming Tab. tab id is empty
'url': 'https://www.youtube.com/feed/trending?bp=4gIcGhpnYW1pbmdfY29ycHVzX21vc3RfcG9wdWxhcg%3D%3D',
'info_dict': {
'id': 'trending',
'title': 'trending - Gaming',
'title': 'trending',
'tags': [],
},
'playlist_mincount': 30,
@@ -2018,7 +2022,7 @@ class YoutubeTabIE(YoutubeTabBaseInfoExtractor):
'uploader': 'A Himitsu',
'channel_id': 'UCgFwu-j5-xNJml2FtTrrB3A',
'tags': 'count:12',
'description': 'I make music',
'description': 'Music producer, sometimes.',
'channel_url': 'https://www.youtube.com/channel/UCgFwu-j5-xNJml2FtTrrB3A',
'channel_follower_count': int,
'channel_is_verified': True,
@@ -2304,19 +2308,20 @@ class YoutubePlaylistIE(YoutubeBaseInfoExtractor):
)
IE_NAME = 'youtube:playlist'
_TESTS = [{
# TODO: fix availability extraction
'note': 'issue #673',
'url': 'PLBB231211A4F62143',
'info_dict': {
'title': '[OLD]Team Fortress 2 (Class-based LP)',
'title': 'Team Fortress 2 [2010 Version]',
'id': 'PLBB231211A4F62143',
'uploader': 'Wickman',
'uploader_id': '@WickmanVT',
'uploader': 'Wickman Wish',
'uploader_id': '@WickmanWish',
'description': 'md5:8fa6f52abb47a9552002fa3ddfc57fc2',
'view_count': int,
'uploader_url': 'https://www.youtube.com/@WickmanVT',
'uploader_url': 'https://www.youtube.com/@WickmanWish',
'modified_date': r're:\d{8}',
'channel_id': 'UCKSpbfbl5kRQpTdL7kMc-1Q',
'channel': 'Wickman',
'channel': 'Wickman Wish',
'tags': [],
'channel_url': 'https://www.youtube.com/channel/UCKSpbfbl5kRQpTdL7kMc-1Q',
'availability': 'public',
@@ -2331,6 +2336,7 @@ class YoutubePlaylistIE(YoutubeBaseInfoExtractor):
'playlist_count': 2,
'skip': 'This playlist is private',
}, {
# TODO: fix availability extraction
'note': 'embedded',
'url': 'https://www.youtube.com/embed/videoseries?list=PL6IaIsEjSbf96XFRuNccS_RuEXwNdsoEu',
'playlist_count': 4,
@@ -2351,6 +2357,7 @@ class YoutubePlaylistIE(YoutubeBaseInfoExtractor):
},
'expected_warnings': [r'[Uu]navailable videos? (is|are|will be) hidden', 'Retrying', 'Giving up'],
}, {
# TODO: fix availability extraction
'url': 'http://www.youtube.com/embed/_xDOZElKyNU?list=PLsyOSbh5bs16vubvKePAQ1x3PhKavfBIl',
'playlist_mincount': 455,
'info_dict': {

View File

@@ -1817,6 +1817,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
_PLAYER_JS_VARIANT_MAP = {
'main': 'player_ias.vflset/en_US/base.js',
'tce': 'player_ias_tce.vflset/en_US/base.js',
'es5': 'player_es5.vflset/en_US/base.js',
'es6': 'player_es6.vflset/en_US/base.js',
'tv': 'tv-player-ias.vflset/tv-player-ias.js',
'tv_es6': 'tv-player-es6.vflset/tv-player-es6.js',
'phone': 'player-plasma-ias-phone-en_US.vflset/base.js',
@@ -2018,7 +2020,7 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if not player_url:
return
requested_js_variant = self._configuration_arg('player_js_variant', [''])[0] or 'actual'
requested_js_variant = self._configuration_arg('player_js_variant', [''])[0] or 'main'
if requested_js_variant in self._PLAYER_JS_VARIANT_MAP:
player_id = self._extract_player_info(player_url)
original_url = player_url
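
A rough illustration of the lookup affected by the default change above (assuming only what this hunk shows; the surrounding method goes on to rewrite the full player URL from the extracted player ID, which is not visible here):

from yt_dlp.extractor.youtube import YoutubeIE

# When the 'player_js_variant' extractor-arg is unset, the default is now 'main',
# which resolves through the variant map above to the standard web player JS:
assert YoutubeIE._PLAYER_JS_VARIANT_MAP['main'] == 'player_ias.vflset/en_US/base.js'
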
@@ -3848,11 +3850,33 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
player_responses, (..., 'microformat', 'playerMicroformatRenderer'),
expected_type=dict)
# Fallbacks in case player responses are missing metadata
initial_sdcr = traverse_obj(initial_data, (
'engagementPanels', ..., 'engagementPanelSectionListRenderer',
'content', 'structuredDescriptionContentRenderer', {dict}, any))
initial_description = traverse_obj(initial_sdcr, (
'items', ..., 'expandableVideoDescriptionBodyRenderer',
'attributedDescriptionBodyText', 'content', {str}, any))
# videoDescriptionHeaderRenderer also has publishDate/channel/handle/ucid, but not needed
initial_vdhr = traverse_obj(initial_sdcr, (
'items', ..., 'videoDescriptionHeaderRenderer', {dict}, any)) or {}
initial_video_details_renderer = traverse_obj(initial_data, (
'playerOverlays', 'playerOverlayRenderer', 'videoDetails',
'playerOverlayVideoDetailsRenderer', {dict})) or {}
initial_title = (
self._get_text(initial_vdhr, 'title')
or self._get_text(initial_video_details_renderer, 'title'))
translated_title = self._get_text(microformats, (..., 'title'))
video_title = ((self._preferred_lang and translated_title)
or get_first(video_details, 'title') # primary
or translated_title
or search_meta(['og:title', 'twitter:title', 'title']))
if not video_title and initial_title:
self.report_warning(
'No title found in player responses; falling back to title from initial data. '
'Other metadata may also be missing')
video_title = initial_title
translated_description = self._get_text(microformats, (..., 'description'))
original_description = get_first(video_details, 'shortDescription')
video_description = (
@@ -3860,6 +3884,8 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
# If original description is blank, it will be an empty string.
# Do not prefer translated description in this case.
or original_description if original_description is not None else translated_description)
if video_description is None:
video_description = initial_description
multifeed_metadata_list = get_first(
player_responses,
@@ -4018,6 +4044,12 @@ class YoutubeIE(YoutubeBaseInfoExtractor):
if needs_live_processing:
self._prepare_live_from_start_formats(
formats, video_id, live_start_time, url, webpage_url, smuggled_data, live_status == 'is_live')
elif live_status != 'is_live':
# Handle pre-playback waiting period
playback_wait = int_or_none(self._configuration_arg('playback_wait', [None])[0], default=6)
available_at = int(time.time()) + playback_wait
for fmt in formats:
fmt['available_at'] = available_at
formats.extend(self._extract_storyboard(player_responses, duration))
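
A minimal sketch of how the new available_at timestamp could be honored downstream (the wait_for_format helper below is hypothetical and only illustrates the intent of the field; the downloader-side handling is not shown in this hunk):

import time

def wait_for_format(fmt):
    """Hypothetical helper: sleep until the format's 'available_at' timestamp has passed"""
    delay = (fmt.get('available_at') or 0) - time.time()
    if delay > 0:
        time.sleep(delay)

# With the default playback_wait of 6 seconds, every format produced above carries
# available_at = extraction time + 6, so a caller would wait at most roughly 6 seconds.
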

View File

@@ -1,4 +1,5 @@
import os
import sys
from .common import PostProcessor
from ..utils import (
@@ -54,8 +55,8 @@ class XAttrMetadataPP(PostProcessor):
if infoname == 'upload_date':
value = hyphenate_date(value)
elif xattrname == 'com.apple.metadata:kMDItemWhereFroms':
# NTFS ADS doesn't support colons in names
if os.name == 'nt':
# Colon in xattr name throws errors on Windows/NTFS and Linux
if sys.platform != 'darwin':
continue
value = self.APPLE_PLIST_TEMPLATE % value
write_xattr(info['filepath'], xattrname, value.encode())
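
For illustration only (not code from the patch), the effect of the widened guard above:

import sys

# Behavior sketch:
#   old guard: skip when os.name == 'nt'          -> Linux still attempted the write and errored
#   new guard: skip when sys.platform != 'darwin' -> only macOS writes the colon-named attribute
should_write_apple_wherefrom = sys.platform == 'darwin'
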

View File

@@ -58,26 +58,30 @@ def _get_variant_and_executable_path():
"""@returns (variant, executable_path)"""
if getattr(sys, 'frozen', False):
path = sys.executable
# py2exe is unsupported but we should still correctly identify it for debugging purposes
if not hasattr(sys, '_MEIPASS'):
return 'py2exe', path
elif sys._MEIPASS == os.path.dirname(path):
if sys._MEIPASS == os.path.dirname(path):
return f'{sys.platform}_dir', path
elif sys.platform == 'darwin':
if sys.platform == 'darwin':
# darwin_legacy_exe is no longer supported, but still identify it to block updates
machine = '_legacy' if version_tuple(platform.mac_ver()[0]) < (10, 15) else ''
else:
machine = f'_{platform.machine().lower()}'
is_64bits = sys.maxsize > 2**32
# Ref: https://en.wikipedia.org/wiki/Uname#Examples
if machine[1:] in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
machine = '_x86' if not is_64bits else ''
# platform.machine() on 32-bit raspbian OS may return 'aarch64', so check "64-bitness"
# See: https://github.com/yt-dlp/yt-dlp/issues/11813
elif machine[1:] == 'aarch64' and not is_64bits:
machine = '_armv7l'
# sys.executable returns a /tmp/ path for staticx builds (linux_static)
# Ref: https://staticx.readthedocs.io/en/latest/usage.html#run-time-information
if static_exe_path := os.getenv('STATICX_PROG_PATH'):
path = static_exe_path
return f'darwin{machine}_exe', path
machine = f'_{platform.machine().lower()}'
is_64bits = sys.maxsize > 2**32
# Ref: https://en.wikipedia.org/wiki/Uname#Examples
if machine[1:] in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
machine = '_x86' if not is_64bits else ''
# platform.machine() on 32-bit raspbian OS may return 'aarch64', so check "64-bitness"
# See: https://github.com/yt-dlp/yt-dlp/issues/11813
elif machine[1:] == 'aarch64' and not is_64bits:
machine = '_armv7l'
# sys.executable returns a /tmp/ path for staticx builds (linux_static)
# Ref: https://staticx.readthedocs.io/en/latest/usage.html#run-time-information
if static_exe_path := os.getenv('STATICX_PROG_PATH'):
path = static_exe_path
return f'{remove_end(sys.platform, "32")}{machine}_exe', path
path = os.path.dirname(__file__)
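
A hedged re-derivation of the executable variant names the rewritten branch produces (simplified: it skips the darwin and staticx special cases and uses str.removesuffix in place of the remove_end util):

def example_variant(plat, machine, is_64bits):
    """Simplified illustration of the f'{platform}{machine}_exe' naming above"""
    machine = machine.lower()
    if machine in ('x86', 'x86_64', 'amd64', 'i386', 'i686'):
        suffix = '' if is_64bits else '_x86'
    elif machine == 'aarch64' and not is_64bits:
        suffix = '_armv7l'  # 32-bit userland on a 64-bit kernel (e.g. Raspbian)
    else:
        suffix = f'_{machine}'
    return f'{plat.removesuffix("32")}{suffix}_exe'

assert example_variant('win32', 'ARM64', True) == 'win_arm64_exe'   # new Windows ARM64 build
assert example_variant('linux', 'x86_64', True) == 'linux_exe'
assert example_variant('linux', 'aarch64', False) == 'linux_armv7l_exe'
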
@@ -110,8 +114,8 @@ _FILE_SUFFIXES = {
'zip': '',
'win_exe': '.exe',
'win_x86_exe': '_x86.exe',
'win_arm64_exe': '_arm64.exe',
'darwin_exe': '_macos',
'darwin_legacy_exe': '_macos_legacy',
'linux_exe': '_linux',
'linux_aarch64_exe': '_linux_aarch64',
'linux_armv7l_exe': '_linux_armv7l',
@@ -147,12 +151,6 @@ def _get_system_deprecation():
STOP_MSG = 'You may stop receiving updates on this version at any time!'
variant = detect_variant()
# Temporary until macos_legacy executable builds are discontinued
if variant == 'darwin_legacy_exe':
return EXE_MSG_TMPL.format(
f'{variant} (the PyInstaller-bundled executable for macOS versions older than 10.15)',
'issues/13856', STOP_MSG)
# Temporary until linux_armv7l executable builds are discontinued
if variant == 'linux_armv7l_exe':
return EXE_MSG_TMPL.format(

View File

@@ -1,4 +1,8 @@
"""Deprecated - New code should avoid these"""
import base64
import hashlib
import hmac
import json
import warnings
from ..compat.compat_utils import passthrough_module
@@ -28,4 +32,18 @@ def intlist_to_bytes(xs):
return struct.pack('%dB' % len(xs), *xs)
def jwt_encode_hs256(payload_data, key, headers={}):
header_data = {
'alg': 'HS256',
'typ': 'JWT',
}
if headers:
header_data.update(headers)
header_b64 = base64.b64encode(json.dumps(header_data).encode())
payload_b64 = base64.b64encode(json.dumps(payload_data).encode())
h = hmac.new(key.encode(), header_b64 + b'.' + payload_b64, hashlib.sha256)
signature_b64 = base64.b64encode(h.digest())
return header_b64 + b'.' + payload_b64 + b'.' + signature_b64
compiled_regex_type = type(re.compile(''))
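
For contrast, a brief note on the legacy shim retained above (illustrative): it keeps the old behavior of standard, padded, non-urlsafe base64 and a bytes return value, which is why new code should prefer the reworked jwt_encode:

legacy_token = jwt_encode_hs256({'sub': 'example'}, 'secret')  # -> bytes; may contain '+', '/' and '=' padding
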

View File

@@ -4739,22 +4739,36 @@ def time_seconds(**kwargs):
return time.time() + dt.timedelta(**kwargs).total_seconds()
# create a JSON Web Signature (jws) with HS256 algorithm
# the resulting format is in JWS Compact Serialization
# implemented following JWT https://www.rfc-editor.org/rfc/rfc7519.html
# implemented following JWS https://www.rfc-editor.org/rfc/rfc7515.html
def jwt_encode_hs256(payload_data, key, headers={}):
def jwt_encode(payload_data, key, *, alg='HS256', headers=None):
assert alg in ('HS256',), f'Unsupported algorithm "{alg}"'
def jwt_json_bytes(obj):
return json.dumps(obj, separators=(',', ':')).encode()
def jwt_b64encode(bytestring):
return base64.urlsafe_b64encode(bytestring).rstrip(b'=')
header_data = {
'alg': 'HS256',
'alg': alg,
'typ': 'JWT',
}
if headers:
header_data.update(headers)
header_b64 = base64.b64encode(json.dumps(header_data).encode())
payload_b64 = base64.b64encode(json.dumps(payload_data).encode())
# Allow re-ordering of keys if both 'alg' and 'typ' are present
if 'alg' in headers and 'typ' in headers:
header_data = headers
else:
header_data.update(headers)
header_b64 = jwt_b64encode(jwt_json_bytes(header_data))
payload_b64 = jwt_b64encode(jwt_json_bytes(payload_data))
# HS256 is the only algorithm currently supported
h = hmac.new(key.encode(), header_b64 + b'.' + payload_b64, hashlib.sha256)
signature_b64 = base64.b64encode(h.digest())
return header_b64 + b'.' + payload_b64 + b'.' + signature_b64
signature_b64 = jwt_b64encode(h.digest())
return (header_b64 + b'.' + payload_b64 + b'.' + signature_b64).decode()
# can be extended in future to verify the signature and parse header and return the algorithm used if it's not HS256
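
A short usage sketch for the reworked helper (payload values are illustrative, and the import is assumed to be available via yt_dlp.utils): the token is now str rather than bytes, its segments use urlsafe base64 with the '=' padding stripped, and the JSON is serialized without whitespace:

from yt_dlp.utils import jwt_encode  # assumes the function above is re-exported here

token = jwt_encode({'sub': 'yt-dlp', 'exp': 1755648000}, 'secret-key')
header_b64, payload_b64, signature_b64 = token.split('.')
# header_b64 decodes to {"alg":"HS256","typ":"JWT"}; passing headers= with both 'alg'
# and 'typ' replaces the header wholesale, letting callers control the key order.
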

View File

@@ -1,8 +1,8 @@
# Autogenerated by devscripts/update-version.py
__version__ = '2025.08.11'
__version__ = '2025.08.20'
RELEASE_GIT_HEAD = '5e4ceb35cf997af0dbf100e1de37f4e2bcbaa0b7'
RELEASE_GIT_HEAD = 'c2fc4f3e7f6d757250183b177130c64beee50520'
VARIANT = None
@@ -12,4 +12,4 @@ CHANNEL = 'stable'
ORIGIN = 'yt-dlp/yt-dlp'
_pkg_version = '2025.08.11'
_pkg_version = '2025.08.20'