Compare commits

..

No commits in common. 'master' and '2023.06.21' have entirely different histories.

@ -18,7 +18,7 @@ body:
options: options:
- label: I'm reporting that yt-dlp is broken on a **supported** site - label: I'm reporting that yt-dlp is broken on a **supported** site
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **2023.06.21** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true
@ -61,18 +61,19 @@ body:
description: | description: |
It should start like this: It should start like this:
placeholder: | placeholder: |
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc'] [debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8 [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe) [debug] yt-dlp version 2023.06.21 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0 [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1 [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {} [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Loaded 1893 extractors Latest version: 2023.06.21, Current version: 2023.06.21
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest yt-dlp is up to date (2023.06.21)
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines> <more lines>
render: shell render: shell
validations: validations:

@ -18,7 +18,7 @@ body:
options: options:
- label: I'm reporting a new site support request - label: I'm reporting a new site support request
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **2023.06.21** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true
@ -73,18 +73,19 @@ body:
description: | description: |
It should start like this: It should start like this:
placeholder: | placeholder: |
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc'] [debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8 [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe) [debug] yt-dlp version 2023.06.21 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0 [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1 [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {} [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Loaded 1893 extractors Latest version: 2023.06.21, Current version: 2023.06.21
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest yt-dlp is up to date (2023.06.21)
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines> <more lines>
render: shell render: shell
validations: validations:

@ -18,7 +18,7 @@ body:
options: options:
- label: I'm requesting a site-specific feature - label: I'm requesting a site-specific feature
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **2023.06.21** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true
@ -69,18 +69,19 @@ body:
description: | description: |
It should start like this: It should start like this:
placeholder: | placeholder: |
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc'] [debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8 [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe) [debug] yt-dlp version 2023.06.21 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0 [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1 [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {} [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Loaded 1893 extractors Latest version: 2023.06.21, Current version: 2023.06.21
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest yt-dlp is up to date (2023.06.21)
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines> <more lines>
render: shell render: shell
validations: validations:

@ -18,7 +18,7 @@ body:
options: options:
- label: I'm reporting a bug unrelated to a specific site - label: I'm reporting a bug unrelated to a specific site
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **2023.06.21** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true
@ -54,18 +54,19 @@ body:
description: | description: |
It should start like this: It should start like this:
placeholder: | placeholder: |
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc'] [debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8 [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe) [debug] yt-dlp version 2023.06.21 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0 [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1 [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {} [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Loaded 1893 extractors Latest version: 2023.06.21, Current version: 2023.06.21
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest yt-dlp is up to date (2023.06.21)
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines> <more lines>
render: shell render: shell
validations: validations:

@ -20,7 +20,7 @@ body:
required: true required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **2023.06.21** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
required: true required: true
@ -50,17 +50,18 @@ body:
description: | description: |
It should start like this: It should start like this:
placeholder: | placeholder: |
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc'] [debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8 [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe) [debug] yt-dlp version 2023.06.21 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0 [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1 [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {} [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Loaded 1893 extractors Latest version: 2023.06.21, Current version: 2023.06.21
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest yt-dlp is up to date (2023.06.21)
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines> <more lines>
render: shell render: shell

@ -26,7 +26,7 @@ body:
required: true required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **2023.06.21** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
required: true required: true
@ -56,17 +56,18 @@ body:
description: | description: |
It should start like this: It should start like this:
placeholder: | placeholder: |
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc'] [debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8 [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe) [debug] yt-dlp version 2023.06.21 [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0 [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1 [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {} [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Loaded 1893 extractors Latest version: 2023.06.21, Current version: 2023.06.21
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest yt-dlp is up to date (2023.06.21)
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines> <more lines>
render: shell render: shell

@ -12,7 +12,7 @@ body:
options: options:
- label: I'm reporting that yt-dlp is broken on a **supported** site - label: I'm reporting that yt-dlp is broken on a **supported** site
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true

@ -12,7 +12,7 @@ body:
options: options:
- label: I'm reporting a new site support request - label: I'm reporting a new site support request
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true

@ -12,7 +12,7 @@ body:
options: options:
- label: I'm requesting a site-specific feature - label: I'm requesting a site-specific feature
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true

@ -12,7 +12,7 @@ body:
options: options:
- label: I'm reporting a bug unrelated to a specific site - label: I'm reporting a bug unrelated to a specific site
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've checked that all provided URLs are playable in a browser with the same IP and same login details - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
required: true required: true

@ -14,7 +14,7 @@ body:
required: true required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
required: true required: true

@ -20,7 +20,7 @@ body:
required: true required: true
- label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
required: true required: true
- label: I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - label: I've verified that I'm running yt-dlp version **%(version)s** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
required: true required: true
- label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates - label: I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
required: true required: true

@ -28,6 +28,7 @@ Fixes #
### Before submitting a *pull request* make sure you have: ### Before submitting a *pull request* make sure you have:
- [ ] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions) - [ ] At least skimmed through [contributing guidelines](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions) including [yt-dlp coding conventions](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#yt-dlp-coding-conventions)
- [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests - [ ] [Searched](https://github.com/yt-dlp/yt-dlp/search?q=is%3Apr&type=Issues) the bugtracker for similar pull requests
- [ ] Checked the code with [flake8](https://pypi.python.org/pypi/flake8) and [ran relevant tests](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#developer-instructions)
### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply: ### In order to be accepted and merged into yt-dlp each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check all of the following options that apply:
- [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/) - [ ] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
@ -39,4 +40,10 @@ Fixes #
- [ ] Core bug fix/improvement - [ ] Core bug fix/improvement
- [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes)) - [ ] New feature (It is strongly [recommended to open an issue first](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#adding-new-feature-or-making-overarching-changes))
<!-- Do NOT edit/remove anything below this! -->
</details><details><summary>Copilot Summary</summary>
copilot:all
</details> </details>

10
.github/banner.svg vendored

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 15 KiB

After

Width:  |  Height:  |  Size: 24 KiB

@ -12,9 +12,6 @@ on:
unix: unix:
default: true default: true
type: boolean type: boolean
linux_static:
default: true
type: boolean
linux_arm: linux_arm:
default: true default: true
type: boolean type: boolean
@ -30,10 +27,9 @@ on:
windows32: windows32:
default: true default: true
type: boolean type: boolean
origin: meta_files:
required: false default: true
default: '' type: boolean
type: string
secrets: secrets:
GPG_SIGNING_KEY: GPG_SIGNING_KEY:
required: false required: false
@ -41,22 +37,16 @@ on:
workflow_dispatch: workflow_dispatch:
inputs: inputs:
version: version:
description: | description: Version tag (YYYY.MM.DD[.REV])
VERSION: yyyy.mm.dd[.rev] or rev
required: true required: true
type: string type: string
channel: channel:
description: | description: Update channel (stable/nightly/...)
SOURCE of this build's updates: stable/nightly/master/<repo>
required: true required: true
default: stable default: stable
type: string type: string
unix: unix:
description: yt-dlp, yt-dlp.tar.gz description: yt-dlp, yt-dlp.tar.gz, yt-dlp_linux, yt-dlp_linux.zip
default: true
type: boolean
linux_static:
description: yt-dlp_linux
default: true default: true
type: boolean type: boolean
linux_arm: linux_arm:
@ -79,103 +69,87 @@ on:
description: yt-dlp_x86.exe description: yt-dlp_x86.exe
default: true default: true
type: boolean type: boolean
origin: meta_files:
description: Origin description: SHA2-256SUMS, SHA2-512SUMS, _update_spec
required: false default: true
default: 'current repo' type: boolean
type: choice
options:
- 'current repo'
permissions: permissions:
contents: read contents: read
jobs: jobs:
process:
runs-on: ubuntu-latest
outputs:
origin: ${{ steps.process_origin.outputs.origin }}
steps:
- name: Process origin
id: process_origin
run: |
echo "origin=${{ inputs.origin == 'current repo' && github.repository || inputs.origin }}" | tee "$GITHUB_OUTPUT"
unix: unix:
needs: process
if: inputs.unix if: inputs.unix
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
with: - uses: actions/setup-python@v4
fetch-depth: 0 # Needed for changelog
- uses: actions/setup-python@v5
with: with:
python-version: "3.10" python-version: "3.10"
- uses: conda-incubator/setup-miniconda@v2
with:
miniforge-variant: Mambaforge
use-mamba: true
channels: conda-forge
auto-update-conda: true
activate-environment: ""
auto-activate-base: false
- name: Install Requirements - name: Install Requirements
run: | run: |
sudo apt -y install zip pandoc man sed sudo apt-get -y install zip pandoc man sed
python -m pip install -U pip setuptools wheel
python -m pip install -U Pyinstaller -r requirements.txt
reqs=$(mktemp)
cat > $reqs << EOF
python=3.10.*
pyinstaller
cffi
brotli-python
EOF
sed '/^brotli.*/d' requirements.txt >> $reqs
mamba create -n build --file $reqs
- name: Prepare - name: Prepare
run: | run: |
python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}" python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python devscripts/update_changelog.py -vv
python devscripts/make_lazy_extractors.py python devscripts/make_lazy_extractors.py
- name: Build Unix platform-independent binary - name: Build Unix platform-independent binary
run: | run: |
make all tar make all tar
- name: Verify --update-to - name: Build Unix standalone binary
if: vars.UPDATE_TO_VERIFICATION shell: bash -l {0}
run: | run: |
chmod +x ./yt-dlp unset LD_LIBRARY_PATH # Harmful; set by setup-python
cp ./yt-dlp ./yt-dlp_downgraded conda activate build
version="$(./yt-dlp --version)" python pyinst.py --onedir
./yt-dlp_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04 (cd ./dist/yt-dlp_linux && zip -r ../yt-dlp_linux.zip .)
downgraded_version="$(./yt-dlp_downgraded --version)" python pyinst.py
[[ "$version" != "$downgraded_version" ]] mv ./dist/yt-dlp_linux ./yt-dlp_linux
- name: Upload artifacts mv ./dist/yt-dlp_linux.zip ./yt-dlp_linux.zip
uses: actions/upload-artifact@v4
with:
name: build-bin-${{ github.job }}
path: |
yt-dlp
yt-dlp.tar.gz
compression-level: 0
linux_static:
needs: process
if: inputs.linux_static
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build static executable
env:
channel: ${{ inputs.channel }}
origin: ${{ needs.process.outputs.origin }}
version: ${{ inputs.version }}
run: |
mkdir ~/build
cd bundle/docker
docker compose up --build static
sudo chown "${USER}:docker" ~/build/yt-dlp_linux
- name: Verify --update-to - name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION if: vars.UPDATE_TO_VERIFICATION
run: | run: |
chmod +x ~/build/yt-dlp_linux binaries=("yt-dlp" "yt-dlp_linux")
cp ~/build/yt-dlp_linux ~/build/yt-dlp_linux_downgraded for binary in "${binaries[@]}"; do
version="$(~/build/yt-dlp_linux --version)" chmod +x ./${binary}
~/build/yt-dlp_linux_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04 cp ./${binary} ./${binary}_downgraded
downgraded_version="$(~/build/yt-dlp_linux_downgraded --version)" version="$(./${binary} --version)"
[[ "$version" != "$downgraded_version" ]] ./${binary}_downgraded -v --update-to yt-dlp/yt-dlp@2023.03.04
downgraded_version="$(./${binary}_downgraded --version)"
[[ "$version" != "$downgraded_version" ]]
done
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: build-bin-${{ github.job }}
path: | path: |
~/build/yt-dlp_linux yt-dlp
compression-level: 0 yt-dlp.tar.gz
yt-dlp_linux
yt-dlp_linux.zip
linux_arm: linux_arm:
needs: process
if: inputs.linux_arm if: inputs.linux_arm
permissions: permissions:
contents: read contents: read
@ -188,7 +162,7 @@ jobs:
- aarch64 - aarch64
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
with: with:
path: ./repo path: ./repo
- name: Virtualized Install, Prepare & Build - name: Virtualized Install, Prepare & Build
@ -203,18 +177,17 @@ jobs:
dockerRunArgs: --volume "${PWD}/repo:/repo" dockerRunArgs: --volume "${PWD}/repo:/repo"
install: | # Installing Python 3.10 from the Deadsnakes repo raises errors install: | # Installing Python 3.10 from the Deadsnakes repo raises errors
apt update apt update
apt -y install zlib1g-dev libffi-dev python3.8 python3.8-dev python3.8-distutils python3-pip apt -y install zlib1g-dev python3.8 python3.8-dev python3.8-distutils python3-pip
python3.8 -m pip install -U pip setuptools wheel python3.8 -m pip install -U pip setuptools wheel
# Cannot access any files from the repo directory at this stage # Cannot access requirements.txt from the repo directory at this stage
python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi secretstorage cffi python3.8 -m pip install -U Pyinstaller mutagen pycryptodomex websockets brotli certifi
run: | run: |
cd repo cd repo
python3.8 devscripts/install_deps.py -o --include build python3.8 -m pip install -U Pyinstaller -r requirements.txt # Cached version may be out of date
python3.8 devscripts/install_deps.py --include pyinstaller --include secretstorage # Cached version may be out of date python3.8 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python3.8 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}"
python3.8 devscripts/make_lazy_extractors.py python3.8 devscripts/make_lazy_extractors.py
python3.8 -m bundle.pyinstaller python3.8 pyinst.py
if ${{ vars.UPDATE_TO_VERIFICATION && 'true' || 'false' }}; then if ${{ vars.UPDATE_TO_VERIFICATION && 'true' || 'false' }}; then
arch="${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}" arch="${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}"
@ -227,84 +200,34 @@ jobs:
fi fi
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: build-bin-linux_${{ matrix.architecture }}
path: | # run-on-arch-action designates armv7l as armv7 path: | # run-on-arch-action designates armv7l as armv7
repo/dist/yt-dlp_linux_${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }} repo/dist/yt-dlp_linux_${{ (matrix.architecture == 'armv7' && 'armv7l') || matrix.architecture }}
compression-level: 0
macos: macos:
needs: process
if: inputs.macos if: inputs.macos
permissions: runs-on: macos-11
contents: read
actions: write # For cleaning up cache
runs-on: macos-12
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
# NB: Building universal2 does not work with python from actions/setup-python # NB: Building universal2 does not work with python from actions/setup-python
- name: Restore cached requirements
id: restore-cache
uses: actions/cache/restore@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MINS: 1
with:
path: |
~/yt-dlp-build-venv
key: cache-reqs-${{ github.job }}
- name: Install Requirements - name: Install Requirements
run: | run: |
brew install coreutils brew install coreutils
python3 -m venv ~/yt-dlp-build-venv python3 -m pip install -U --user pip setuptools wheel
source ~/yt-dlp-build-venv/bin/activate
python3 devscripts/install_deps.py -o --include build
python3 devscripts/install_deps.py --print --include pyinstaller > requirements.txt
# We need to ignore wheels otherwise we break universal2 builds # We need to ignore wheels otherwise we break universal2 builds
python3 -m pip install -U --no-binary :all: -r requirements.txt python3 -m pip install -U --user --no-binary :all: Pyinstaller -r requirements.txt
# We need to fuse our own universal2 wheels for curl_cffi
python3 -m pip install -U delocate
mkdir curl_cffi_whls curl_cffi_universal2
python3 devscripts/install_deps.py --print -o --include curl-cffi > requirements.txt
for platform in "macosx_11_0_arm64" "macosx_11_0_x86_64"; do
python3 -m pip download \
--only-binary=:all: \
--platform "${platform}" \
-d curl_cffi_whls \
-r requirements.txt
done
( # Overwrite x86_64-only libs with fat/universal2 libs or else Pyinstaller will do the opposite
# See https://github.com/yt-dlp/yt-dlp/pull/10069
cd curl_cffi_whls
mkdir -p curl_cffi/.dylibs
python_libdir=$(python3 -c 'import sys; from pathlib import Path; print(Path(sys.path[1]).parent)')
for dylib in lib{ssl,crypto}.3.dylib; do
cp "${python_libdir}/${dylib}" "curl_cffi/.dylibs/${dylib}"
for wheel in curl_cffi*macos*x86_64.whl; do
zip "${wheel}" "curl_cffi/.dylibs/${dylib}"
done
done
)
python3 -m delocate.cmd.delocate_fuse curl_cffi_whls/curl_cffi*.whl -w curl_cffi_universal2
python3 -m delocate.cmd.delocate_fuse curl_cffi_whls/cffi*.whl -w curl_cffi_universal2
for wheel in curl_cffi_universal2/*cffi*.whl; do
mv -n -- "${wheel}" "${wheel/x86_64/universal2}"
done
python3 -m pip install --force-reinstall -U curl_cffi_universal2/*cffi*.whl
- name: Prepare - name: Prepare
run: | run: |
python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}" python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python3 devscripts/make_lazy_extractors.py python3 devscripts/make_lazy_extractors.py
- name: Build - name: Build
run: | run: |
source ~/yt-dlp-build-venv/bin/activate python3 pyinst.py --target-architecture universal2 --onedir
python3 -m bundle.pyinstaller --target-architecture universal2 --onedir
(cd ./dist/yt-dlp_macos && zip -r ../yt-dlp_macos.zip .) (cd ./dist/yt-dlp_macos && zip -r ../yt-dlp_macos.zip .)
python3 -m bundle.pyinstaller --target-architecture universal2 python3 pyinst.py --target-architecture universal2
- name: Verify --update-to - name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION if: vars.UPDATE_TO_VERIFICATION
@ -317,39 +240,18 @@ jobs:
[[ "$version" != "$downgraded_version" ]] [[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: build-bin-${{ github.job }}
path: | path: |
dist/yt-dlp_macos dist/yt-dlp_macos
dist/yt-dlp_macos.zip dist/yt-dlp_macos.zip
compression-level: 0
- name: Cleanup cache
if: steps.restore-cache.outputs.cache-hit == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
cache_key: cache-reqs-${{ github.job }}
repository: ${{ github.repository }}
branch: ${{ github.ref }}
run: |
gh extension install actions/gh-actions-cache
gh actions-cache delete "${cache_key}" -R "${repository}" -B "${branch}" --confirm
- name: Cache requirements
uses: actions/cache/save@v4
with:
path: |
~/yt-dlp-build-venv
key: cache-reqs-${{ github.job }}
macos_legacy: macos_legacy:
needs: process
if: inputs.macos_legacy if: inputs.macos_legacy
runs-on: macos-12 runs-on: macos-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- name: Install Python - name: Install Python
# We need the official Python, because the GA ones only support newer macOS versions # We need the official Python, because the GA ones only support newer macOS versions
env: env:
@ -359,22 +261,22 @@ jobs:
# Hack to get the latest patch version. Uncomment if needed # Hack to get the latest patch version. Uncomment if needed
#brew install python@3.10 #brew install python@3.10
#export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 ) #export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 )
curl "https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg" -o "python.pkg" curl https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg -o "python.pkg"
sudo installer -pkg python.pkg -target / sudo installer -pkg python.pkg -target /
python3 --version python3 --version
- name: Install Requirements - name: Install Requirements
run: | run: |
brew install coreutils brew install coreutils
python3 devscripts/install_deps.py --user -o --include build python3 -m pip install -U --user pip setuptools wheel
python3 devscripts/install_deps.py --user --include pyinstaller python3 -m pip install -U --user Pyinstaller -r requirements.txt
- name: Prepare - name: Prepare
run: | run: |
python3 devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}" python3 devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python3 devscripts/make_lazy_extractors.py python3 devscripts/make_lazy_extractors.py
- name: Build - name: Build
run: | run: |
python3 -m bundle.pyinstaller python3 pyinst.py
mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy mv dist/yt-dlp_macos dist/yt-dlp_macos_legacy
- name: Verify --update-to - name: Verify --update-to
@ -388,48 +290,36 @@ jobs:
[[ "$version" != "$downgraded_version" ]] [[ "$version" != "$downgraded_version" ]]
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: build-bin-${{ github.job }}
path: | path: |
dist/yt-dlp_macos_legacy dist/yt-dlp_macos_legacy
compression-level: 0
windows: windows:
needs: process
if: inputs.windows if: inputs.windows
runs-on: windows-latest runs-on: windows-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- uses: actions/setup-python@v5 - uses: actions/setup-python@v4
with: # 3.8 is used for Win7 support with: # 3.8 is used for Win7 support
python-version: "3.8" python-version: "3.8"
- name: Install Requirements - name: Install Requirements
run: | # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds run: | # Custom pyinstaller built with https://github.com/yt-dlp/pyinstaller-builds
python devscripts/install_deps.py -o --include build python -m pip install -U pip setuptools wheel py2exe
python devscripts/install_deps.py --include curl-cffi pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt
python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/x86_64/pyinstaller-6.7.0-py3-none-any.whl"
- name: Prepare - name: Prepare
run: | run: |
python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}" python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python devscripts/make_lazy_extractors.py python devscripts/make_lazy_extractors.py
- name: Build - name: Build
run: | run: |
python -m bundle.pyinstaller python setup.py py2exe
python -m bundle.pyinstaller --onedir
Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_real.exe
Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip
- name: Install Requirements (py2exe)
run: |
python devscripts/install_deps.py --include py2exe
- name: Build (py2exe)
run: |
python -m bundle.py2exe
Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_min.exe Move-Item ./dist/yt-dlp.exe ./dist/yt-dlp_min.exe
Move-Item ./dist/yt-dlp_real.exe ./dist/yt-dlp.exe python pyinst.py
python pyinst.py --onedir
Compress-Archive -Path ./dist/yt-dlp/* -DestinationPath ./dist/yt-dlp_win.zip
- name: Verify --update-to - name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION if: vars.UPDATE_TO_VERIFICATION
@ -445,39 +335,35 @@ jobs:
} }
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: build-bin-${{ github.job }}
path: | path: |
dist/yt-dlp.exe dist/yt-dlp.exe
dist/yt-dlp_min.exe dist/yt-dlp_min.exe
dist/yt-dlp_win.zip dist/yt-dlp_win.zip
compression-level: 0
windows32: windows32:
needs: process
if: inputs.windows32 if: inputs.windows32
runs-on: windows-latest runs-on: windows-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- uses: actions/setup-python@v5 - uses: actions/setup-python@v4
with: with: # 3.7 is used for Vista support. See https://github.com/yt-dlp/yt-dlp/issues/390
python-version: "3.8" python-version: "3.7"
architecture: "x86" architecture: "x86"
- name: Install Requirements - name: Install Requirements
run: | run: |
python devscripts/install_deps.py -o --include build python -m pip install -U pip setuptools wheel
python devscripts/install_deps.py pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-5.8.0-py3-none-any.whl" -r requirements.txt
python -m pip install -U "https://yt-dlp.github.io/Pyinstaller-Builds/i686/pyinstaller-6.7.0-py3-none-any.whl"
- name: Prepare - name: Prepare
run: | run: |
python devscripts/update-version.py -c "${{ inputs.channel }}" -r "${{ needs.process.outputs.origin }}" "${{ inputs.version }}" python devscripts/update-version.py -c ${{ inputs.channel }} ${{ inputs.version }}
python devscripts/make_lazy_extractors.py python devscripts/make_lazy_extractors.py
- name: Build - name: Build
run: | run: |
python -m bundle.pyinstaller python pyinst.py
- name: Verify --update-to - name: Verify --update-to
if: vars.UPDATE_TO_VERIFICATION if: vars.UPDATE_TO_VERIFICATION
@ -493,19 +379,15 @@ jobs:
} }
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: build-bin-${{ github.job }}
path: | path: |
dist/yt-dlp_x86.exe dist/yt-dlp_x86.exe
compression-level: 0
meta_files: meta_files:
if: always() && !cancelled() if: inputs.meta_files && always() && !cancelled()
needs: needs:
- process
- unix - unix
- linux_static
- linux_arm - linux_arm
- macos - macos
- macos_legacy - macos_legacy
@ -513,37 +395,19 @@ jobs:
- windows32 - windows32
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/download-artifact@v4 - uses: actions/download-artifact@v3
with:
path: artifact
pattern: build-bin-*
merge-multiple: true
- name: Make SHA2-SUMS files - name: Make SHA2-SUMS files
run: | run: |
cd ./artifact/ cd ./artifact/
# make sure SHA sums are also printed to stdout sha256sum * > ../SHA2-256SUMS
sha256sum -- * | tee ../SHA2-256SUMS sha512sum * > ../SHA2-512SUMS
sha512sum -- * | tee ../SHA2-512SUMS
# also print as permanent annotations to the summary page
while read -r shasum; do
echo "::notice title=${shasum##* }::sha256: ${shasum% *}"
done < ../SHA2-256SUMS
- name: Make Update spec - name: Make Update spec
run: | run: |
cat >> _update_spec << EOF cat >> _update_spec << EOF
# This file is used for regulating self-update # This file is used for regulating self-update
lock 2022.08.18.36 .+ Python 3\.6 lock 2022.08.18.36 .+ Python 3.6
lock 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3\.6
lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
EOF EOF
- name: Sign checksum files - name: Sign checksum files
@ -557,11 +421,8 @@ jobs:
done done
- name: Upload artifacts - name: Upload artifacts
uses: actions/upload-artifact@v4 uses: actions/upload-artifact@v3
with: with:
name: build-${{ github.job }}
path: | path: |
_update_spec
SHA*SUMS* SHA*SUMS*
compression-level: 0 _update_spec
overwrite: true

@ -1,65 +0,0 @@
name: "CodeQL"
on:
push:
branches: [ 'master', 'gh-pages', 'release' ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ 'master' ]
schedule:
- cron: '59 11 * * 5'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
# Use only 'java' to analyze code written in Java, Kotlin or both
# Use only 'javascript' to analyze code written in JavaScript, TypeScript or both
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, Go, Java, or Swift).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"

@ -1,32 +1,8 @@
name: Core Tests name: Core Tests
on: on: [push, pull_request]
push:
paths:
- .github/**
- devscripts/**
- test/**
- yt_dlp/**.py
- '!yt_dlp/extractor/*.py'
- yt_dlp/extractor/__init__.py
- yt_dlp/extractor/common.py
- yt_dlp/extractor/extractors.py
pull_request:
paths:
- .github/**
- devscripts/**
- test/**
- yt_dlp/**.py
- '!yt_dlp/extractor/*.py'
- yt_dlp/extractor/__init__.py
- yt_dlp/extractor/common.py
- yt_dlp/extractor/extractors.py
permissions: permissions:
contents: read contents: read
concurrency:
group: core-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs: jobs:
tests: tests:
name: Core Tests name: Core Tests
@ -36,26 +12,27 @@ jobs:
fail-fast: false fail-fast: false
matrix: matrix:
os: [ubuntu-latest] os: [ubuntu-latest]
# CPython 3.8 is in quick-test # CPython 3.11 is in quick-test
python-version: ['3.9', '3.10', '3.11', '3.12', pypy-3.8, pypy-3.10] python-version: ['3.8', '3.9', '3.10', pypy-3.7, pypy-3.8]
run-tests-ext: [sh]
include: include:
# atleast one of each CPython/PyPy tests must be in windows # atleast one of each CPython/PyPy tests must be in windows
- os: windows-latest - os: windows-latest
python-version: '3.8' python-version: '3.7'
- os: windows-latest run-tests-ext: bat
python-version: '3.12'
- os: windows-latest - os: windows-latest
python-version: pypy-3.9 python-version: pypy-3.9
run-tests-ext: bat
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5 uses: actions/setup-python@v4
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
- name: Install test requirements - name: Install pytest
run: python3 ./devscripts/install_deps.py --include test --include curl-cffi run: pip install pytest
- name: Run tests - name: Run tests
continue-on-error: False continue-on-error: False
run: | run: |
python3 -m yt_dlp -v || true # Print debug head python3 -m yt_dlp -v || true # Print debug head
python3 ./devscripts/run_tests.py core ./devscripts/run_tests.${{ matrix.run-tests-ext }} core

@ -9,16 +9,16 @@ jobs:
if: "contains(github.event.head_commit.message, 'ci run dl')" if: "contains(github.event.head_commit.message, 'ci run dl')"
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- name: Set up Python - name: Set up Python
uses: actions/setup-python@v5 uses: actions/setup-python@v4
with: with:
python-version: 3.9 python-version: 3.9
- name: Install test requirements - name: Install test requirements
run: python3 ./devscripts/install_deps.py --include dev run: pip install pytest
- name: Run tests - name: Run tests
continue-on-error: true continue-on-error: true
run: python3 ./devscripts/run_tests.py download run: ./devscripts/run_tests.sh download
full: full:
name: Full Download Tests name: Full Download Tests
@ -28,21 +28,24 @@ jobs:
fail-fast: true fail-fast: true
matrix: matrix:
os: [ubuntu-latest] os: [ubuntu-latest]
python-version: ['3.10', '3.11', '3.12', pypy-3.8, pypy-3.10] python-version: ['3.7', '3.10', 3.11-dev, pypy-3.7, pypy-3.8]
run-tests-ext: [sh]
include: include:
# atleast one of each CPython/PyPy tests must be in windows # atleast one of each CPython/PyPy tests must be in windows
- os: windows-latest - os: windows-latest
python-version: '3.8' python-version: '3.8'
run-tests-ext: bat
- os: windows-latest - os: windows-latest
python-version: pypy-3.9 python-version: pypy-3.9
run-tests-ext: bat
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }} - name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5 uses: actions/setup-python@v4
with: with:
python-version: ${{ matrix.python-version }} python-version: ${{ matrix.python-version }}
- name: Install test requirements - name: Install pytest
run: python3 ./devscripts/install_deps.py --include dev run: pip install pytest
- name: Run tests - name: Run tests
continue-on-error: true continue-on-error: true
run: python3 ./devscripts/run_tests.py download run: ./devscripts/run_tests.${{ matrix.run-tests-ext }} download

@ -0,0 +1,20 @@
name: Potential Duplicates
on:
issues:
types: [opened, edited]
jobs:
run:
runs-on: ubuntu-latest
steps:
- uses: wow-actions/potential-duplicates@v1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
label: potential-duplicate
state: all
threshold: 0.3
comment: |
This issue is potentially a duplicate of one of the following issues:
{{#issues}}
- #{{ number }} ({{ accuracy }}%)
{{/issues}}

@ -0,0 +1,97 @@
name: Publish
on:
workflow_call:
inputs:
channel:
default: stable
required: true
type: string
version:
required: true
type: string
target_commitish:
required: true
type: string
prerelease:
default: false
required: true
type: boolean
secrets:
ARCHIVE_REPO_TOKEN:
required: false
permissions:
contents: write
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: actions/download-artifact@v3
- uses: actions/setup-python@v4
with:
python-version: "3.10"
- name: Generate release notes
run: |
printf '%s' \
'[![Installation](https://img.shields.io/badge/-Which%20file%20should%20I%20download%3F-white.svg?style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp#installation "Installation instructions") ' \
'[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
'(https://github.com/yt-dlp/yt-dlp/tree/2023.03.04#readme "Documentation") ' \
'[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
'[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
'(https://discord.gg/H5MNcFW63r "Discord") ' \
${{ inputs.channel != 'nightly' && '"[![Nightly](https://img.shields.io/badge/Get%20nightly%20builds-purple.svg?style=for-the-badge)]" \
"(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\")"' || '' }} \
> ./RELEASE_NOTES
printf '\n\n' >> ./RELEASE_NOTES
cat >> ./RELEASE_NOTES << EOF
#### A description of the various files are in the [README](https://github.com/yt-dlp/yt-dlp#release-files)
---
$(python ./devscripts/make_changelog.py -vv --collapsible)
EOF
printf '%s\n\n' '**This is an automated nightly pre-release build**' >> ./NIGHTLY_NOTES
cat ./RELEASE_NOTES >> ./NIGHTLY_NOTES
printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ inputs.target_commitish }}' >> ./ARCHIVE_NOTES
cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES
- name: Archive nightly release
env:
GH_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
GH_REPO: ${{ vars.ARCHIVE_REPO }}
if: |
inputs.channel == 'nightly' && env.GH_TOKEN != '' && env.GH_REPO != ''
run: |
gh release create \
--notes-file ARCHIVE_NOTES \
--title "yt-dlp nightly ${{ inputs.version }}" \
${{ inputs.version }} \
artifact/*
- name: Prune old nightly release
if: inputs.channel == 'nightly' && !vars.ARCHIVE_REPO
env:
GH_TOKEN: ${{ github.token }}
run: |
gh release delete --yes --cleanup-tag "nightly" || true
git tag --delete "nightly" || true
sleep 5 # Enough time to cover deletion race condition
- name: Publish release${{ inputs.channel == 'nightly' && ' (nightly)' || '' }}
env:
GH_TOKEN: ${{ github.token }}
if: (inputs.channel == 'nightly' && !vars.ARCHIVE_REPO) || inputs.channel != 'nightly'
run: |
gh release create \
--notes-file ${{ inputs.channel == 'nightly' && 'NIGHTLY_NOTES' || 'RELEASE_NOTES' }} \
--target ${{ inputs.target_commitish }} \
--title "yt-dlp ${{ inputs.channel == 'nightly' && 'nightly ' || '' }}${{ inputs.version }}" \
${{ inputs.prerelease && '--prerelease' || '' }} \
${{ inputs.channel == 'nightly' && '"nightly"' || inputs.version }} \
artifact/*

@ -9,31 +9,27 @@ jobs:
if: "!contains(github.event.head_commit.message, 'ci skip all')" if: "!contains(github.event.head_commit.message, 'ci skip all')"
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- name: Set up Python 3.8 - name: Set up Python 3.11
uses: actions/setup-python@v5 uses: actions/setup-python@v4
with: with:
python-version: '3.8' python-version: '3.11'
- name: Install test requirements - name: Install test requirements
run: python3 ./devscripts/install_deps.py --include test run: pip install pytest pycryptodomex
- name: Run tests - name: Run tests
run: | run: |
python3 -m yt_dlp -v || true python3 -m yt_dlp -v || true
python3 ./devscripts/run_tests.py core ./devscripts/run_tests.sh core
check: flake8:
name: Code check name: Linter
if: "!contains(github.event.head_commit.message, 'ci skip all')" if: "!contains(github.event.head_commit.message, 'ci skip all')"
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
- uses: actions/setup-python@v5 - uses: actions/setup-python@v4
with: - name: Install flake8
python-version: '3.8' run: pip install flake8
- name: Install dev dependencies
run: python3 ./devscripts/install_deps.py -o --include static-analysis
- name: Make lazy extractors - name: Make lazy extractors
run: python3 ./devscripts/make_lazy_extractors.py run: python devscripts/make_lazy_extractors.py
- name: Run ruff - name: Run flake8
run: ruff check --output-format github . run: flake8 .
- name: Run autopep8
run: autopep8 --diff .

@ -1,30 +0,0 @@
name: Release (master)
on:
push:
branches:
- master
paths:
- "yt_dlp/**.py"
- "!yt_dlp/version.py"
- "bundle/*.py"
- "pyproject.toml"
- "Makefile"
- ".github/workflows/build.yml"
concurrency:
group: release-master
permissions:
contents: read
jobs:
release:
if: vars.BUILD_MASTER != ''
uses: ./.github/workflows/release.yml
with:
prerelease: true
source: master
permissions:
contents: write
packages: write # For package cache
actions: write # For cleaning up cache
id-token: write # mandatory for trusted publishing
secrets: inherit

@ -1,43 +1,52 @@
name: Release (nightly) name: Release (nightly)
on: on:
schedule: push:
- cron: '23 23 * * *' branches:
- master
paths:
- "yt_dlp/**.py"
- "!yt_dlp/version.py"
concurrency:
group: release-nightly
cancel-in-progress: true
permissions: permissions:
contents: read contents: read
jobs: jobs:
check_nightly: prepare:
if: vars.BUILD_NIGHTLY != '' if: vars.BUILD_NIGHTLY != ''
runs-on: ubuntu-latest runs-on: ubuntu-latest
outputs: outputs:
commit: ${{ steps.check_for_new_commits.outputs.commit }} version: ${{ steps.get_version.outputs.version }}
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
with: - name: Get version
fetch-depth: 0 id: get_version
- name: Check for new commits
id: check_for_new_commits
run: | run: |
relevant_files=( python devscripts/update-version.py "$(date -u +"%H%M%S")" | grep -Po "version=\d+(\.\d+){3}" >> "$GITHUB_OUTPUT"
"yt_dlp/*.py"
':!yt_dlp/version.py'
"bundle/*.py"
"pyproject.toml"
"Makefile"
".github/workflows/build.yml"
)
echo "commit=$(git log --format=%H -1 --since="24 hours ago" -- "${relevant_files[@]}")" | tee "$GITHUB_OUTPUT"
release: build:
needs: [check_nightly] needs: prepare
if: ${{ needs.check_nightly.outputs.commit }} uses: ./.github/workflows/build.yml
uses: ./.github/workflows/release.yml
with: with:
prerelease: true version: ${{ needs.prepare.outputs.version }}
source: nightly channel: nightly
permissions:
contents: read
packages: write # For package cache
secrets:
GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
publish:
needs: [prepare, build]
uses: ./.github/workflows/publish.yml
secrets:
ARCHIVE_REPO_TOKEN: ${{ secrets.ARCHIVE_REPO_TOKEN }}
permissions: permissions:
contents: write contents: write
packages: write # For package cache with:
actions: write # For cleaning up cache channel: nightly
id-token: write # mandatory for trusted publishing prerelease: true
secrets: inherit version: ${{ needs.prepare.outputs.version }}
target_commitish: ${{ github.sha }}

@ -1,45 +1,14 @@
name: Release name: Release
on: on:
workflow_call:
inputs:
prerelease:
required: false
default: true
type: boolean
source:
required: false
default: ''
type: string
target:
required: false
default: ''
type: string
version:
required: false
default: ''
type: string
workflow_dispatch: workflow_dispatch:
inputs: inputs:
source: version:
description: | description: Version tag (YYYY.MM.DD[.REV])
SOURCE of this release's updates:
channel, repo, tag, or channel/repo@tag
(default: <current_repo>)
required: false
default: ''
type: string
target:
description: |
TARGET to publish this release to:
channel, tag, or channel@tag
(default: <source> if writable else <current_repo>[@source_tag])
required: false required: false
default: '' default: ''
type: string type: string
version: channel:
description: | description: Update channel (stable/nightly/...)
VERSION: yyyy.mm.dd[.rev] or rev
(default: auto-generated)
required: false required: false
default: '' default: ''
type: string type: string
@ -57,153 +26,51 @@ jobs:
contents: write contents: write
runs-on: ubuntu-latest runs-on: ubuntu-latest
outputs: outputs:
channel: ${{ steps.setup_variables.outputs.channel }} channel: ${{ steps.set_channel.outputs.channel }}
version: ${{ steps.setup_variables.outputs.version }} version: ${{ steps.update_version.outputs.version }}
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
target_repo_token: ${{ steps.setup_variables.outputs.target_repo_token }}
target_tag: ${{ steps.setup_variables.outputs.target_tag }}
pypi_project: ${{ steps.setup_variables.outputs.pypi_project }}
pypi_suffix: ${{ steps.setup_variables.outputs.pypi_suffix }}
head_sha: ${{ steps.get_target.outputs.head_sha }} head_sha: ${{ steps.get_target.outputs.head_sha }}
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
with: with:
fetch-depth: 0 fetch-depth: 0
- uses: actions/setup-python@v5 - uses: actions/setup-python@v4
with: with:
python-version: "3.10" python-version: "3.10"
- name: Process inputs - name: Set channel
id: process_inputs id: set_channel
run: | run: |
cat << EOF CHANNEL="${{ github.repository == 'yt-dlp/yt-dlp' && 'stable' || github.repository }}"
::group::Inputs echo "channel=${{ inputs.channel || '$CHANNEL' }}" > "$GITHUB_OUTPUT"
prerelease=${{ inputs.prerelease }}
source=${{ inputs.source }}
target=${{ inputs.target }}
version=${{ inputs.version }}
::endgroup::
EOF
IFS='@' read -r source_repo source_tag <<<"${{ inputs.source }}"
IFS='@' read -r target_repo target_tag <<<"${{ inputs.target }}"
cat << EOF >> "$GITHUB_OUTPUT"
source_repo=${source_repo}
source_tag=${source_tag}
target_repo=${target_repo}
target_tag=${target_tag}
EOF
- name: Setup variables - name: Update version
id: setup_variables id: update_version
env:
source_repo: ${{ steps.process_inputs.outputs.source_repo }}
source_tag: ${{ steps.process_inputs.outputs.source_tag }}
target_repo: ${{ steps.process_inputs.outputs.target_repo }}
target_tag: ${{ steps.process_inputs.outputs.target_tag }}
run: | run: |
# unholy bash monstrosity (sincere apologies) REVISION="${{ vars.PUSH_VERSION_COMMIT == '' && '$(date -u +"%H%M%S")' || '' }}"
fallback_token () { REVISION="${{ inputs.prerelease && '$(date -u +"%H%M%S")' || '$REVISION' }}"
if ${{ !secrets.ARCHIVE_REPO_TOKEN }}; then python devscripts/update-version.py ${{ inputs.version || '$REVISION' }} | \
echo "::error::Repository access secret ${target_repo_token^^} not found" grep -Po "version=\d+\.\d+\.\d+(\.\d+)?" >> "$GITHUB_OUTPUT"
exit 1
fi
target_repo_token=ARCHIVE_REPO_TOKEN
return 0
}
source_is_channel=0
[[ "${source_repo}" == 'stable' ]] && source_repo='yt-dlp/yt-dlp'
if [[ -z "${source_repo}" ]]; then
source_repo='${{ github.repository }}'
elif [[ '${{ vars[format('{0}_archive_repo', env.source_repo)] }}' ]]; then
source_is_channel=1
source_channel='${{ vars[format('{0}_archive_repo', env.source_repo)] }}'
elif [[ -z "${source_tag}" && "${source_repo}" != */* ]]; then
source_tag="${source_repo}"
source_repo='${{ github.repository }}'
fi
resolved_source="${source_repo}"
if [[ "${source_tag}" ]]; then
resolved_source="${resolved_source}@${source_tag}"
elif [[ "${source_repo}" == 'yt-dlp/yt-dlp' ]]; then
resolved_source='stable'
fi
revision="${{ (inputs.prerelease || !vars.PUSH_VERSION_COMMIT) && '$(date -u +"%H%M%S")' || '' }}"
version="$(
python devscripts/update-version.py \
-c "${resolved_source}" -r "${{ github.repository }}" ${{ inputs.version || '$revision' }} | \
grep -Po "version=\K\d+\.\d+\.\d+(\.\d+)?")"
if [[ "${target_repo}" ]]; then
if [[ -z "${target_tag}" ]]; then
if [[ '${{ vars[format('{0}_archive_repo', env.target_repo)] }}' ]]; then
target_tag="${source_tag:-${version}}"
else
target_tag="${target_repo}"
target_repo='${{ github.repository }}'
fi
fi
if [[ "${target_repo}" != '${{ github.repository}}' ]]; then
target_repo='${{ vars[format('{0}_archive_repo', env.target_repo)] }}'
target_repo_token='${{ env.target_repo }}_archive_repo_token'
${{ !!secrets[format('{0}_archive_repo_token', env.target_repo)] }} || fallback_token
pypi_project='${{ vars[format('{0}_pypi_project', env.target_repo)] }}'
pypi_suffix='${{ vars[format('{0}_pypi_suffix', env.target_repo)] }}'
fi
else
target_tag="${source_tag:-${version}}"
if ((source_is_channel)); then
target_repo="${source_channel}"
target_repo_token='${{ env.source_repo }}_archive_repo_token'
${{ !!secrets[format('{0}_archive_repo_token', env.source_repo)] }} || fallback_token
pypi_project='${{ vars[format('{0}_pypi_project', env.source_repo)] }}'
pypi_suffix='${{ vars[format('{0}_pypi_suffix', env.source_repo)] }}'
else
target_repo='${{ github.repository }}'
fi
fi
if [[ "${target_repo}" == '${{ github.repository }}' ]] && ${{ !inputs.prerelease }}; then
pypi_project='${{ vars.PYPI_PROJECT }}'
fi
echo "::group::Output variables"
cat << EOF | tee -a "$GITHUB_OUTPUT"
channel=${resolved_source}
version=${version}
target_repo=${target_repo}
target_repo_token=${target_repo_token}
target_tag=${target_tag}
pypi_project=${pypi_project}
pypi_suffix=${pypi_suffix}
EOF
echo "::endgroup::"
- name: Update documentation - name: Update documentation
env:
version: ${{ steps.setup_variables.outputs.version }}
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
if: |
!inputs.prerelease && env.target_repo == github.repository
run: | run: |
python devscripts/update_changelog.py -vv
make doc make doc
sed '/### /Q' Changelog.md >> ./CHANGELOG
echo '### ${{ steps.update_version.outputs.version }}' >> ./CHANGELOG
python ./devscripts/make_changelog.py -vv -c >> ./CHANGELOG
echo >> ./CHANGELOG
grep -Poz '(?s)### \d+\.\d+\.\d+.+' 'Changelog.md' | head -n -1 >> ./CHANGELOG
cat ./CHANGELOG > Changelog.md
- name: Push to release - name: Push to release
id: push_release id: push_release
env: if: ${{ !inputs.prerelease }}
version: ${{ steps.setup_variables.outputs.version }}
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
if: |
!inputs.prerelease && env.target_repo == github.repository
run: | run: |
git config --global user.name "github-actions[bot]" git config --global user.name github-actions
git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com" git config --global user.email github-actions@example.com
git add -u git add -u
git commit -m "Release ${{ env.version }}" \ git commit -m "Release ${{ steps.update_version.outputs.version }}" \
-m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl" -m "Created by: ${{ github.event.sender.login }}" -m ":ci skip all :ci run dl"
git push origin --force ${{ github.event.ref }}:release git push origin --force ${{ github.event.ref }}:release
@ -213,10 +80,7 @@ jobs:
echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT" echo "head_sha=$(git rev-parse HEAD)" >> "$GITHUB_OUTPUT"
- name: Update master - name: Update master
env: if: vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease
target_repo: ${{ steps.setup_variables.outputs.target_repo }}
if: |
vars.PUSH_VERSION_COMMIT != '' && !inputs.prerelease && env.target_repo == github.repository
run: git push origin ${{ github.event.ref }} run: git push origin ${{ github.event.ref }}
build: build:
@ -225,160 +89,75 @@ jobs:
with: with:
version: ${{ needs.prepare.outputs.version }} version: ${{ needs.prepare.outputs.version }}
channel: ${{ needs.prepare.outputs.channel }} channel: ${{ needs.prepare.outputs.channel }}
origin: ${{ needs.prepare.outputs.target_repo }}
permissions: permissions:
contents: read contents: read
packages: write # For package cache packages: write # For package cache
actions: write # For cleaning up cache
secrets: secrets:
GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }} GPG_SIGNING_KEY: ${{ secrets.GPG_SIGNING_KEY }}
publish_pypi: publish_pypi_homebrew:
needs: [prepare, build] needs: [prepare, build]
if: ${{ needs.prepare.outputs.pypi_project }}
runs-on: ubuntu-latest runs-on: ubuntu-latest
permissions:
id-token: write # mandatory for trusted publishing
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
with: - uses: actions/setup-python@v4
fetch-depth: 0
- uses: actions/setup-python@v5
with: with:
python-version: "3.10" python-version: "3.10"
- name: Install Requirements - name: Install Requirements
run: | run: |
sudo apt -y install pandoc man sudo apt-get -y install pandoc man
python devscripts/install_deps.py -o --include build python -m pip install -U pip setuptools wheel twine
python -m pip install -U -r requirements.txt
- name: Prepare - name: Prepare
env:
version: ${{ needs.prepare.outputs.version }}
suffix: ${{ needs.prepare.outputs.pypi_suffix }}
channel: ${{ needs.prepare.outputs.channel }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
pypi_project: ${{ needs.prepare.outputs.pypi_project }}
run: | run: |
python devscripts/update-version.py -c "${{ env.channel }}" -r "${{ env.target_repo }}" -s "${{ env.suffix }}" "${{ env.version }}" python devscripts/update-version.py ${{ needs.prepare.outputs.version }}
python devscripts/update_changelog.py -vv
python devscripts/make_lazy_extractors.py python devscripts/make_lazy_extractors.py
sed -i -E '0,/(name = ")[^"]+(")/s//\1${{ env.pypi_project }}\2/' pyproject.toml
- name: Build - name: Build and publish on PyPI
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
if: env.TWINE_PASSWORD != '' && !inputs.prerelease
run: | run: |
rm -rf dist/* rm -rf dist/*
make pypi-files make pypi-files
printf '%s\n\n' \
'Official repository: <https://github.com/yt-dlp/yt-dlp>' \
'**PS**: Some links in this document will not work since this is a copy of the README.md from Github' > ./README.md.new
cat ./README.md >> ./README.md.new && mv -f ./README.md.new ./README.md
python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update" python devscripts/set-variant.py pip -M "You installed yt-dlp with pip or using the wheel from PyPi; Use that to update"
make clean-cache python setup.py sdist bdist_wheel
python -m build --no-isolation . twine upload dist/*
- name: Publish to PyPI - name: Checkout Homebrew repository
uses: pypa/gh-action-pypi-publish@release/v1 env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
uses: actions/checkout@v3
with: with:
verbose: true repository: yt-dlp/homebrew-taps
path: taps
ssh-key: ${{ secrets.BREW_TOKEN }}
- name: Update Homebrew Formulae
env:
BREW_TOKEN: ${{ secrets.BREW_TOKEN }}
PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
if: env.BREW_TOKEN != '' && env.PYPI_TOKEN != '' && !inputs.prerelease
run: |
python devscripts/update-formulae.py taps/Formula/yt-dlp.rb "${{ needs.prepare.outputs.version }}"
git -C taps/ config user.name github-actions
git -C taps/ config user.email github-actions@example.com
git -C taps/ commit -am 'yt-dlp: ${{ needs.prepare.outputs.version }}'
git -C taps/ push
publish: publish:
needs: [prepare, build] needs: [prepare, build]
uses: ./.github/workflows/publish.yml
permissions: permissions:
contents: write contents: write
runs-on: ubuntu-latest with:
channel: ${{ needs.prepare.outputs.channel }}
steps: prerelease: ${{ inputs.prerelease }}
- uses: actions/checkout@v4 version: ${{ needs.prepare.outputs.version }}
with: target_commitish: ${{ needs.prepare.outputs.head_sha }}
fetch-depth: 0
- uses: actions/download-artifact@v4
with:
path: artifact
pattern: build-*
merge-multiple: true
- uses: actions/setup-python@v5
with:
python-version: "3.10"
- name: Generate release notes
env:
head_sha: ${{ needs.prepare.outputs.head_sha }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
target_tag: ${{ needs.prepare.outputs.target_tag }}
run: |
printf '%s' \
'[![Installation](https://img.shields.io/badge/-Which%20file%20to%20download%3F-white.svg?style=for-the-badge)]' \
'(https://github.com/${{ github.repository }}#installation "Installation instructions") ' \
'[![Discord](https://img.shields.io/discord/807245652072857610?color=blue&labelColor=555555&label=&logo=discord&style=for-the-badge)]' \
'(https://discord.gg/H5MNcFW63r "Discord") ' \
'[![Donate](https://img.shields.io/badge/_-Donate-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)]' \
'(https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators "Donate") ' \
'[![Documentation](https://img.shields.io/badge/-Docs-brightgreen.svg?style=for-the-badge&logo=GitBook&labelColor=555555)]' \
'(https://github.com/${{ github.repository }}' \
'${{ env.target_repo == github.repository && format('/tree/{0}', env.target_tag) || '' }}#readme "Documentation") ' \
${{ env.target_repo == 'yt-dlp/yt-dlp' && '\
"[![Nightly](https://img.shields.io/badge/Nightly%20builds-purple.svg?style=for-the-badge)]" \
"(https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest \"Nightly builds\") " \
"[![Master](https://img.shields.io/badge/Master%20builds-lightblue.svg?style=for-the-badge)]" \
"(https://github.com/yt-dlp/yt-dlp-master-builds/releases/latest \"Master builds\")"' || '' }} > ./RELEASE_NOTES
printf '\n\n' >> ./RELEASE_NOTES
cat >> ./RELEASE_NOTES << EOF
#### A description of the various files is in the [README](https://github.com/${{ github.repository }}#release-files)
---
$(python ./devscripts/make_changelog.py -vv --collapsible)
EOF
printf '%s\n\n' '**This is a pre-release build**' >> ./PRERELEASE_NOTES
cat ./RELEASE_NOTES >> ./PRERELEASE_NOTES
printf '%s\n\n' 'Generated from: https://github.com/${{ github.repository }}/commit/${{ env.head_sha }}' >> ./ARCHIVE_NOTES
cat ./RELEASE_NOTES >> ./ARCHIVE_NOTES
- name: Publish to archive repo
env:
GH_TOKEN: ${{ secrets[needs.prepare.outputs.target_repo_token] }}
GH_REPO: ${{ needs.prepare.outputs.target_repo }}
version: ${{ needs.prepare.outputs.version }}
channel: ${{ needs.prepare.outputs.channel }}
if: |
inputs.prerelease && env.GH_TOKEN != '' && env.GH_REPO != '' && env.GH_REPO != github.repository
run: |
title="${{ startswith(env.GH_REPO, 'yt-dlp/') && 'yt-dlp ' || '' }}${{ env.channel }}"
gh release create \
--notes-file ARCHIVE_NOTES \
--title "${title} ${{ env.version }}" \
${{ env.version }} \
artifact/*
- name: Prune old release
env:
GH_TOKEN: ${{ github.token }}
version: ${{ needs.prepare.outputs.version }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
target_tag: ${{ needs.prepare.outputs.target_tag }}
if: |
env.target_repo == github.repository && env.target_tag != env.version
run: |
gh release delete --yes --cleanup-tag "${{ env.target_tag }}" || true
git tag --delete "${{ env.target_tag }}" || true
sleep 5 # Enough time to cover deletion race condition
- name: Publish release
env:
GH_TOKEN: ${{ github.token }}
version: ${{ needs.prepare.outputs.version }}
target_repo: ${{ needs.prepare.outputs.target_repo }}
target_tag: ${{ needs.prepare.outputs.target_tag }}
head_sha: ${{ needs.prepare.outputs.head_sha }}
if: |
env.target_repo == github.repository
run: |
title="${{ github.repository == 'yt-dlp/yt-dlp' && 'yt-dlp ' || '' }}"
title+="${{ env.target_tag != env.version && format('{0} ', env.target_tag) || '' }}"
gh release create \
--notes-file ${{ inputs.prerelease && 'PRERELEASE_NOTES' || 'RELEASE_NOTES' }} \
--target ${{ env.head_sha }} \
--title "${title}${{ env.version }}" \
${{ inputs.prerelease && '--prerelease' || '' }} \
${{ env.target_tag }} \
artifact/*

.gitignore

@ -33,7 +33,6 @@ cookies
*.gif *.gif
*.jpeg *.jpeg
*.jpg *.jpg
*.lrc
*.m4a *.m4a
*.m4v *.m4v
*.mhtml *.mhtml
@ -41,7 +40,6 @@ cookies
*.mov *.mov
*.mp3 *.mp3
*.mp4 *.mp4
*.mpg
*.mpga *.mpga
*.oga *.oga
*.ogg *.ogg
@ -49,7 +47,6 @@ cookies
*.png *.png
*.sbv *.sbv
*.srt *.srt
*.ssa
*.swf *.swf
*.swp *.swp
*.tt *.tt
@ -67,7 +64,7 @@ cookies
# Python # Python
*.pyc *.pyc
*.pyo *.pyo
.*_cache .pytest_cache
wine-py2exe/ wine-py2exe/
py2exe.log py2exe.log
build/ build/

@ -1,14 +0,0 @@
repos:
- repo: local
hooks:
- id: linter
name: Apply linter fixes
entry: ruff check --fix .
language: system
types: [python]
require_serial: true
- id: format
name: Apply formatting fixes
entry: autopep8 --in-place .
language: system
types: [python]

@ -1,9 +0,0 @@
repos:
- repo: local
hooks:
- id: fix
name: Apply code fixes
entry: hatch fmt
language: system
types: [python]
require_serial: true

@ -79,7 +79,7 @@ Before reporting any issue, type `yt-dlp -U`. This should report that you're up-
### Is the issue already documented? ### Is the issue already documented?
Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, subscribe to it to be notified when there is any progress. Unless you have something useful to add to the conversation, please refrain from commenting. Make sure that someone has not already opened the issue you're trying to open. Search at the top of the window or browse the [GitHub Issues](https://github.com/yt-dlp/yt-dlp/search?type=Issues) of this repository. If there is an issue, subscribe to it to be notified when there is any progress. Unless you have something useful to add to the conversation, please refrain from commenting.
Additionally, it is also helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here. Additionally, it is also helpful to see if the issue has already been documented in the [youtube-dl issue tracker](https://github.com/ytdl-org/youtube-dl/issues). If similar issues have already been reported in youtube-dl (but not in our issue tracker), links to them can be included in your issue report here.
@ -134,59 +134,27 @@ We follow [youtube-dl's policy](https://github.com/ytdl-org/youtube-dl#can-you-a
# DEVELOPER INSTRUCTIONS # DEVELOPER INSTRUCTIONS
Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases), get them via [the other installation methods](README.md#installation) or directly run it using `python -m yt_dlp`. Most users do not need to build yt-dlp and can [download the builds](https://github.com/yt-dlp/yt-dlp/releases) or get them via [the other installation methods](README.md#installation).
`yt-dlp` uses [`hatch`](<https://hatch.pypa.io>) as a project management tool. To run yt-dlp as a developer, you don't need to build anything either. Simply execute
You can easily install it using [`pipx`](<https://pipx.pypa.io>) via `pipx install hatch`, or else via `pip` or your package manager of choice. Make sure you are using at least version `1.10.0`, otherwise some functionality might not work as expected.
If you plan on contributing to `yt-dlp`, best practice is to start by running the following command: python -m yt_dlp
```shell To run the test, simply invoke your favorite test runner, or execute a test file directly; any of the following work:
$ hatch run setup
```
The above command will install a `pre-commit` hook so that required checks/fixes (linting, formatting) will run automatically before each commit. If any code needs to be linted or formatted, then the commit will be blocked and the necessary changes will be made; you should review all edits and re-commit the fixed version.
After this you can use `hatch shell` to enable a virtual environment that has `yt-dlp` and its development dependencies installed. python -m unittest discover
python test/test_download.py
In addition, the following script commands can be used to run simple tasks such as linting or testing (without having to run `hatch shell` first): nosetests
* `hatch fmt`: Automatically fix linter violations and apply required code formatting changes pytest
* See `hatch fmt --help` for more info
* `hatch test`: Run extractor or core tests
* See `hatch test --help` for more info
See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases. See item 6 of [new extractor tutorial](#adding-support-for-a-new-site) for how to run extractor specific test cases.
While it is strongly recommended to use `hatch` for yt-dlp development, if you are unable to do so, alternatively you can manually create a virtual environment and use the following commands:
```shell
# To only install development dependencies:
$ python -m devscripts.install_deps --include dev
# Or, for an editable install plus dev dependencies:
$ python -m pip install -e ".[default,dev]"
# To setup the pre-commit hook:
$ pre-commit install
# To be used in place of `hatch test`:
$ python -m devscripts.run_tests
# To be used in place of `hatch fmt`:
$ ruff check --fix .
$ autopep8 --in-place .
# To only check code instead of applying fixes:
$ ruff check .
$ autopep8 --diff .
```
If you want to create a build of yt-dlp yourself, you can follow the instructions [here](README.md#compile). If you want to create a build of yt-dlp yourself, you can follow the instructions [here](README.md#compile).
## Adding new feature or making overarching changes ## Adding new feature or making overarching changes
Before you start writing code for implementing a new feature, open an issue explaining your feature request and at least one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing with us first, do not be surprised when we ask for large changes to the code, or even reject it outright. Before you start writing code for implementing a new feature, open an issue explaining your feature request and at least one use case. This allows the maintainers to decide whether such a feature is desired for the project in the first place, and will provide an avenue to discuss some implementation details. If you open a pull request for a new feature without discussing with us first, do not be surprised when we ask for large changes to the code, or even reject it outright.
The same applies for changes to the documentation, code style, or overarching changes to the architecture The same applies for changes to the documentation, code style, or overarching changes to the architecture
@ -200,16 +168,12 @@ After you have ensured this site is distributing its content legally, you can fo
1. [Fork this repository](https://github.com/yt-dlp/yt-dlp/fork) 1. [Fork this repository](https://github.com/yt-dlp/yt-dlp/fork)
1. Check out the source code with: 1. Check out the source code with:
```shell git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git
$ git clone git@github.com:YOUR_GITHUB_USERNAME/yt-dlp.git
```
1. Start a new git branch with 1. Start a new git branch with
```shell cd yt-dlp
$ cd yt-dlp git checkout -b yourextractor
$ git checkout -b yourextractor
```
1. Start with this simple template and save it to `yt_dlp/extractor/yourextractor.py`: 1. Start with this simple template and save it to `yt_dlp/extractor/yourextractor.py`:
@ -223,21 +187,15 @@ After you have ensured this site is distributing its content legally, you can fo
'url': 'https://yourextractor.com/watch/42', 'url': 'https://yourextractor.com/watch/42',
'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)', 'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
'info_dict': { 'info_dict': {
# For videos, only the 'id' and 'ext' fields are required to RUN the test:
'id': '42', 'id': '42',
'ext': 'mp4', 'ext': 'mp4',
# Then if the test run fails, it will output the missing/incorrect fields. 'title': 'Video title goes here',
# Properties can be added as: 'thumbnail': r're:^https?://.*\.jpg$',
# * A value, e.g. # TODO more properties, either as:
# 'title': 'Video title goes here', # * A value
# * MD5 checksum; start the string with 'md5:', e.g. # * MD5 checksum; start the string with md5:
# 'description': 'md5:098f6bcd4621d373cade4e832627b4f6', # * A regular expression; start the string with re:
# * A regular expression; start the string with 're:', e.g. # * Any Python type, e.g. int or float
# 'thumbnail': r're:^https?://.*\.jpg$',
# * A count of elements in a list; start the string with 'count:', e.g.
# 'tags': 'count:10',
# * Any Python type, e.g.
# 'view_count': int,
} }
}] }]
@ -256,33 +214,27 @@ After you have ensured this site is distributing its content legally, you can fo
# TODO more properties (see yt_dlp/extractor/common.py) # TODO more properties (see yt_dlp/extractor/common.py)
} }
``` ```
1. Add an import in [`yt_dlp/extractor/_extractors.py`](yt_dlp/extractor/_extractors.py). Note that the class name must end with `IE`. Also note that when adding a parenthesized import group, the last import in the group must have a trailing comma in order for this formatting to be respected by our code formatter (see the sketch after this list). 1. Add an import in [`yt_dlp/extractor/_extractors.py`](yt_dlp/extractor/_extractors.py). Note that the class name must end with `IE`.
1. Run `hatch test YourExtractor`. This *may fail* at first, but you can continually re-run it until you're done. Upon failure, it will output the missing fields and/or correct values which you can copy. If you decide to add more than one test, the tests will then be named `YourExtractor`, `YourExtractor_1`, `YourExtractor_2`, etc. Note that tests with an `only_matching` key in the test's dict are not included in the count. You can also run all the tests in one go with `YourExtractor_all` 1. Run `python test/test_download.py TestDownload.test_YourExtractor` (note that `YourExtractor` doesn't end with `IE`). This *should fail* at first, but you can continually re-run it until you're done. If you decide to add more than one test, the tests will then be named `TestDownload.test_YourExtractor`, `TestDownload.test_YourExtractor_1`, `TestDownload.test_YourExtractor_2`, etc. Note that tests with `only_matching` key in test's dict are not counted in. You can also run all the tests in one go with `TestDownload.test_YourExtractor_all`
1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running. 1. Make sure you have at least one test for your extractor. Even if all videos covered by the extractor are expected to be inaccessible for automated testing, tests should still be added with a `skip` parameter indicating why the particular test is disabled from running.
1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L119-L440). Add tests and code for as many as you want. 1. Have a look at [`yt_dlp/extractor/common.py`](yt_dlp/extractor/common.py) for possible helper methods and a [detailed description of what your extractor should and may return](yt_dlp/extractor/common.py#L91-L426). Add tests and code for as many as you want.
1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions), passes [ruff](https://docs.astral.sh/ruff/tutorial/#getting-started) code checks and is properly formatted: 1. Make sure your code follows [yt-dlp coding conventions](#yt-dlp-coding-conventions) and check the code with [flake8](https://flake8.pycqa.org/en/latest/index.html#quickstart):
```shell
$ hatch fmt --check
```
You can use `hatch fmt` to automatically fix problems. Rules that the linter/formatter enforces should not be disabled with `# noqa` unless a maintainer requests it. The only exception allowed is for old/printf-style string formatting in GraphQL query templates (use `# noqa: UP031`). $ flake8 yt_dlp/extractor/yourextractor.py
1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.8 and above. Backward compatibility is not required for even older versions of Python. 1. Make sure your code works under all [Python](https://www.python.org/) versions supported by yt-dlp, namely CPython and PyPy for Python 3.7 and above. Backward compatibility is not required for even older versions of Python.
1. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files, [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this: 1. When the tests pass, [add](https://git-scm.com/docs/git-add) the new files, [commit](https://git-scm.com/docs/git-commit) them and [push](https://git-scm.com/docs/git-push) the result, like this:
```shell $ git add yt_dlp/extractor/_extractors.py
$ git add yt_dlp/extractor/_extractors.py $ git add yt_dlp/extractor/yourextractor.py
$ git add yt_dlp/extractor/yourextractor.py $ git commit -m '[yourextractor] Add extractor'
$ git commit -m '[yourextractor] Add extractor' $ git push origin yourextractor
$ git push origin yourextractor
```
1. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it. 1. Finally, [create a pull request](https://help.github.com/articles/creating-a-pull-request). We'll then review and merge it.
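As a minimal sketch of the parenthesized import group mentioned above (the extractor class names are hypothetical):

```python
# yt_dlp/extractor/_extractors.py (sketch with hypothetical names)
from .yourextractor import (
    YourExtractorIE,
    YourExtractorPlaylistIE,  # trailing comma: required so the formatter keeps the group expanded
)
```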
In any case, thank you very much for your contributions! In any case, thank you very much for your contributions!
**Tip:** To test extractors that require login information, create a file `test/local_parameters.json` and add `"usenetrc": true` or your `username`&`password` or `cookiefile`/`cookiesfrombrowser` in it: **Tip:** To test extractors that require login information, create a file `test/local_parameters.json` and add `"usenetrc": true` or your username and password in it:
```json ```json
{ {
"username": "your user name", "username": "your user name",
@ -299,7 +251,7 @@ Extractors are very fragile by nature since they depend on the layout of the sou
### Mandatory and optional metafields ### Mandatory and optional metafields
For extraction to work yt-dlp relies on metadata your extractor extracts and provides to yt-dlp expressed by an [information dictionary](yt_dlp/extractor/common.py#L119-L440) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by yt-dlp: For extraction to work yt-dlp relies on metadata your extractor extracts and provides to yt-dlp expressed by an [information dictionary](yt_dlp/extractor/common.py#L91-L426) or simply *info dict*. Only the following meta fields in the *info dict* are considered mandatory for a successful extraction process by yt-dlp:
- `id` (media identifier) - `id` (media identifier)
- `title` (media title) - `title` (media title)
@ -309,7 +261,7 @@ The aforementioned metafields are the critical data that the extraction does not
For pornographic sites, appropriate `age_limit` must also be returned. For pornographic sites, appropriate `age_limit` must also be returned.
The extractor is allowed to return the info dict without url or formats in some special cases if it allows the user to extract useful information with `--ignore-no-formats-error` - e.g. when the video is a live stream that has not started yet. The extractor is allowed to return the info dict without url or formats in some special cases if it allows the user to extract useful information with `--ignore-no-formats-error` - e.g. when the video is a live stream that has not started yet.
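As an illustration, a minimal hypothetical sketch of this special case (the extractor name, URL pattern, and site behavior are made up; `raise_no_formats` is the helper used to surface the condition):

```python
from yt_dlp.extractor.common import InfoExtractor


class UpcomingStreamIE(InfoExtractor):  # hypothetical extractor
    _VALID_URL = r'https?://streams\.example\.com/watch/(?P<id>\d+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        formats = []  # assume no manifest is published before the stream starts
        if not formats:
            # Raises a clean error by default; with --ignore-no-formats-error,
            # extraction continues and the metadata below is still returned
            self.raise_no_formats('This live stream has not started yet', expected=True)
        return {
            'id': video_id,
            'title': self._og_search_title(webpage),
            'live_status': 'is_upcoming',
            'formats': formats,
        }
```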
[Any field](yt_dlp/extractor/common.py#L219-L426) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** to situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields. [Any field](yt_dlp/extractor/common.py#L219-L426) apart from the aforementioned ones is considered **optional**. That means that extraction should be **tolerant** to situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and **future-proof** in order not to break the extraction of general purpose mandatory fields.
@ -744,7 +696,7 @@ formats = [
### Use convenience conversion and parsing functions ### Use convenience conversion and parsing functions
Wrap all extracted numeric data into safe functions from [`yt_dlp/utils/`](yt_dlp/utils/): `int_or_none`, `float_or_none`. Use them for string to number conversions as well. Wrap all extracted numeric data into safe functions from [`yt_dlp/utils.py`](yt_dlp/utils.py): `int_or_none`, `float_or_none`. Use them for string to number conversions as well.
Use `url_or_none` for safe URL processing. Use `url_or_none` for safe URL processing.
@ -752,7 +704,7 @@ Use `traverse_obj` and `try_call` (supersedes `dict_get` and `try_get`) for safe
Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta fields extraction, `parse_resolution`, `parse_duration` for `duration` extraction, `parse_age_limit` for `age_limit` extraction. Use `unified_strdate` for uniform `upload_date` or any `YYYYMMDD` meta field extraction, `unified_timestamp` for uniform `timestamp` extraction, `parse_filesize` for `filesize` extraction, `parse_count` for count meta fields extraction, `parse_resolution`, `parse_duration` for `duration` extraction, `parse_age_limit` for `age_limit` extraction.
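A quick sketch of how a few of these helpers behave, assuming current `yt_dlp.utils` semantics (the sample inputs are made up):

```python
from yt_dlp.utils import (
    int_or_none,
    parse_count,
    parse_duration,
    unified_strdate,
    url_or_none,
)

int_or_none('1056')                   # -> 1056; returns None instead of raising on junk
parse_count('1.2M')                   # -> 1200000
parse_duration('1:23:45')             # -> 5025.0 (seconds)
unified_strdate('December 31, 2023')  # -> '20231231' (YYYYMMDD)
url_or_none('not a url')              # -> None rather than a bogus URL
```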
Explore [`yt_dlp/utils/`](yt_dlp/utils/) for more useful convenience functions. Explore [`yt_dlp/utils.py`](yt_dlp/utils.py) for more useful convenience functions.
#### Examples #### Examples

@ -2,6 +2,7 @@ pukkandan (owner)
shirt-dev (collaborator) shirt-dev (collaborator)
coletdjnz/colethedj (collaborator) coletdjnz/colethedj (collaborator)
Ashish0804 (collaborator) Ashish0804 (collaborator)
nao20010128nao/Lesmiscore (collaborator)
bashonly (collaborator) bashonly (collaborator)
Grub4K (collaborator) Grub4K (collaborator)
h-h-h-h h-h-h-h
@ -454,193 +455,3 @@ vampirefrog
vidiot720 vidiot720
viktor-enzell viktor-enzell
zhgwn zhgwn
barthelmannk
berkanteber
OverlordQ
rexlambert22
Ti4eeT4e
AmanSal1
bbilly1
meliber
nnoboa
rdamas
RfadnjdExt
urectanc
nao20010128nao/Lesmiscore
04-pasha-04
aaruni96
aky-01
AmirAflak
ApoorvShah111
at-wat
davinkevin
demon071
denhotte
FinnRG
fireattack
Frankgoji
GD-Slime
hatsomatt
ifan-t
kshitiz305
kylegustavo
mabdelfattah
nathantouze
niemands
Rajeshwaran2001
RedDeffender
Rohxn16
sb0stn
SevenLives
simon300000
snixon
soundchaser128
szabyg
trainman261
trislee
wader
Yalab7
zhallgato
zhong-yiyu
Zprokkel
AS6939
drzraf
handlerug
jiru
madewokherd
xofe
awalgarg
midnightveil
naginatana
Riteo
1100101
aniolpages
bartbroere
CrendKing
Esokrates
HitomaruKonpaku
LoserFox
peci1
saintliao
shubhexists
SirElderling
almx
elivinsky
starius
TravisDupes
amir16yp
Fymyte
Ganesh910
hashFactory
kclauhk
Kyraminol
lstrojny
middlingphys
NickCis
nicodato
prettykool
S-Aarab
sonmezberkay
TSRBerry
114514ns
agibson-fl
alard
alien-developers
antonkesy
ArnauvGilotra
Arthurszzz
Bibhav48
Bl4Cc4t
boredzo
Caesim404
chkuendig
chtk
Danish-H
dasidiot
diman8
divStar
DmitryScaletta
feederbox826
gmes78
gonzalezjo
hui1601
infanf
jazz1611
jingtra
jkmartindale
johnvictorfs
llistochek
marcdumais
martinxyz
michal-repo
mrmedieval
nbr23
Nicals
Noor-5
NurTasin
pompos02
Pranaxcau
pwaldhauer
RaduManole
RalphORama
rrgomes
ruiminggu
rvsit
sefidel
shmohawk
Snack-X
src-tinkerer
stilor
syntaxsurge
t-nil
ufukk
vista-narvas
x11x
xpadev-net
Xpl0itU
YoshichikaAAA
zhijinwuu
alb
hruzgar
kasper93
leoheitmannruiz
luiso1979
nipotan
Offert4324
sta1us
Tomoka1
trwstin
alexhuot1
clienthax
DaPotato69
emqi
hugohaa
imanoreotwe
JakeFinley96
lostfictions
minamotorin
ocococococ
Podiumnoche
RasmusAntons
roeniss
shoxie007
Szpachlarz
The-MAGI
TuxCoder
voidful
vtexier
WyohKnott
trueauracoral
ASertacAkkaya
axpauls
chilinux
hafeoz
JSubelj
jucor
megumintyan
mgedmin
Niluge-KiWi
peisenwang
TheZ3ro
tippfehlr
varunchopra

File diff suppressed because it is too large

@ -29,7 +29,6 @@ You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [autho
[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz) [![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
* Improved plugin architecture * Improved plugin architecture
* Rewrote the networking infrastructure, implemented support for `requests`
* YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements * YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
* Added support for new websites YoutubeWebArchive, MainStreaming, PRX, nzherald, Mediaklikk, StarTV etc * Added support for new websites YoutubeWebArchive, MainStreaming, PRX, nzherald, Mediaklikk, StarTV etc
* Improved/fixed support for Patreon, panopto, gfycat, itv, pbs, SouthParkDE etc * Improved/fixed support for Patreon, panopto, gfycat, itv, pbs, SouthParkDE etc
@ -45,26 +44,28 @@ You can also find lists of all [contributors of yt-dlp](CONTRIBUTORS) and [autho
* Improved/fixed support for HiDive, HotStar, Hungama, LBRY, LinkedInLearning, Mxplayer, SonyLiv, TV2, Vimeo, VLive etc * Improved/fixed support for HiDive, HotStar, Hungama, LBRY, LinkedInLearning, Mxplayer, SonyLiv, TV2, Vimeo, VLive etc
## [bashonly](https://github.com/bashonly) ## [Lesmiscore](https://github.com/Lesmiscore)
* `--update-to`, self-updater rewrite, automated/nightly/master releases **Bitcoin**: bc1qfd02r007cutfdjwjmyy9w23rjvtls6ncve7r3s
* `--cookies-from-browser` support for Firefox containers, external downloader cookie handling overhaul **Monacoin**: mona1q3tf7dzvshrhfe3md379xtvt2n22duhglv5dskr
* Added support for new websites like Dacast, Kick, NBCStations, Triller, VideoKen, Weverse, WrestleUniverse etc
* Improved/fixed support for Anvato, Brightcove, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc
* Download live from start to end for YouTube
* Added support for new websites AbemaTV, mildom, PixivSketch, skeb, radiko, voicy, mirrativ, openrec, whowatch, damtomo, 17.live, mixch etc
* Improved/fixed support for fc2, YahooJapanNews, tver, iwara etc
## [Grub4K](https://github.com/Grub4K)
[![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K) [![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K) ## [bashonly](https://github.com/bashonly)
* `--update-to`, self-updater rewrite, automated/nightly/master releases * `--update-to`, automated release, nightly builds
* Reworked internals like `traverse_obj`, various core refactors and bug fixes * `--cookies-from-browser` support for Firefox containers
* Implemented proper progress reporting for parallel downloads * Added support for new websites Genius, Kick, NBCStations, Triller, VideoKen etc
* Improved/fixed/added Bundestag, crunchyroll, pr0gramm, Twitter, WrestleUniverse etc * Improved/fixed support for Anvato, Brightcove, Instagram, ParamountPlus, Reddit, SlidesLive, TikTok, Twitter, Vimeo etc
## [sepro](https://github.com/seproDev) ## [Grub4K](https://github.com/Grub4K)
[![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/Grub4K) [![gh-sponsor](https://img.shields.io/badge/_-Github-white.svg?logo=github&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/Grub4K)
* UX improvements: Warn when ffmpeg is missing, warn when double-clicking exe * `--update-to`, automated release, nightly builds
* Code cleanup: Remove dead extractors, mark extractors as broken, enable/apply ruff rules * Rework internals like `traverse_obj`, various core refactors and bug fixes
* Improved/fixed/added ArdMediathek, DRTV, Floatplane, MagentaMusik, Naver, Nebula, OnDemandKorea, Vbox7 etc * Helped fix crunchyroll, Twitter, wrestleuniverse, wistia, slideslive etc

@ -0,0 +1,10 @@
include AUTHORS
include Changelog.md
include LICENSE
include README.md
include completions/*/*
include supportedsites.md
include yt-dlp.1
include requirements.txt
recursive-include devscripts *
recursive-include test *

@ -2,32 +2,29 @@ all: lazy-extractors yt-dlp doc pypi-files
clean: clean-test clean-dist clean: clean-test clean-dist
clean-all: clean clean-cache clean-all: clean clean-cache
completions: completion-bash completion-fish completion-zsh completions: completion-bash completion-fish completion-zsh
doc: README.md CONTRIBUTING.md CONTRIBUTORS issuetemplates supportedsites doc: README.md CONTRIBUTING.md issuetemplates supportedsites
ot: offlinetest ot: offlinetest
tar: yt-dlp.tar.gz tar: yt-dlp.tar.gz
# Keep this list in sync with pyproject.toml includes/artifacts # Keep this list in sync with MANIFEST.in
# intended use: when building a source distribution, # intended use: when building a source distribution,
# make pypi-files && python3 -m build -sn . # make pypi-files && python setup.py sdist
pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites \ pypi-files: AUTHORS Changelog.md LICENSE README.md README.txt supportedsites \
completions yt-dlp.1 pyproject.toml setup.cfg devscripts/* test/* completions yt-dlp.1 requirements.txt setup.cfg devscripts/* test/*
.PHONY: all clean clean-all clean-test clean-dist clean-cache \ .PHONY: all clean install test tar pypi-files completions ot offlinetest codetest supportedsites
completions completion-bash completion-fish completion-zsh \
doc issuetemplates supportedsites ot offlinetest codetest test \
tar pypi-files lazy-extractors install uninstall
clean-test: clean-test:
rm -rf test/testdata/sigs/player-*.js tmp/ *.annotations.xml *.aria2 *.description *.dump *.frag \ rm -rf test/testdata/sigs/player-*.js tmp/ *.annotations.xml *.aria2 *.description *.dump *.frag \
*.frag.aria2 *.frag.urls *.info.json *.live_chat.json *.meta *.part* *.tmp *.temp *.unknown_video *.ytdl \ *.frag.aria2 *.frag.urls *.info.json *.live_chat.json *.meta *.part* *.tmp *.temp *.unknown_video *.ytdl \
*.3gp *.ape *.ass *.avi *.desktop *.f4v *.flac *.flv *.gif *.jpeg *.jpg *.lrc *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 *.mp4 \ *.3gp *.ape *.ass *.avi *.desktop *.f4v *.flac *.flv *.gif *.jpeg *.jpg *.m4a *.m4v *.mhtml *.mkv *.mov *.mp3 \
*.mpg *.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.ssa *.swf *.swp *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp *.mp4 *.mpga *.oga *.ogg *.opus *.png *.sbv *.srt *.swf *.swp *.tt *.ttml *.url *.vtt *.wav *.webloc *.webm *.webp
clean-dist: clean-dist:
rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ \ rm -rf yt-dlp.1.temp.md yt-dlp.1 README.txt MANIFEST build/ dist/ .coverage cover/ yt-dlp.tar.gz completions/ \
yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS yt_dlp/extractor/lazy_extractors.py *.spec CONTRIBUTING.md.tmp yt-dlp yt-dlp.exe yt_dlp.egg-info/ AUTHORS .mailmap
clean-cache: clean-cache:
find . \( \ find . \( \
-type d -name ".*_cache" -o -type d -name __pycache__ -o -name "*.pyc" -o -name "*.class" \ -type d -name .pytest_cache -o -type d -name __pycache__ -o -name "*.pyc" -o -name "*.class" \
\) -prune -exec rm -rf {} \; \) -prune -exec rm -rf {} \;
completion-bash: completions/bash/yt-dlp completion-bash: completions/bash/yt-dlp
@ -40,15 +37,12 @@ BINDIR ?= $(PREFIX)/bin
MANDIR ?= $(PREFIX)/man MANDIR ?= $(PREFIX)/man
SHAREDIR ?= $(PREFIX)/share SHAREDIR ?= $(PREFIX)/share
PYTHON ?= /usr/bin/env python3 PYTHON ?= /usr/bin/env python3
GNUTAR ?= tar
# set markdown input format to "markdown-smart" for pandoc version 2+ and to "markdown" for pandoc prior to version 2 # set SYSCONFDIR to /etc if PREFIX=/usr or PREFIX=/usr/local
PANDOC_VERSION_CMD = pandoc -v 2>/dev/null | head -n1 | cut -d' ' -f2 | head -c1 SYSCONFDIR = $(shell if [ $(PREFIX) = /usr -o $(PREFIX) = /usr/local ]; then echo /etc; else echo $(PREFIX)/etc; fi)
PANDOC_VERSION != $(PANDOC_VERSION_CMD)
PANDOC_VERSION ?= $(shell $(PANDOC_VERSION_CMD)) # set markdown input format to "markdown-smart" for pandoc version 2 and to "markdown" for pandoc prior to version 2
MARKDOWN_CMD = if [ "$(PANDOC_VERSION)" = "1" -o "$(PANDOC_VERSION)" = "0" ]; then echo markdown; else echo markdown-smart; fi MARKDOWN = $(shell if [ `pandoc -v | head -n1 | cut -d" " -f2 | head -c1` = "2" ]; then echo markdown-smart; else echo markdown; fi)
MARKDOWN != $(MARKDOWN_CMD)
MARKDOWN ?= $(shell $(MARKDOWN_CMD))
install: lazy-extractors yt-dlp yt-dlp.1 completions install: lazy-extractors yt-dlp yt-dlp.1 completions
mkdir -p $(DESTDIR)$(BINDIR) mkdir -p $(DESTDIR)$(BINDIR)
@ -70,38 +64,33 @@ uninstall:
rm -f $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish rm -f $(DESTDIR)$(SHAREDIR)/fish/vendor_completions.d/yt-dlp.fish
codetest: codetest:
ruff check . flake8 .
autopep8 --diff .
test: test:
$(PYTHON) -m pytest -Werror $(PYTHON) -m pytest
$(MAKE) codetest $(MAKE) codetest
offlinetest: codetest offlinetest: codetest
$(PYTHON) -m pytest -Werror -m "not download" $(PYTHON) -m pytest -k "not download"
CODE_FOLDERS_CMD = find yt_dlp -type f -name '__init__.py' | sed 's,/__init__.py,,' | grep -v '/__' | sort # XXX: This is hard to maintain
CODE_FOLDERS != $(CODE_FOLDERS_CMD) CODE_FOLDERS = yt_dlp yt_dlp/downloader yt_dlp/extractor yt_dlp/postprocessor yt_dlp/compat yt_dlp/compat/urllib yt_dlp/utils yt_dlp/dependencies
CODE_FOLDERS ?= $(shell $(CODE_FOLDERS_CMD)) yt-dlp: yt_dlp/*.py yt_dlp/*/*.py
CODE_FILES_CMD = for f in $(CODE_FOLDERS) ; do echo "$$f" | sed 's,$$,/*.py,' ; done
CODE_FILES != $(CODE_FILES_CMD)
CODE_FILES ?= $(shell $(CODE_FILES_CMD))
yt-dlp: $(CODE_FILES)
mkdir -p zip mkdir -p zip
for d in $(CODE_FOLDERS) ; do \ for d in $(CODE_FOLDERS) ; do \
mkdir -p zip/$$d ;\ mkdir -p zip/$$d ;\
cp -pPR $$d/*.py zip/$$d/ ;\ cp -pPR $$d/*.py zip/$$d/ ;\
done done
(cd zip && touch -t 200001010101 $(CODE_FILES)) touch -t 200001010101 zip/yt_dlp/*.py zip/yt_dlp/*/*.py
mv zip/yt_dlp/__main__.py zip/ mv zip/yt_dlp/__main__.py zip/
(cd zip && zip -q ../yt-dlp $(CODE_FILES) __main__.py) cd zip ; zip -q ../yt-dlp yt_dlp/*.py yt_dlp/*/*.py __main__.py
rm -rf zip rm -rf zip
echo '#!$(PYTHON)' > yt-dlp echo '#!$(PYTHON)' > yt-dlp
cat yt-dlp.zip >> yt-dlp cat yt-dlp.zip >> yt-dlp
rm yt-dlp.zip rm yt-dlp.zip
chmod a+x yt-dlp chmod a+x yt-dlp
README.md: $(CODE_FILES) devscripts/make_readme.py README.md: yt_dlp/*.py yt_dlp/*/*.py devscripts/make_readme.py
COLUMNS=80 $(PYTHON) yt_dlp/__main__.py --ignore-config --help | $(PYTHON) devscripts/make_readme.py COLUMNS=80 $(PYTHON) yt_dlp/__main__.py --ignore-config --help | $(PYTHON) devscripts/make_readme.py
CONTRIBUTING.md: README.md devscripts/make_contributing.py CONTRIBUTING.md: README.md devscripts/make_contributing.py
@ -126,48 +115,41 @@ yt-dlp.1: README.md devscripts/prepare_manpage.py
pandoc -s -f $(MARKDOWN) -t man yt-dlp.1.temp.md -o yt-dlp.1 pandoc -s -f $(MARKDOWN) -t man yt-dlp.1.temp.md -o yt-dlp.1
rm -f yt-dlp.1.temp.md rm -f yt-dlp.1.temp.md
completions/bash/yt-dlp: $(CODE_FILES) devscripts/bash-completion.in completions/bash/yt-dlp: yt_dlp/*.py yt_dlp/*/*.py devscripts/bash-completion.in
mkdir -p completions/bash mkdir -p completions/bash
$(PYTHON) devscripts/bash-completion.py $(PYTHON) devscripts/bash-completion.py
completions/zsh/_yt-dlp: $(CODE_FILES) devscripts/zsh-completion.in completions/zsh/_yt-dlp: yt_dlp/*.py yt_dlp/*/*.py devscripts/zsh-completion.in
mkdir -p completions/zsh mkdir -p completions/zsh
$(PYTHON) devscripts/zsh-completion.py $(PYTHON) devscripts/zsh-completion.py
completions/fish/yt-dlp.fish: $(CODE_FILES) devscripts/fish-completion.in completions/fish/yt-dlp.fish: yt_dlp/*.py yt_dlp/*/*.py devscripts/fish-completion.in
mkdir -p completions/fish mkdir -p completions/fish
$(PYTHON) devscripts/fish-completion.py $(PYTHON) devscripts/fish-completion.py
_EXTRACTOR_FILES_CMD = find yt_dlp/extractor -name '*.py' -and -not -name 'lazy_extractors.py' _EXTRACTOR_FILES = $(shell find yt_dlp/extractor -name '*.py' -and -not -name 'lazy_extractors.py')
_EXTRACTOR_FILES != $(_EXTRACTOR_FILES_CMD)
_EXTRACTOR_FILES ?= $(shell $(_EXTRACTOR_FILES_CMD))
yt_dlp/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES) yt_dlp/extractor/lazy_extractors.py: devscripts/make_lazy_extractors.py devscripts/lazy_load_template.py $(_EXTRACTOR_FILES)
$(PYTHON) devscripts/make_lazy_extractors.py $@ $(PYTHON) devscripts/make_lazy_extractors.py $@
yt-dlp.tar.gz: all yt-dlp.tar.gz: all
@$(GNUTAR) -czf yt-dlp.tar.gz --transform "s|^|yt-dlp/|" --owner 0 --group 0 \ @tar -czf yt-dlp.tar.gz --transform "s|^|yt-dlp/|" --owner 0 --group 0 \
--exclude '*.DS_Store' \ --exclude '*.DS_Store' \
--exclude '*.kate-swp' \ --exclude '*.kate-swp' \
--exclude '*.pyc' \ --exclude '*.pyc' \
--exclude '*.pyo' \ --exclude '*.pyo' \
--exclude '*~' \ --exclude '*~' \
--exclude '__pycache__' \ --exclude '__pycache__' \
--exclude '.*_cache' \ --exclude '.pytest_cache' \
--exclude '.git' \ --exclude '.git' \
-- \ -- \
README.md supportedsites.md Changelog.md LICENSE \ README.md supportedsites.md Changelog.md LICENSE \
CONTRIBUTING.md Collaborators.md CONTRIBUTORS AUTHORS \ CONTRIBUTING.md Collaborators.md CONTRIBUTORS AUTHORS \
Makefile yt-dlp.1 README.txt completions .gitignore \ Makefile MANIFEST.in yt-dlp.1 README.txt completions \
setup.cfg yt-dlp yt_dlp pyproject.toml devscripts test setup.py setup.cfg yt-dlp yt_dlp requirements.txt \
devscripts test
AUTHORS: Changelog.md
@if [ -d '.git' ] && command -v git > /dev/null ; then \ AUTHORS: .mailmap
echo 'Generating $@ from git commit history' ; \ git shortlog -s -n | cut -f2 | sort > AUTHORS
git shortlog -s -n HEAD | cut -f2 | sort > $@ ; \
fi .mailmap:
git shortlog -s -e -n | awk '!(out[$$NF]++) { $$1="";sub(/^[ \t]+/,""); print}' > .mailmap
CONTRIBUTORS: Changelog.md
@if [ -d '.git' ] && command -v git > /dev/null ; then \
echo 'Updating $@ from git commit history' ; \
$(PYTHON) devscripts/make_changelog.py -v -c > /dev/null ; \
fi

File diff suppressed because it is too large

@ -1,10 +0,0 @@
services:
static:
build: static
environment:
channel: ${channel}
origin: ${origin}
version: ${version}
volumes:
- ~/build:/build
- ../..:/yt-dlp

@ -1,21 +0,0 @@
FROM alpine:3.19 as base
RUN apk --update add --no-cache \
build-base \
python3 \
pipx \
;
RUN pipx install pyinstaller
# Requires above step to prepare the shared venv
RUN ~/.local/share/pipx/shared/bin/python -m pip install -U wheel
RUN apk --update add --no-cache \
scons \
patchelf \
binutils \
;
RUN pipx install staticx
WORKDIR /yt-dlp
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh

@ -1,13 +0,0 @@
#!/bin/ash
set -e
source ~/.local/share/pipx/venvs/pyinstaller/bin/activate
python -m devscripts.install_deps --include secretstorage
python -m devscripts.make_lazy_extractors
python devscripts/update-version.py -c "${channel}" -r "${origin}" "${version}"
python -m bundle.pyinstaller
deactivate
source ~/.local/share/pipx/venvs/staticx/bin/activate
staticx /yt-dlp/dist/yt-dlp_linux /build/yt-dlp_linux
deactivate

@ -1,59 +0,0 @@
#!/usr/bin/env python3
# Allow execution from anywhere
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import warnings
from py2exe import freeze
from devscripts.utils import read_version
VERSION = read_version()
def main():
warnings.warn(
'py2exe builds do not support pycryptodomex and needs VC++14 to run. '
'It is recommended to run "pyinst.py" to build using pyinstaller instead')
freeze(
console=[{
'script': './yt_dlp/__main__.py',
'dest_base': 'yt-dlp',
'icon_resources': [(1, 'devscripts/logo.ico')],
}],
version_info={
'version': VERSION,
'description': 'A feature-rich command-line audio/video downloader',
'comments': 'Official repository: <https://github.com/yt-dlp/yt-dlp>',
'product_name': 'yt-dlp',
'product_version': VERSION,
},
options={
'bundle_files': 0,
'compressed': 1,
'optimize': 2,
'dist_dir': './dist',
'excludes': [
# py2exe cannot import Crypto
'Crypto',
'Cryptodome',
# requests >=2.32.0 breaks py2exe builds due to certifi dependency
'requests',
'urllib3',
],
'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
# Modules that are only imported dynamically must be added here
'includes': ['yt_dlp.compat._legacy', 'yt_dlp.compat._deprecated',
'yt_dlp.utils._legacy', 'yt_dlp.utils._deprecated'],
},
zipfile=None,
)
if __name__ == '__main__':
main()

Binary file not shown.

Binary file not shown.

@ -0,0 +1 @@
# Empty file needed to make devscripts.utils properly importable from outside

@ -9,8 +9,8 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import yt_dlp import yt_dlp
BASH_COMPLETION_FILE = 'completions/bash/yt-dlp' BASH_COMPLETION_FILE = "completions/bash/yt-dlp"
BASH_COMPLETION_TEMPLATE = 'devscripts/bash-completion.in' BASH_COMPLETION_TEMPLATE = "devscripts/bash-completion.in"
def build_completion(opt_parser): def build_completion(opt_parser):
@ -21,9 +21,9 @@ def build_completion(opt_parser):
opts_flag.append(option.get_opt_string()) opts_flag.append(option.get_opt_string())
with open(BASH_COMPLETION_TEMPLATE) as f: with open(BASH_COMPLETION_TEMPLATE) as f:
template = f.read() template = f.read()
with open(BASH_COMPLETION_FILE, 'w') as f: with open(BASH_COMPLETION_FILE, "w") as f:
# just using the special char # just using the special char
filled_template = template.replace('{{flags}}', ' '.join(opts_flag)) filled_template = template.replace("{{flags}}", " ".join(opts_flag))
f.write(filled_template) f.write(filled_template)

@ -1,12 +1,12 @@
[ [
{ {
"action": "add", "action": "add",
"when": "29cb20bd563c02671b31dd840139e93dd37150a1", "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
"short": "[priority] **A new release type has been added!**\n * [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).\n * When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).\n * The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).\n * `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.\n * **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG`" "short": "[priority] **A new release type has been added!**\n * [`nightly`](https://github.com/yt-dlp/yt-dlp/releases/tag/nightly) builds will be made after each push, containing the latest fixes (but also possibly bugs).\n * When using `--update`/`-U`, a release binary will only update to its current channel (either `stable` or `nightly`).\n * The `--update-to` option has been added allowing the user more control over program upgrades (or downgrades).\n * `--update-to` can change the release channel (`stable`, `nightly`) and also upgrade or downgrade to specific tags.\n * **Usage**: `--update-to CHANNEL`, `--update-to TAG`, `--update-to CHANNEL@TAG`"
}, },
{ {
"action": "add", "action": "add",
"when": "5038f6d713303e0967d002216e7a88652401c22a", "when": "776d1c3f0c9b00399896dd2e40e78e9a43218109",
"short": "[priority] **YouTube throttling fixes!**" "short": "[priority] **YouTube throttling fixes!**"
}, },
{ {
@ -35,150 +35,5 @@
"when": "8417f26b8a819cd7ffcd4e000ca3e45033e670fb", "when": "8417f26b8a819cd7ffcd4e000ca3e45033e670fb",
"short": "Add option `--color` (#6904)", "short": "Add option `--color` (#6904)",
"authors": ["Grub4K"] "authors": ["Grub4K"]
},
{
"action": "change",
"when": "b4e0d75848e9447cee2cd3646ce54d4744a7ff56",
"short": "Improve `--download-sections`\n - Support negative time-ranges\n - Add `*from-url` to obey time-ranges in URL",
"authors": ["pukkandan"]
},
{
"action": "change",
"when": "1e75d97db21152acc764b30a688e516f04b8a142",
"short": "[extractor/youtube] Add `ios` to default clients used\n - IOS is affected neither by 403 nor by nsig so helps mitigate them preemptively\n - IOS also has higher bit-rate 'premium' formats though they are not labeled as such",
"authors": ["pukkandan"]
},
{
"action": "change",
"when": "f2ff0f6f1914b82d4a51681a72cc0828115dcb4a",
"short": "[extractor/motherless] Add gallery support, fix groups (#7211)",
"authors": ["rexlambert22", "Ti4eeT4e"]
},
{
"action": "change",
"when": "a4486bfc1dc7057efca9dd3fe70d7fa25c56f700",
"short": "[misc] Revert \"Add automatic duplicate issue detection\"",
"authors": ["pukkandan"]
},
{
"action": "add",
"when": "1ceb657bdd254ad961489e5060f2ccc7d556b729",
"short": "[priority] Security: [[CVE-2023-35934](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-35934)] Fix [Cookie leak](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-v8mc-9377-rwjj)\n - `--add-header Cookie:` is deprecated and auto-scoped to input URL domains\n - Cookies are scoped when passed to external downloaders\n - Add `cookies` field to info.json and deprecate `http_headers.Cookie`"
},
{
"action": "change",
"when": "b03fa7834579a01cc5fba48c0e73488a16683d48",
"short": "[ie/twitter] Revert 92315c03774cfabb3a921884326beb4b981f786b",
"authors": ["pukkandan"]
},
{
"action": "change",
"when": "fcd6a76adc49d5cd8783985c7ce35384b72e545f",
"short": "[test] Add tests for socks proxies (#7908)",
"authors": ["coletdjnz"]
},
{
"action": "change",
"when": "4bf912282a34b58b6b35d8f7e6be535770c89c76",
"short": "[rh:urllib] Remove dot segments during URL normalization (#7662)",
"authors": ["coletdjnz"]
},
{
"action": "change",
"when": "59e92b1f1833440bb2190f847eb735cf0f90bc85",
"short": "[rh:urllib] Simplify gzip decoding (#7611)",
"authors": ["Grub4K"]
},
{
"action": "add",
"when": "c1d71d0d9f41db5e4306c86af232f5f6220a130b",
"short": "[priority] **The minimum *recommended* Python version has been raised to 3.8**\nSince Python 3.7 has reached end-of-life, support for it will be dropped soon. [Read more](https://github.com/yt-dlp/yt-dlp/issues/7803)"
},
{
"action": "add",
"when": "61bdf15fc7400601c3da1aa7a43917310a5bf391",
"short": "[priority] Security: [[CVE-2023-40581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40581)] [Prevent RCE when using `--exec` with `%q` on Windows](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-42h4-v29r-42qg)\n - The shell escape function is now using `\"\"` instead of `\\\"`.\n - `utils.Popen` has been patched to properly quote commands."
},
{
"action": "change",
"when": "8a8b54523addf46dfd50ef599761a81bc22362e6",
"short": "[rh:requests] Add handler for `requests` HTTP library (#3668)\n\n\tAdds support for HTTPS proxies and persistent connections (keep-alive)",
"authors": ["bashonly", "coletdjnz", "Grub4K"]
},
{
"action": "add",
"when": "1d03633c5a1621b9f3a756f0a4f9dc61fab3aeaa",
"short": "[priority] **The release channels have been adjusted!**\n\t* [`master`](https://github.com/yt-dlp/yt-dlp-master-builds) builds are made after each push, containing the latest fixes (but also possibly bugs). This was previously the `nightly` channel.\n\t* [`nightly`](https://github.com/yt-dlp/yt-dlp-nightly-builds) builds are now made once a day, if there were any changes."
},
{
"action": "add",
"when": "f04b5bedad7b281bee9814686bba1762bae092eb",
"short": "[priority] Security: [[CVE-2023-46121](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-46121)] Patch [Generic Extractor MITM Vulnerability via Arbitrary Proxy Injection](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-3ch3-jhc6-5r8x)\n\t- Disallow smuggling of arbitrary `http_headers`; extractors now only use specific headers"
},
{
"action": "change",
"when": "15f22b4880b6b3f71f350c64d70976ae65b9f1ca",
"short": "[webvtt] Allow spaces before newlines for CueBlock (#7681)",
"authors": ["TSRBerry"]
},
{
"action": "change",
"when": "4ce57d3b873c2887814cbec03d029533e82f7db5",
"short": "[ie] Support multi-period MPD streams (#6654)",
"authors": ["alard", "pukkandan"]
},
{
"action": "change",
"when": "aa7e9ae4f48276bd5d0173966c77db9484f65a0a",
"short": "[ie/xvideos] Support new URL format (#9502)",
"authors": ["sta1us"]
},
{
"action": "remove",
"when": "22e4dfacb61f62dfbb3eb41b31c7b69ba1059b80"
},
{
"action": "change",
"when": "e3a3ed8a981d9395c4859b6ef56cd02bc3148db2",
"short": "[cleanup:ie] No `from` stdlib imports in extractors",
"authors": ["pukkandan"]
},
{
"action": "add",
"when": "9590cc6b4768e190183d7d071a6c78170889116a",
"short": "[priority] Security: [[CVE-2024-22423](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-22423)] [Prevent RCE when using `--exec` with `%q` on Windows](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-hjq6-52gw-2g7p)\n - The shell escape function now properly escapes `%`, `\\` and `\\n`.\n - `utils.Popen` has been patched accordingly."
},
{
"action": "change",
"when": "41ba4a808b597a3afed78c89675a30deb6844450",
"short": "[ie/tiktok] Extract via mobile API only if extractor-arg is passed (#9938)",
"authors": ["bashonly"]
},
{
"action": "remove",
"when": "6e36d17f404556f0e3a43f441c477a71a91877d9"
},
{
"action": "change",
"when": "beaf832c7a9d57833f365ce18f6115b88071b296",
"short": "[ie/soundcloud] Add `formats` extractor-arg (#10004)",
"authors": ["bashonly", "Grub4K"]
},
{
"action": "change",
"when": "5c019f6328ad40d66561eac3c4de0b3cd070d0f6",
"short": "[cleanup] Misc (#9765)",
"authors": ["bashonly", "Grub4K", "seproDev"]
},
{
"action": "change",
"when": "e6a22834df1776ec4e486526f6df2bf53cb7e06f",
"short": "[ie/orf:on] Add `prefer_segments_playlist` extractor-arg (#10314)",
"authors": ["seproDev"]
},
{
"action": "add",
"when": "6aaf96a3d6e7d0d426e97e11a2fcf52fda00e733",
"short": "[priority] Security: [[CVE-2024-10123](https://nvd.nist.gov/vuln/detail/CVE-2024-10123)] [Properly sanitize file-extension to prevent file system modification and RCE](https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-79w7-vh3h-8g4j)\n - Unsafe extensions are now blocked from being downloaded"
} }
] ]

@ -1,5 +1,3 @@
#!/usr/bin/env python3
# Allow direct execution # Allow direct execution
import os import os
import sys import sys

@ -1,81 +0,0 @@
#!/usr/bin/env python3
# Allow execution from anywhere
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import argparse
import re
import subprocess
from pathlib import Path
from devscripts.tomlparse import parse_toml
from devscripts.utils import read_file
def parse_args():
parser = argparse.ArgumentParser(description='Install dependencies for yt-dlp')
parser.add_argument(
'input', nargs='?', metavar='TOMLFILE', default=Path(__file__).parent.parent / 'pyproject.toml',
help='input file (default: %(default)s)')
parser.add_argument(
'-e', '--exclude', metavar='DEPENDENCY', action='append',
help='exclude a dependency')
parser.add_argument(
'-i', '--include', metavar='GROUP', action='append',
help='include an optional dependency group')
parser.add_argument(
'-o', '--only-optional', action='store_true',
help='only install optional dependencies')
parser.add_argument(
'-p', '--print', action='store_true',
help='only print requirements to stdout')
parser.add_argument(
'-u', '--user', action='store_true',
help='install with pip as --user')
return parser.parse_args()
def main():
args = parse_args()
project_table = parse_toml(read_file(args.input))['project']
recursive_pattern = re.compile(rf'{project_table["name"]}\[(?P<group_name>[\w-]+)\]')
optional_groups = project_table['optional-dependencies']
excludes = args.exclude or []
def yield_deps(group):
for dep in group:
if mobj := recursive_pattern.fullmatch(dep):
yield from optional_groups.get(mobj.group('group_name'), [])
else:
yield dep
targets = []
if not args.only_optional: # `-o` should exclude 'dependencies' and the 'default' group
targets.extend(project_table['dependencies'])
if 'default' not in excludes: # `--exclude default` should exclude entire 'default' group
targets.extend(yield_deps(optional_groups['default']))
for include in filter(None, map(optional_groups.get, args.include or [])):
targets.extend(yield_deps(include))
targets = [t for t in targets if re.match(r'[\w-]+', t).group(0).lower() not in excludes]
if args.print:
for target in targets:
print(target)
return
pip_args = [sys.executable, '-m', 'pip', 'install', '-U']
if args.user:
pip_args.append('--user')
pip_args.extend(targets)
return subprocess.call(pip_args)
if __name__ == '__main__':
sys.exit(main())

@ -6,7 +6,6 @@ from ..utils import (
age_restricted, age_restricted,
bug_reports_message, bug_reports_message,
classproperty, classproperty,
variadic,
write_string, write_string,
) )

@ -31,55 +31,57 @@ class CommitGroup(enum.Enum):
EXTRACTOR = 'Extractor' EXTRACTOR = 'Extractor'
DOWNLOADER = 'Downloader' DOWNLOADER = 'Downloader'
POSTPROCESSOR = 'Postprocessor' POSTPROCESSOR = 'Postprocessor'
NETWORKING = 'Networking'
MISC = 'Misc.' MISC = 'Misc.'
@classmethod
@property
def ignorable_prefixes(cls):
return ('core', 'downloader', 'extractor', 'misc', 'postprocessor', 'upstream')
@classmethod @classmethod
@lru_cache @lru_cache
def subgroup_lookup(cls): def commit_lookup(cls):
return { return {
name: group name: group
for group, names in { for group, names in {
cls.PRIORITY: {'priority'},
cls.CORE: {
'aes',
'cache',
'compat_utils',
'compat',
'cookies',
'core',
'dependencies',
'jsinterp',
'outtmpl',
'plugins',
'update',
'upstream',
'utils',
},
cls.MISC: { cls.MISC: {
'build', 'build',
'ci',
'cleanup', 'cleanup',
'devscripts', 'devscripts',
'docs', 'docs',
'misc',
'test', 'test',
}, },
cls.NETWORKING: { cls.EXTRACTOR: {'extractor'},
'rh', cls.DOWNLOADER: {'downloader'},
}, cls.POSTPROCESSOR: {'postprocessor'},
}.items() }.items()
for name in names for name in names
} }
@classmethod @classmethod
@lru_cache def get(cls, value):
def group_lookup(cls): result = cls.commit_lookup().get(value)
result = { if result:
'fd': cls.DOWNLOADER, logger.debug(f'Mapped {value!r} => {result.name}')
'ie': cls.EXTRACTOR,
'pp': cls.POSTPROCESSOR,
'upstream': cls.CORE,
}
result.update({item.name.lower(): item for item in iter(cls)})
return result return result
@classmethod
def get(cls, value: str) -> tuple[CommitGroup | None, str | None]:
group, _, subgroup = (group.strip().lower() for group in value.partition('/'))
result = cls.group_lookup().get(group)
if not result:
if subgroup:
return None, value
subgroup = group
result = cls.subgroup_lookup().get(subgroup)
return result, subgroup or None
@dataclass @dataclass
class Commit: class Commit:
@ -200,12 +202,8 @@ class Changelog:
return sorted_items return sorted_items
def format_single_change(self, info: CommitInfo): def format_single_change(self, info):
message, sep, rest = info.message.partition('\n') message = self._format_message_link(info.message, info.commit.hash)
if '[' not in message:
# If the message doesn't already contain markdown links, try to add a link to the commit
message = self._format_message_link(message, info.commit.hash)
if info.issues: if info.issues:
message = f'{message} ({self._format_issues(info.issues)})' message = f'{message} ({self._format_issues(info.issues)})'
@ -221,12 +219,12 @@ class Changelog:
message = f'{message} (With fixes in {fix_message})' message = f'{message} (With fixes in {fix_message})'
return message if not sep else f'{message}{sep}{rest}' return message
def _format_message_link(self, message, commit_hash): def _format_message_link(self, message, hash):
assert message or commit_hash, 'Improperly defined commit message or override' assert message or hash, 'Improperly defined commit message or override'
message = message if message else commit_hash[:HASH_LENGTH] message = message if message else hash[:HASH_LENGTH]
return f'[{message}]({self.repo_url}/commit/{commit_hash})' if commit_hash else message return f'[{message}]({self.repo_url}/commit/{hash})' if hash else message
def _format_issues(self, issues): def _format_issues(self, issues):
return ', '.join(f'[#{issue}]({self.repo_url}/issues/{issue})' for issue in issues) return ', '.join(f'[#{issue}]({self.repo_url}/issues/{issue})' for issue in issues)
@ -247,13 +245,12 @@ class CommitRange:
AUTHOR_INDICATOR_RE = re.compile(r'Authored by:? ', re.IGNORECASE) AUTHOR_INDICATOR_RE = re.compile(r'Authored by:? ', re.IGNORECASE)
MESSAGE_RE = re.compile(r''' MESSAGE_RE = re.compile(r'''
(?:\[(?P<prefix>[^\]]+)\]\ )? (?:\[(?P<prefix>[^\]]+)\]\ )?
(?:(?P<sub_details>`?[\w.-]+`?): )? (?:(?P<sub_details>`?[^:`]+`?): )?
(?P<message>.+?) (?P<message>.+?)
(?:\ \((?P<issues>\#\d+(?:,\ \#\d+)*)\))? (?:\ \((?P<issues>\#\d+(?:,\ \#\d+)*)\))?
''', re.VERBOSE | re.DOTALL) ''', re.VERBOSE | re.DOTALL)
EXTRACTOR_INDICATOR_RE = re.compile(r'(?:Fix|Add)\s+Extractors?', re.IGNORECASE) EXTRACTOR_INDICATOR_RE = re.compile(r'(?:Fix|Add)\s+Extractors?', re.IGNORECASE)
REVERT_RE = re.compile(r'(?:\[[^\]]+\]\s+)?(?i:Revert)\s+([\da-f]{40})') FIXES_RE = re.compile(r'(?i:Fix(?:es)?(?:\s+bugs?)?(?:\s+in|\s+for)?|Revert)\s+([\da-f]{40})')
FIXES_RE = re.compile(r'(?i:Fix(?:es)?(?:\s+bugs?)?(?:\s+in|\s+for)?|Revert|Improve)\s+([\da-f]{40})')
UPSTREAM_MERGE_RE = re.compile(r'Update to ytdl-commit-([\da-f]+)') UPSTREAM_MERGE_RE = re.compile(r'Update to ytdl-commit-([\da-f]+)')
def __init__(self, start, end, default_author=None): def __init__(self, start, end, default_author=None):
@ -280,7 +277,7 @@ class CommitRange:
self.COMMAND, 'log', f'--format=%H%n%s%n%b%n{self.COMMIT_SEPARATOR}', self.COMMAND, 'log', f'--format=%H%n%s%n%b%n{self.COMMIT_SEPARATOR}',
f'{self._start}..{self._end}' if self._start else self._end).stdout f'{self._start}..{self._end}' if self._start else self._end).stdout
commits, reverts = {}, {} commits = {}
fixes = defaultdict(list) fixes = defaultdict(list)
lines = iter(result.splitlines(False)) lines = iter(result.splitlines(False))
for i, commit_hash in enumerate(lines): for i, commit_hash in enumerate(lines):
@ -301,11 +298,6 @@ class CommitRange:
logger.debug(f'Reached Release commit, breaking: {commit}') logger.debug(f'Reached Release commit, breaking: {commit}')
break break
revert_match = self.REVERT_RE.fullmatch(commit.short)
if revert_match:
reverts[revert_match.group(1)] = commit
continue
fix_match = self.FIXES_RE.search(commit.short) fix_match = self.FIXES_RE.search(commit.short)
if fix_match: if fix_match:
commitish = fix_match.group(1) commitish = fix_match.group(1)
@ -313,13 +305,6 @@ class CommitRange:
commits[commit.hash] = commit commits[commit.hash] = commit
for commitish, revert_commit in reverts.items():
reverted = commits.pop(commitish, None)
if reverted:
logger.debug(f'{commitish} fully reverted {reverted}')
else:
commits[revert_commit.hash] = revert_commit
for commitish, fix_commits in fixes.items(): for commitish, fix_commits in fixes.items():
if commitish in commits: if commitish in commits:
hashes = ', '.join(commit.hash[:HASH_LENGTH] for commit in fix_commits) hashes = ', '.join(commit.hash[:HASH_LENGTH] for commit in fix_commits)
@ -335,7 +320,7 @@ class CommitRange:
for override in overrides: for override in overrides:
when = override.get('when') when = override.get('when')
if when and when not in self and when != self._start: if when and when not in self and when != self._start:
logger.debug(f'Ignored {when!r} override') logger.debug(f'Ignored {when!r}, not in commits {self._start!r}')
continue continue
override_hash = override.get('hash') or when override_hash = override.get('hash') or when
@ -356,14 +341,14 @@ class CommitRange:
logger.info(f'CHANGE {self._commits[commit.hash]} -> {commit}') logger.info(f'CHANGE {self._commits[commit.hash]} -> {commit}')
self._commits[commit.hash] = commit self._commits[commit.hash] = commit
self._commits = dict(reversed(self._commits.items())) self._commits = {key: value for key, value in reversed(self._commits.items())}
def groups(self): def groups(self):
group_dict = defaultdict(list) group_dict = defaultdict(list)
for commit in self: for commit in self:
upstream_re = self.UPSTREAM_MERGE_RE.search(commit.short) upstream_re = self.UPSTREAM_MERGE_RE.search(commit.short)
if upstream_re: if upstream_re:
commit.short = f'[upstream] Merged with youtube-dl {upstream_re.group(1)}' commit.short = f'[core/upstream] Merged with youtube-dl {upstream_re.group(1)}'
match = self.MESSAGE_RE.fullmatch(commit.short) match = self.MESSAGE_RE.fullmatch(commit.short)
if not match: if not match:
@ -390,9 +375,9 @@ class CommitRange:
if not group: if not group:
if self.EXTRACTOR_INDICATOR_RE.search(commit.short): if self.EXTRACTOR_INDICATOR_RE.search(commit.short):
group = CommitGroup.EXTRACTOR group = CommitGroup.EXTRACTOR
logger.error(f'Assuming [ie] group for {commit.short!r}')
else: else:
group = CommitGroup.CORE group = CommitGroup.POSTPROCESSOR
logger.warning(f'Failed to map {commit.short!r}, selected {group.name.lower()}')
commit_info = CommitInfo( commit_info = CommitInfo(
details, sub_details, message.strip(), details, sub_details, message.strip(),
@ -408,20 +393,25 @@ class CommitRange:
if not prefix: if not prefix:
return CommitGroup.CORE, None, () return CommitGroup.CORE, None, ()
prefix, *sub_details = prefix.split(':') prefix, _, details = prefix.partition('/')
prefix = prefix.strip()
details = details.strip()
group, details = CommitGroup.get(prefix) group = CommitGroup.get(prefix.lower())
if group is CommitGroup.PRIORITY and details: if group is CommitGroup.PRIORITY:
details = details.partition('/')[2].strip() prefix, _, details = details.partition('/')
if details and '/' in details: if not details and prefix and prefix not in CommitGroup.ignorable_prefixes:
logger.error(f'Prefix is overnested, using first part: {prefix}') logger.debug(f'Replaced details with {prefix!r}')
details = details.partition('/')[0].strip() details = prefix or None
if details == 'common': if details == 'common':
details = None details = None
elif group is CommitGroup.NETWORKING and details == 'rh':
details = 'Request Handler' if details:
details, *sub_details = details.split(':')
else:
sub_details = []
return group, details, sub_details return group, details, sub_details
@ -445,32 +435,7 @@ def get_new_contributors(contributors_path, commits):
return sorted(new_contributors, key=str.casefold) return sorted(new_contributors, key=str.casefold)
def create_changelog(args): if __name__ == '__main__':
logging.basicConfig(
datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}',
level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr)
commits = CommitRange(None, args.commitish, args.default_author)
if not args.no_override:
if args.override_path.exists():
overrides = json.loads(read_file(args.override_path))
commits.apply_overrides(overrides)
else:
logger.warning(f'File {args.override_path.as_posix()} does not exist')
logger.info(f'Loaded {len(commits)} commits')
new_contributors = get_new_contributors(args.contributors_path, commits)
if new_contributors:
if args.contributors:
write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
logger.info(f'New contributors: {", ".join(new_contributors)}')
return Changelog(commits.groups(), args.repo, args.collapsible)
def create_parser():
import argparse import argparse
parser = argparse.ArgumentParser( parser = argparse.ArgumentParser(
@ -502,9 +467,27 @@ def create_parser():
parser.add_argument( parser.add_argument(
'--collapsible', action='store_true', '--collapsible', action='store_true',
help='make changelog collapsible (default: %(default)s)') help='make changelog collapsible (default: %(default)s)')
args = parser.parse_args()
return parser logging.basicConfig(
datefmt='%Y-%m-%d %H-%M-%S', format='{asctime} | {levelname:<8} | {message}',
level=logging.WARNING - 10 * args.verbosity, style='{', stream=sys.stderr)
commits = CommitRange(None, args.commitish, args.default_author)
if __name__ == '__main__': if not args.no_override:
print(create_changelog(create_parser().parse_args())) if args.override_path.exists():
overrides = json.loads(read_file(args.override_path))
commits.apply_overrides(overrides)
else:
logger.warning(f'File {args.override_path.as_posix()} does not exist')
logger.info(f'Loaded {len(commits)} commits')
new_contributors = get_new_contributors(args.contributors_path, commits)
if new_contributors:
if args.contributors:
write_file(args.contributors_path, '\n'.join(new_contributors) + '\n', mode='a')
logger.info(f'New contributors: {", ".join(new_contributors)}')
print(Changelog(commits.groups(), args.repo, args.collapsible))

@ -9,7 +9,12 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import re import re
from devscripts.utils import get_filename_args, read_file, write_file from devscripts.utils import (
get_filename_args,
read_file,
read_version,
write_file,
)
VERBOSE_TMPL = ''' VERBOSE_TMPL = '''
- type: checkboxes - type: checkboxes
@ -30,18 +35,19 @@ VERBOSE_TMPL = '''
description: | description: |
It should start like this: It should start like this:
placeholder: | placeholder: |
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=BaW_jenozKc'] [debug] Command-line config: ['-vU', 'test:youtube']
[debug] Portable config "yt-dlp.conf": ['-i']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8 [debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@... from yt-dlp/yt-dlp [b634ba742] (win_exe) [debug] yt-dlp version %(version)s [9d339c4] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0 [debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1 [debug] exe versions: ffmpeg N-106550-g072101bd52-20220410 (fdk,setts), ffprobe N-106624-g391ce570c8-20220415, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3 [debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {} [debug] Proxy map: {}
[debug] Request Handlers: urllib, requests [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Loaded 1893 extractors Latest version: %(version)s, Current version: %(version)s
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest yt-dlp is up to date (%(version)s)
yt-dlp is up to date (nightly@... from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=BaW_jenozKc
<more lines> <more lines>
render: shell render: shell
validations: validations:
@ -60,7 +66,7 @@ NO_SKIP = '''
def main(): def main():
fields = {'no_skip': NO_SKIP} fields = {'version': read_version(), 'no_skip': NO_SKIP}
fields['verbose'] = VERBOSE_TMPL % fields fields['verbose'] = VERBOSE_TMPL % fields
fields['verbose_optional'] = re.sub(r'(\n\s+validations:)?\n\s+required: true', '', fields['verbose']) fields['verbose_optional'] = re.sub(r'(\n\s+validations:)?\n\s+required: true', '', fields['verbose'])

@ -51,7 +51,7 @@ PATCHES = (
), ),
( # Headings ( # Headings
r'(?m)^ (\w.+\n)( (?=\w))?', r'(?m)^ (\w.+\n)( (?=\w))?',
r'## \1', r'## \1'
), ),
( # Fixup `--date` formatting ( # Fixup `--date` formatting
rf'(?m)( --date DATE.+({delim}[^\[]+)*)\[.+({delim}.+)*$', rf'(?m)( --date DATE.+({delim}[^\[]+)*)\[.+({delim}.+)*$',
@ -61,26 +61,26 @@ PATCHES = (
), ),
( # Do not split URLs ( # Do not split URLs
rf'({delim[:-1]})? (?P<label>\[\S+\] )?(?P<url>https?({delim})?:({delim})?/({delim})?/(({delim})?\S+)+)\s', rf'({delim[:-1]})? (?P<label>\[\S+\] )?(?P<url>https?({delim})?:({delim})?/({delim})?/(({delim})?\S+)+)\s',
lambda mobj: ''.join((delim, mobj.group('label') or '', re.sub(r'\s+', '', mobj.group('url')), '\n')), lambda mobj: ''.join((delim, mobj.group('label') or '', re.sub(r'\s+', '', mobj.group('url')), '\n'))
), ),
( # Do not split "words" ( # Do not split "words"
rf'(?m)({delim}\S+)+$', rf'(?m)({delim}\S+)+$',
lambda mobj: ''.join((delim, mobj.group(0).replace(delim, ''))), lambda mobj: ''.join((delim, mobj.group(0).replace(delim, '')))
), ),
( # Allow overshooting last line ( # Allow overshooting last line
rf'(?m)^(?P<prev>.+)${delim}(?P<current>.+)$(?!{delim})', rf'(?m)^(?P<prev>.+)${delim}(?P<current>.+)$(?!{delim})',
lambda mobj: (mobj.group().replace(delim, ' ') lambda mobj: (mobj.group().replace(delim, ' ')
if len(mobj.group()) - len(delim) + 1 <= max_width + ALLOWED_OVERSHOOT if len(mobj.group()) - len(delim) + 1 <= max_width + ALLOWED_OVERSHOOT
else mobj.group()), else mobj.group())
), ),
( # Avoid newline when a space is available b/w switch and description ( # Avoid newline when a space is available b/w switch and description
DISABLE_PATCH, # This creates issues with prepare_manpage DISABLE_PATCH, # This creates issues with prepare_manpage
r'(?m)^(\s{4}-.{%d})(%s)' % (switch_col_width - 6, delim), r'(?m)^(\s{4}-.{%d})(%s)' % (switch_col_width - 6, delim),
r'\1 ', r'\1 '
), ),
( # Replace brackets with a Markdown link ( # Replace brackets with a Markdown link
r'SponsorBlock API \((http.+)\)', r'SponsorBlock API \((http.+)\)',
r'[SponsorBlock API](\1)', r'[SponsorBlock API](\1)'
), ),
) )

@ -24,7 +24,7 @@ PREFIX = r'''%yt-dlp(1)
# NAME # NAME
yt\-dlp \- A feature\-rich command\-line audio/video downloader yt\-dlp \- A youtube-dl fork with additional features and patches
# SYNOPSIS # SYNOPSIS
@ -43,27 +43,6 @@ def filter_excluded_sections(readme):
'', readme) '', readme)
def _convert_code_blocks(readme):
current_code_block = None
for line in readme.splitlines(True):
if current_code_block:
if line == current_code_block:
current_code_block = None
yield '\n'
else:
yield f' {line}'
elif line.startswith('```'):
current_code_block = line.count('`') * '`' + '\n'
yield '\n'
else:
yield line
def convert_code_blocks(readme):
return ''.join(_convert_code_blocks(readme))
def move_sections(readme): def move_sections(readme):
MOVE_TAG_TEMPLATE = '<!-- MANPAGE: MOVE "%s" SECTION HERE -->' MOVE_TAG_TEMPLATE = '<!-- MANPAGE: MOVE "%s" SECTION HERE -->'
sections = re.findall(r'(?m)^%s$' % ( sections = re.findall(r'(?m)^%s$' % (
@ -86,10 +65,8 @@ def move_sections(readme):
def filter_options(readme): def filter_options(readme):
section = re.search(r'(?sm)^# USAGE AND OPTIONS\n.+?(?=^# )', readme).group(0) section = re.search(r'(?sm)^# USAGE AND OPTIONS\n.+?(?=^# )', readme).group(0)
section_new = section.replace('*', R'\*')
options = '# OPTIONS\n' options = '# OPTIONS\n'
for line in section_new.split('\n')[1:]: for line in section.split('\n')[1:]:
mobj = re.fullmatch(r'''(?x) mobj = re.fullmatch(r'''(?x)
\s{4}(?P<opt>-(?:,\s|[^\s])+) \s{4}(?P<opt>-(?:,\s|[^\s])+)
(?:\s(?P<meta>(?:[^\s]|\s(?!\s))+))? (?:\s(?P<meta>(?:[^\s]|\s(?!\s))+))?
@ -109,7 +86,7 @@ def filter_options(readme):
return readme.replace(section, options, 1) return readme.replace(section, options, 1)
TRANSFORM = compose_functions(filter_excluded_sections, convert_code_blocks, move_sections, filter_options) TRANSFORM = compose_functions(filter_excluded_sections, move_sections, filter_options)
def main(): def main():

@ -0,0 +1,17 @@
@setlocal
@echo off
cd /d %~dp0..
if ["%~1"]==[""] (
set "test_set="test""
) else if ["%~1"]==["core"] (
set "test_set="-m not download""
) else if ["%~1"]==["download"] (
set "test_set="-m "download""
) else (
echo.Invalid test type "%~1". Use "core" ^| "download"
exit /b 1
)
set PYTHONWARNINGS=error
pytest %test_set%

@ -1,75 +0,0 @@
#!/usr/bin/env python3
import argparse
import functools
import os
import re
import shlex
import subprocess
import sys
from pathlib import Path
fix_test_name = functools.partial(re.compile(r'IE(_all|_\d+)?$').sub, r'\1')
def parse_args():
parser = argparse.ArgumentParser(description='Run selected yt-dlp tests')
parser.add_argument(
'test', help='a extractor tests, or one of "core" or "download"', nargs='*')
parser.add_argument(
'-k', help='run a test matching EXPRESSION. Same as "pytest -k"', metavar='EXPRESSION')
parser.add_argument(
'--pytest-args', help='arguments to passthrough to pytest')
return parser.parse_args()
def run_tests(*tests, pattern=None, ci=False):
run_core = 'core' in tests or (not pattern and not tests)
run_download = 'download' in tests
tests = list(map(fix_test_name, tests))
pytest_args = args.pytest_args or os.getenv('HATCH_TEST_ARGS', '')
arguments = ['pytest', '-Werror', '--tb=short', *shlex.split(pytest_args)]
if ci:
arguments.append('--color=yes')
if pattern:
arguments.extend(['-k', pattern])
if run_core:
arguments.extend(['-m', 'not download'])
elif run_download:
arguments.extend(['-m', 'download'])
else:
arguments.extend(
f'test/test_download.py::TestDownload::test_{test}' for test in tests)
print(f'Running {arguments}', flush=True)
try:
return subprocess.call(arguments)
except FileNotFoundError:
pass
arguments = [sys.executable, '-Werror', '-m', 'unittest']
if pattern:
arguments.extend(['-k', pattern])
if run_core:
print('"pytest" needs to be installed to run core tests', file=sys.stderr, flush=True)
return 1
elif run_download:
arguments.append('test.test_download')
else:
arguments.extend(
f'test.test_download.TestDownload.test_{test}' for test in tests)
print(f'Running {arguments}', flush=True)
return subprocess.call(arguments)
if __name__ == '__main__':
try:
args = parse_args()
os.chdir(Path(__file__).parent.parent)
sys.exit(run_tests(*args.test, pattern=args.k, ci=bool(os.getenv('CI'))))
except KeyboardInterrupt:
pass

@ -0,0 +1,14 @@
#!/usr/bin/env sh
if [ -z "$1" ]; then
test_set='test'
elif [ "$1" = 'core' ]; then
test_set="-m not download"
elif [ "$1" = 'download' ]; then
test_set="-m download"
else
echo 'Invalid test type "'"$1"'". Use "core" | "download"'
exit 1
fi
python3 -bb -Werror -m pytest "$test_set"

@ -30,7 +30,7 @@ def property_setter(name, value):
opts = parse_options() opts = parse_options()
transform = compose_functions( transform = compose_functions(
property_setter('VARIANT', opts.variant), property_setter('VARIANT', opts.variant),
property_setter('UPDATE_HINT', opts.update_message), property_setter('UPDATE_HINT', opts.update_message)
) )
write_file(VERSION_FILE, transform(read_file(VERSION_FILE))) write_file(VERSION_FILE, transform(read_file(VERSION_FILE)))

@ -1,189 +0,0 @@
#!/usr/bin/env python3
"""
Simple parser for spec compliant toml files
A simple toml parser for files that comply with the spec.
Should only be used to parse `pyproject.toml` for `install_deps.py`.
IMPORTANT: INVALID FILES OR MULTILINE STRINGS ARE NOT SUPPORTED!
"""
from __future__ import annotations
import datetime as dt
import json
import re
WS = r'(?:[\ \t]*)'
STRING_RE = re.compile(r'"(?:\\.|[^\\"\n])*"|\'[^\'\n]*\'')
SINGLE_KEY_RE = re.compile(rf'{STRING_RE.pattern}|[A-Za-z0-9_-]+')
KEY_RE = re.compile(rf'{WS}(?:{SINGLE_KEY_RE.pattern}){WS}(?:\.{WS}(?:{SINGLE_KEY_RE.pattern}){WS})*')
EQUALS_RE = re.compile(rf'={WS}')
WS_RE = re.compile(WS)
_SUBTABLE = rf'(?P<subtable>^\[(?P<is_list>\[)?(?P<path>{KEY_RE.pattern})\]\]?)'
EXPRESSION_RE = re.compile(rf'^(?:{_SUBTABLE}|{KEY_RE.pattern}=)', re.MULTILINE)
LIST_WS_RE = re.compile(rf'{WS}((#[^\n]*)?\n{WS})*')
LEFTOVER_VALUE_RE = re.compile(r'[^,}\]\t\n#]+')
def parse_key(value: str):
for match in SINGLE_KEY_RE.finditer(value):
if match[0][0] == '"':
yield json.loads(match[0])
elif match[0][0] == '\'':
yield match[0][1:-1]
else:
yield match[0]
def get_target(root: dict, paths: list[str], is_list=False):
target = root
for index, key in enumerate(paths, 1):
use_list = is_list and index == len(paths)
result = target.get(key)
if result is None:
result = [] if use_list else {}
target[key] = result
if isinstance(result, dict):
target = result
elif use_list:
target = {}
result.append(target)
else:
target = result[-1]
assert isinstance(target, dict)
return target
def parse_enclosed(data: str, index: int, end: str, ws_re: re.Pattern):
index += 1
if match := ws_re.match(data, index):
index = match.end()
while data[index] != end:
index = yield True, index
if match := ws_re.match(data, index):
index = match.end()
if data[index] == ',':
index += 1
if match := ws_re.match(data, index):
index = match.end()
assert data[index] == end
yield False, index + 1
def parse_value(data: str, index: int):
if data[index] == '[':
result = []
indices = parse_enclosed(data, index, ']', LIST_WS_RE)
valid, index = next(indices)
while valid:
index, value = parse_value(data, index)
result.append(value)
valid, index = indices.send(index)
return index, result
if data[index] == '{':
result = {}
indices = parse_enclosed(data, index, '}', WS_RE)
valid, index = next(indices)
while valid:
valid, index = indices.send(parse_kv_pair(data, index, result))
return index, result
if match := STRING_RE.match(data, index):
return match.end(), json.loads(match[0]) if match[0][0] == '"' else match[0][1:-1]
match = LEFTOVER_VALUE_RE.match(data, index)
assert match
value = match[0].strip()
for func in [
int,
float,
dt.time.fromisoformat,
dt.date.fromisoformat,
dt.datetime.fromisoformat,
{'true': True, 'false': False}.get,
]:
try:
value = func(value)
break
except Exception:
pass
return match.end(), value
def parse_kv_pair(data: str, index: int, target: dict):
match = KEY_RE.match(data, index)
if not match:
return None
*keys, key = parse_key(match[0])
match = EQUALS_RE.match(data, match.end())
assert match
index = match.end()
index, value = parse_value(data, index)
get_target(target, keys)[key] = value
return index
def parse_toml(data: str):
root = {}
target = root
index = 0
while True:
match = EXPRESSION_RE.search(data, index)
if not match:
break
if match.group('subtable'):
index = match.end()
path, is_list = match.group('path', 'is_list')
target = get_target(root, list(parse_key(path)), bool(is_list))
continue
index = parse_kv_pair(data, match.start(), target)
assert index is not None
return root
def main():
import argparse
from pathlib import Path
parser = argparse.ArgumentParser()
parser.add_argument('infile', type=Path, help='The TOML file to read as input')
args = parser.parse_args()
with args.infile.open('r', encoding='utf-8') as file:
data = file.read()
def default(obj):
if isinstance(obj, (dt.date, dt.time, dt.datetime)):
return obj.isoformat()
print(json.dumps(parse_toml(data), default=default))
if __name__ == '__main__':
main()

@ -0,0 +1,39 @@
#!/usr/bin/env python3
"""
Usage: python3 ./devscripts/update-formulae.py <path-to-formulae-rb> <version>
version can be either 0-aligned (yt-dlp version) or normalized (PyPi version)
"""
# Allow direct execution
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import json
import re
import urllib.request
from devscripts.utils import read_file, write_file
filename, version = sys.argv[1:]
normalized_version = '.'.join(str(int(x)) for x in version.split('.'))
pypi_release = json.loads(urllib.request.urlopen(
'https://pypi.org/pypi/yt-dlp/%s/json' % normalized_version
).read().decode())
tarball_file = next(x for x in pypi_release['urls'] if x['filename'].endswith('.tar.gz'))
sha256sum = tarball_file['digests']['sha256']
url = tarball_file['url']
formulae_text = read_file(filename)
formulae_text = re.sub(r'sha256 "[0-9a-f]*?"', 'sha256 "%s"' % sha256sum, formulae_text, count=1)
formulae_text = re.sub(r'url "[^"]*?"', 'url "%s"' % url, formulae_text, count=1)
write_file(filename, formulae_text)

@ -9,22 +9,22 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import argparse import argparse
import contextlib import contextlib
import datetime as dt
import sys import sys
from datetime import datetime
from devscripts.utils import read_version, run_process, write_file from devscripts.utils import read_version, run_process, write_file
def get_new_version(version, revision): def get_new_version(version, revision):
if not version: if not version:
version = dt.datetime.now(dt.timezone.utc).strftime('%Y.%m.%d') version = datetime.utcnow().strftime('%Y.%m.%d')
if revision: if revision:
assert revision.isdecimal(), 'Revision must be a number' assert revision.isdigit(), 'Revision must be a number'
else: else:
old_version = read_version().split('.') old_version = read_version().split('.')
if version.split('.') == old_version[:3]: if version.split('.') == old_version[:3]:
revision = str(int(([*old_version, 0])[3]) + 1) revision = str(int((old_version + [0])[3]) + 1)
return f'{version}.{revision}' if revision else version return f'{version}.{revision}' if revision else version
@ -46,10 +46,6 @@ VARIANT = None
UPDATE_HINT = None UPDATE_HINT = None
CHANNEL = {channel!r} CHANNEL = {channel!r}
ORIGIN = {origin!r}
_pkg_version = {package_version!r}
''' '''
if __name__ == '__main__': if __name__ == '__main__':
@ -57,12 +53,6 @@ if __name__ == '__main__':
parser.add_argument( parser.add_argument(
'-c', '--channel', default='stable', '-c', '--channel', default='stable',
help='Select update channel (default: %(default)s)') help='Select update channel (default: %(default)s)')
parser.add_argument(
'-r', '--origin', default='local',
help='Select origin/repository (default: %(default)s)')
parser.add_argument(
'-s', '--suffix', default='',
help='Add an alphanumeric suffix to the package version, e.g. "dev"')
parser.add_argument( parser.add_argument(
'-o', '--output', default='yt_dlp/version.py', '-o', '--output', default='yt_dlp/version.py',
help='The output file to write to (default: %(default)s)') help='The output file to write to (default: %(default)s)')
@ -76,7 +66,6 @@ if __name__ == '__main__':
args.version if args.version and '.' in args.version args.version if args.version and '.' in args.version
else get_new_version(None, args.version)) else get_new_version(None, args.version))
write_file(args.output, VERSION_TEMPLATE.format( write_file(args.output, VERSION_TEMPLATE.format(
version=version, git_head=git_head, channel=args.channel, origin=args.origin, version=version, git_head=git_head, channel=args.channel))
package_version=f'{version}{args.suffix}'))
print(f'version={version} ({args.channel}), head={git_head}') print(f'version={version} ({args.channel}), head={git_head}')

@ -1,26 +0,0 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from pathlib import Path
from devscripts.make_changelog import create_changelog, create_parser
from devscripts.utils import read_file, read_version, write_file
# Always run after devscripts/update-version.py, and run before `make doc|pypi-files|tar|all`
if __name__ == '__main__':
parser = create_parser()
parser.description = 'Update an existing changelog file with an entry for a new release'
parser.add_argument(
'--changelog-path', type=Path, default=Path(__file__).parent.parent / 'Changelog.md',
help='path to the Changelog file')
args = parser.parse_args()
new_entry = create_changelog(args)
header, sep, changelog = read_file(args.changelog_path).partition('\n### ')
write_file(args.changelog_path, f'{header}{sep}{read_version()}\n{new_entry}\n{sep}{changelog}')

@ -13,11 +13,10 @@ def write_file(fname, content, mode='w'):
return f.write(content) return f.write(content)
def read_version(fname='yt_dlp/version.py', varname='__version__'): def read_version(fname='yt_dlp/version.py'):
"""Get the version without importing the package""" """Get the version without importing the package"""
items = {} exec(compile(read_file(fname), fname, 'exec'))
exec(compile(read_file(fname), fname, 'exec'), items) return locals()['__version__']
return items[varname]
def get_filename_args(has_infile=False, default_outfile=None): def get_filename_args(has_infile=False, default_outfile=None):

@ -9,15 +9,15 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import yt_dlp import yt_dlp
ZSH_COMPLETION_FILE = 'completions/zsh/_yt-dlp' ZSH_COMPLETION_FILE = "completions/zsh/_yt-dlp"
ZSH_COMPLETION_TEMPLATE = 'devscripts/zsh-completion.in' ZSH_COMPLETION_TEMPLATE = "devscripts/zsh-completion.in"
def build_completion(opt_parser): def build_completion(opt_parser):
opts = [opt for group in opt_parser.option_groups opts = [opt for group in opt_parser.option_groups
for opt in group.option_list] for opt in group.option_list]
opts_file = [opt for opt in opts if opt.metavar == 'FILE'] opts_file = [opt for opt in opts if opt.metavar == "FILE"]
opts_dir = [opt for opt in opts if opt.metavar == 'DIR'] opts_dir = [opt for opt in opts if opt.metavar == "DIR"]
fileopts = [] fileopts = []
for opt in opts_file: for opt in opts_file:
@ -38,11 +38,11 @@ def build_completion(opt_parser):
with open(ZSH_COMPLETION_TEMPLATE) as f: with open(ZSH_COMPLETION_TEMPLATE) as f:
template = f.read() template = f.read()
template = template.replace('{{fileopts}}', '|'.join(fileopts)) template = template.replace("{{fileopts}}", "|".join(fileopts))
template = template.replace('{{diropts}}', '|'.join(diropts)) template = template.replace("{{diropts}}", "|".join(diropts))
template = template.replace('{{flags}}', ' '.join(flags)) template = template.replace("{{flags}}", " ".join(flags))
with open(ZSH_COMPLETION_FILE, 'w') as f: with open(ZSH_COMPLETION_FILE, "w") as f:
f.write(template) f.write(template)

@ -4,7 +4,7 @@
import os import os
import sys import sys
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import platform import platform
@ -68,7 +68,7 @@ def exe(onedir):
'dist/', 'dist/',
onedir and f'{name}/', onedir and f'{name}/',
name, name,
OS_NAME == 'win32' and '.exe', OS_NAME == 'win32' and '.exe'
))) )))
@ -113,7 +113,7 @@ def windows_set_version(exe, version):
), ),
kids=[ kids=[
StringFileInfo([StringTable('040904B0', [ StringFileInfo([StringTable('040904B0', [
StringStruct('Comments', f'yt-dlp{suffix} Command Line Interface'), StringStruct('Comments', 'yt-dlp%s Command Line Interface' % suffix),
StringStruct('CompanyName', 'https://github.com/yt-dlp'), StringStruct('CompanyName', 'https://github.com/yt-dlp'),
StringStruct('FileDescription', 'yt-dlp%s' % (MACHINE and f' ({MACHINE})')), StringStruct('FileDescription', 'yt-dlp%s' % (MACHINE and f' ({MACHINE})')),
StringStruct('FileVersion', version), StringStruct('FileVersion', version),
@ -123,8 +123,8 @@ def windows_set_version(exe, version):
StringStruct('ProductName', f'yt-dlp{suffix}'), StringStruct('ProductName', f'yt-dlp{suffix}'),
StringStruct( StringStruct(
'ProductVersion', f'{version}{suffix} on Python {platform.python_version()}'), 'ProductVersion', f'{version}{suffix} on Python {platform.python_version()}'),
])]), VarFileInfo([VarStruct('Translation', [0, 1200])]), ])]), VarFileInfo([VarStruct('Translation', [0, 1200])])
], ]
)) ))

@ -1,384 +1,5 @@
[build-system] [build-system]
requires = ["hatchling"] build-backend = 'setuptools.build_meta'
build-backend = "hatchling.build" # https://github.com/yt-dlp/yt-dlp/issues/5941
# https://github.com/pypa/distutils/issues/17
[project] requires = ['setuptools > 50']
name = "yt-dlp"
maintainers = [
{name = "pukkandan", email = "pukkandan.ytdlp@gmail.com"},
{name = "Grub4K", email = "contact@grub4k.xyz"},
{name = "bashonly", email = "bashonly@protonmail.com"},
{name = "coletdjnz", email = "coletdjnz@protonmail.com"},
]
description = "A feature-rich command-line audio/video downloader"
readme = "README.md"
requires-python = ">=3.8"
keywords = [
"youtube-dl",
"video-downloader",
"youtube-downloader",
"sponsorblock",
"youtube-dlc",
"yt-dlp",
]
license = {file = "LICENSE"}
classifiers = [
"Topic :: Multimedia :: Video",
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Programming Language :: Python",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"License :: OSI Approved :: The Unlicense (Unlicense)",
"Operating System :: OS Independent",
]
dynamic = ["version"]
dependencies = [
"brotli; implementation_name=='cpython'",
"brotlicffi; implementation_name!='cpython'",
"certifi",
"mutagen",
"pycryptodomex",
"requests>=2.32.2,<3",
"urllib3>=1.26.17,<3",
"websockets>=12.0",
]
[project.optional-dependencies]
default = []
curl-cffi = ["curl-cffi==0.5.10; implementation_name=='cpython'"]
secretstorage = [
"cffi",
"secretstorage",
]
build = [
"build",
"hatchling",
"pip",
"setuptools",
"wheel",
]
dev = [
"pre-commit",
"yt-dlp[static-analysis]",
"yt-dlp[test]",
]
static-analysis = [
"autopep8~=2.0",
"ruff~=0.5.0",
]
test = [
"pytest~=8.1",
]
pyinstaller = [
"pyinstaller>=6.7.0", # for compat with setuptools>=70
]
py2exe = [
"py2exe>=0.12",
]
[project.urls]
Documentation = "https://github.com/yt-dlp/yt-dlp#readme"
Repository = "https://github.com/yt-dlp/yt-dlp"
Tracker = "https://github.com/yt-dlp/yt-dlp/issues"
Funding = "https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators"
[project.scripts]
yt-dlp = "yt_dlp:main"
[project.entry-points.pyinstaller40]
hook-dirs = "yt_dlp.__pyinstaller:get_hook_dirs"
[tool.hatch.build.targets.sdist]
include = [
"/yt_dlp",
"/devscripts",
"/test",
"/.gitignore", # included by default, needed for auto-excludes
"/Changelog.md",
"/LICENSE", # included as license
"/pyproject.toml", # included by default
"/README.md", # included as readme
"/setup.cfg",
"/supportedsites.md",
]
artifacts = [
"/yt_dlp/extractor/lazy_extractors.py",
"/completions",
"/AUTHORS", # included by default
"/README.txt",
"/yt-dlp.1",
]
[tool.hatch.build.targets.wheel]
packages = ["yt_dlp"]
artifacts = ["/yt_dlp/extractor/lazy_extractors.py"]
[tool.hatch.build.targets.wheel.shared-data]
"completions/bash/yt-dlp" = "share/bash-completion/completions/yt-dlp"
"completions/zsh/_yt-dlp" = "share/zsh/site-functions/_yt-dlp"
"completions/fish/yt-dlp.fish" = "share/fish/vendor_completions.d/yt-dlp.fish"
"README.txt" = "share/doc/yt_dlp/README.txt"
"yt-dlp.1" = "share/man/man1/yt-dlp.1"
[tool.hatch.version]
path = "yt_dlp/version.py"
pattern = "_pkg_version = '(?P<version>[^']+)'"
[tool.hatch.envs.default]
features = ["curl-cffi", "default"]
dependencies = ["pre-commit"]
path = ".venv"
installer = "uv"
[tool.hatch.envs.default.scripts]
setup = "pre-commit install --config .pre-commit-hatch.yaml"
yt-dlp = "python -Werror -Xdev -m yt_dlp {args}"
[tool.hatch.envs.hatch-static-analysis]
detached = true
features = ["static-analysis"]
dependencies = [] # override hatch ruff version
config-path = "pyproject.toml"
[tool.hatch.envs.hatch-static-analysis.scripts]
format-check = "autopep8 --diff {args:.}"
format-fix = "autopep8 --in-place {args:.}"
lint-check = "ruff check {args:.}"
lint-fix = "ruff check --fix {args:.}"
[tool.hatch.envs.hatch-test]
features = ["test"]
dependencies = [
"pytest-randomly~=3.15",
"pytest-rerunfailures~=14.0",
"pytest-xdist[psutil]~=3.5",
]
[tool.hatch.envs.hatch-test.scripts]
run = "python -m devscripts.run_tests {args}"
run-cov = "echo Code coverage not implemented && exit 1"
[[tool.hatch.envs.hatch-test.matrix]]
python = [
"3.8",
"3.9",
"3.10",
"3.11",
"3.12",
"pypy3.8",
"pypy3.9",
"pypy3.10",
]
[tool.ruff]
line-length = 120
[tool.ruff.lint]
ignore = [
"E402", # module-import-not-at-top-of-file
"E501", # line-too-long
"E731", # lambda-assignment
"E741", # ambiguous-variable-name
"UP036", # outdated-version-block
"B006", # mutable-argument-default
"B008", # function-call-in-default-argument
"B011", # assert-false
"B017", # assert-raises-exception
"B023", # function-uses-loop-variable (false positives)
"B028", # no-explicit-stacklevel
"B904", # raise-without-from-inside-except
"C401", # unnecessary-generator-set
"C402", # unnecessary-generator-dict
"PIE790", # unnecessary-placeholder
"SIM102", # collapsible-if
"SIM108", # if-else-block-instead-of-if-exp
"SIM112", # uncapitalized-environment-variables
"SIM113", # enumerate-for-loop
"SIM114", # if-with-same-arms
"SIM115", # open-file-with-context-handler
"SIM117", # multiple-with-statements
"SIM223", # expr-and-false
"SIM300", # yoda-conditions
"TD001", # invalid-todo-tag
"TD002", # missing-todo-author
"TD003", # missing-todo-link
"PLE0604", # invalid-all-object (false positives)
"PLE0643", # potential-index-error (false positives)
"PLW0603", # global-statement
"PLW1510", # subprocess-run-without-check
"PLW2901", # redefined-loop-name
"RUF001", # ambiguous-unicode-character-string
"RUF012", # mutable-class-default
"RUF100", # unused-noqa (flake8 has slightly different behavior)
]
select = [
"E", # pycodestyle Error
"W", # pycodestyle Warning
"F", # Pyflakes
"I", # isort
"Q", # flake8-quotes
"N803", # invalid-argument-name
"N804", # invalid-first-argument-name-for-class-method
"UP", # pyupgrade
"B", # flake8-bugbear
"A", # flake8-builtins
"COM", # flake8-commas
"C4", # flake8-comprehensions
"FA", # flake8-future-annotations
"ISC", # flake8-implicit-str-concat
"ICN003", # banned-import-from
"PIE", # flake8-pie
"T20", # flake8-print
"RSE", # flake8-raise
"RET504", # unnecessary-assign
"SIM", # flake8-simplify
"TID251", # banned-api
"TD", # flake8-todos
"PLC", # Pylint Convention
"PLE", # Pylint Error
"PLW", # Pylint Warning
"RUF", # Ruff-specific rules
]
[tool.ruff.lint.per-file-ignores]
"devscripts/lazy_load_template.py" = [
"F401", # unused-import
]
"!yt_dlp/extractor/**.py" = [
"I", # isort
"ICN003", # banned-import-from
"T20", # flake8-print
"A002", # builtin-argument-shadowing
"C408", # unnecessary-collection-call
]
"yt_dlp/jsinterp.py" = [
"UP031", # printf-string-formatting
]
[tool.ruff.lint.isort]
known-first-party = [
"bundle",
"devscripts",
"test",
]
relative-imports-order = "closest-to-furthest"
[tool.ruff.lint.flake8-quotes]
docstring-quotes = "double"
multiline-quotes = "single"
inline-quotes = "single"
avoid-escape = false
[tool.ruff.lint.pep8-naming]
classmethod-decorators = [
"yt_dlp.utils.classproperty",
]
[tool.ruff.lint.flake8-import-conventions]
banned-from = [
"base64",
"datetime",
"functools",
"glob",
"hashlib",
"itertools",
"json",
"math",
"os",
"pathlib",
"random",
"re",
"string",
"sys",
"time",
"urllib.parse",
"uuid",
"xml",
]
[tool.ruff.lint.flake8-tidy-imports.banned-api]
"yt_dlp.compat.compat_str".msg = "Use `str` instead."
"yt_dlp.compat.compat_b64decode".msg = "Use `base64.b64decode` instead."
"yt_dlp.compat.compat_urlparse".msg = "Use `urllib.parse` instead."
"yt_dlp.compat.compat_parse_qs".msg = "Use `urllib.parse.parse_qs` instead."
"yt_dlp.compat.compat_urllib_parse_unquote".msg = "Use `urllib.parse.unquote` instead."
"yt_dlp.compat.compat_urllib_parse_urlencode".msg = "Use `urllib.parse.urlencode` instead."
"yt_dlp.compat.compat_urllib_parse_urlparse".msg = "Use `urllib.parse.urlparse` instead."
"yt_dlp.compat.compat_shlex_quote".msg = "Use `yt_dlp.utils.shell_quote` instead."
"yt_dlp.utils.error_to_compat_str".msg = "Use `str` instead."
[tool.autopep8]
max_line_length = 120
recursive = true
exit-code = true
jobs = 0
select = [
"E101",
"E112",
"E113",
"E115",
"E116",
"E117",
"E121",
"E122",
"E123",
"E124",
"E125",
"E126",
"E127",
"E128",
"E129",
"E131",
"E201",
"E202",
"E203",
"E211",
"E221",
"E222",
"E223",
"E224",
"E225",
"E226",
"E227",
"E228",
"E231",
"E241",
"E242",
"E251",
"E252",
"E261",
"E262",
"E265",
"E266",
"E271",
"E272",
"E273",
"E274",
"E275",
"E301",
"E302",
"E303",
"E304",
"E305",
"E306",
"E502",
"E701",
"E702",
"E704",
"W391",
"W504",
]
[tool.pytest.ini_options]
addopts = "-ra -v --strict-markers"
markers = [
"download",
]

@ -0,0 +1,6 @@
mutagen
pycryptodomex
websockets
brotli; platform_python_implementation=='CPython'
brotlicffi; platform_python_implementation!='CPython'
certifi

@ -1,9 +1,14 @@
[wheel]
universal = true
[flake8] [flake8]
exclude = build,venv,.tox,.git,.pytest_cache exclude = build,venv,.tox,.git,.pytest_cache
ignore = E402,E501,E731,E741,W503 ignore = E402,E501,E731,E741,W503
max_line_length = 120 max_line_length = 120
per_file_ignores = per_file_ignores =
devscripts/lazy_load_template.py: F401 devscripts/lazy_load_template.py: F401
yt_dlp/utils/__init__.py: F401, F403
[autoflake] [autoflake]
@ -14,9 +19,15 @@ remove-duplicate-keys = true
remove-unused-variables = true remove-unused-variables = true
[tool:pytest]
addopts = -ra -v --strict-markers
markers =
download
[tox:tox] [tox:tox]
skipsdist = true skipsdist = true
envlist = py{38,39,310,311,312},pypy{38,39,310} envlist = py{36,37,38,39,310,311},pypy{36,37,38,39}
skip_missing_interpreters = true skip_missing_interpreters = true
[testenv] # tox [testenv] # tox
@ -29,7 +40,7 @@ setenv =
[isort] [isort]
py_version = 38 py_version = 37
multi_line_output = VERTICAL_HANGING_INDENT multi_line_output = VERTICAL_HANGING_INDENT
line_length = 80 line_length = 80
reverse_relative = true reverse_relative = true

@ -0,0 +1,175 @@
#!/usr/bin/env python3
# Allow execution from anywhere
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import subprocess
import warnings
try:
from setuptools import Command, find_packages, setup
setuptools_available = True
except ImportError:
from distutils.core import Command, setup
setuptools_available = False
from devscripts.utils import read_file, read_version
VERSION = read_version()
DESCRIPTION = 'A youtube-dl fork with additional features and patches'
LONG_DESCRIPTION = '\n\n'.join((
'Official repository: <https://github.com/yt-dlp/yt-dlp>',
'**PS**: Some links in this document will not work since this is a copy of the README.md from Github',
read_file('README.md')))
REQUIREMENTS = read_file('requirements.txt').splitlines()
def packages():
if setuptools_available:
return find_packages(exclude=('youtube_dl', 'youtube_dlc', 'test', 'ytdlp_plugins', 'devscripts'))
return [
'yt_dlp', 'yt_dlp.extractor', 'yt_dlp.downloader', 'yt_dlp.postprocessor', 'yt_dlp.compat',
]
def py2exe_params():
warnings.warn(
'py2exe builds do not support pycryptodomex and needs VC++14 to run. '
'It is recommended to run "pyinst.py" to build using pyinstaller instead')
return {
'console': [{
'script': './yt_dlp/__main__.py',
'dest_base': 'yt-dlp',
'icon_resources': [(1, 'devscripts/logo.ico')],
}],
'version_info': {
'version': VERSION,
'description': DESCRIPTION,
'comments': LONG_DESCRIPTION.split('\n')[0],
'product_name': 'yt-dlp',
'product_version': VERSION,
},
'options': {
'bundle_files': 0,
'compressed': 1,
'optimize': 2,
'dist_dir': './dist',
'excludes': ['Crypto', 'Cryptodome'], # py2exe cannot import Crypto
'dll_excludes': ['w9xpopen.exe', 'crypt32.dll'],
# Modules that are only imported dynamically must be added here
'includes': ['yt_dlp.compat._legacy'],
},
'zipfile': None,
}
def build_params():
files_spec = [
('share/bash-completion/completions', ['completions/bash/yt-dlp']),
('share/zsh/site-functions', ['completions/zsh/_yt-dlp']),
('share/fish/vendor_completions.d', ['completions/fish/yt-dlp.fish']),
('share/doc/yt_dlp', ['README.txt']),
('share/man/man1', ['yt-dlp.1'])
]
data_files = []
for dirname, files in files_spec:
resfiles = []
for fn in files:
if not os.path.exists(fn):
warnings.warn(f'Skipping file {fn} since it is not present. Try running " make pypi-files " first')
else:
resfiles.append(fn)
data_files.append((dirname, resfiles))
params = {'data_files': data_files}
if setuptools_available:
params['entry_points'] = {
'console_scripts': ['yt-dlp = yt_dlp:main'],
'pyinstaller40': ['hook-dirs = yt_dlp.__pyinstaller:get_hook_dirs'],
}
else:
params['scripts'] = ['yt-dlp']
return params
class build_lazy_extractors(Command):
description = 'Build the extractor lazy loading module'
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
if self.dry_run:
print('Skipping build of lazy extractors in dry run mode')
return
subprocess.run([sys.executable, 'devscripts/make_lazy_extractors.py'])
def main():
if sys.argv[1:2] == ['py2exe']:
params = py2exe_params()
try:
from py2exe import freeze
except ImportError:
import py2exe # noqa: F401
warnings.warn('You are using an outdated version of py2exe. Support for this version will be removed in the future')
params['console'][0].update(params.pop('version_info'))
params['options'] = {'py2exe': params.pop('options')}
else:
return freeze(**params)
else:
params = build_params()
setup(
name='yt-dlp',
version=VERSION,
maintainer='pukkandan',
maintainer_email='pukkandan.ytdlp@gmail.com',
description=DESCRIPTION,
long_description=LONG_DESCRIPTION,
long_description_content_type='text/markdown',
url='https://github.com/yt-dlp/yt-dlp',
packages=packages(),
install_requires=REQUIREMENTS,
python_requires='>=3.7',
project_urls={
'Documentation': 'https://github.com/yt-dlp/yt-dlp#readme',
'Source': 'https://github.com/yt-dlp/yt-dlp',
'Tracker': 'https://github.com/yt-dlp/yt-dlp/issues',
'Funding': 'https://github.com/yt-dlp/yt-dlp/blob/master/Collaborators.md#collaborators',
},
classifiers=[
'Topic :: Multimedia :: Video',
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
'Programming Language :: Python',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: Implementation',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
'License :: Public Domain',
'Operating System :: OS Independent',
],
cmdclass={'build_lazy_extractors': build_lazy_extractors},
**params
)
main()

File diff suppressed because it is too large Load Diff

@ -1,64 +0,0 @@
import inspect
import pytest
from yt_dlp.networking import RequestHandler
from yt_dlp.networking.common import _REQUEST_HANDLERS
from yt_dlp.utils._utils import _YDLLogger as FakeLogger
@pytest.fixture
def handler(request):
RH_KEY = getattr(request, 'param', None)
if not RH_KEY:
return
if inspect.isclass(RH_KEY) and issubclass(RH_KEY, RequestHandler):
handler = RH_KEY
elif RH_KEY in _REQUEST_HANDLERS:
handler = _REQUEST_HANDLERS[RH_KEY]
else:
pytest.skip(f'{RH_KEY} request handler is not available')
class HandlerWrapper(handler):
RH_KEY = handler.RH_KEY
def __init__(self, **kwargs):
super().__init__(logger=FakeLogger, **kwargs)
return HandlerWrapper
@pytest.fixture(autouse=True)
def skip_handler(request, handler):
"""usage: pytest.mark.skip_handler('my_handler', 'reason')"""
for marker in request.node.iter_markers('skip_handler'):
if marker.args[0] == handler.RH_KEY:
pytest.skip(marker.args[1] if len(marker.args) > 1 else '')
@pytest.fixture(autouse=True)
def skip_handler_if(request, handler):
"""usage: pytest.mark.skip_handler_if('my_handler', lambda request: True, 'reason')"""
for marker in request.node.iter_markers('skip_handler_if'):
if marker.args[0] == handler.RH_KEY and marker.args[1](request):
pytest.skip(marker.args[2] if len(marker.args) > 2 else '')
@pytest.fixture(autouse=True)
def skip_handlers_if(request, handler):
"""usage: pytest.mark.skip_handlers_if(lambda request, handler: True, 'reason')"""
for marker in request.node.iter_markers('skip_handlers_if'):
if handler and marker.args[0](request, handler):
pytest.skip(marker.args[1] if len(marker.args) > 1 else '')
def pytest_configure(config):
config.addinivalue_line(
'markers', 'skip_handler(handler): skip test for the given handler',
)
config.addinivalue_line(
'markers', 'skip_handler_if(handler): skip test for the given handler if condition is true',
)
config.addinivalue_line(
'markers', 'skip_handlers_if(handler): skip test for handlers when the condition is true',
)

@ -10,14 +10,14 @@ import types
import yt_dlp.extractor import yt_dlp.extractor
from yt_dlp import YoutubeDL from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_os_name from yt_dlp.compat import compat_os_name
from yt_dlp.utils import preferredencoding, try_call, write_string, find_available_port from yt_dlp.utils import preferredencoding, write_string
if 'pytest' in sys.modules: if 'pytest' in sys.modules:
import pytest import pytest
is_download_test = pytest.mark.download is_download_test = pytest.mark.download
else: else:
def is_download_test(test_class): def is_download_test(testClass):
return test_class return testClass
def get_params(override=None): def get_params(override=None):
@ -45,10 +45,10 @@ def try_rm(filename):
def report_warning(message, *args, **kwargs): def report_warning(message, *args, **kwargs):
""" '''
Print the message to stderr, it will be prefixed with 'WARNING:' Print the message to stderr, it will be prefixed with 'WARNING:'
If stderr is a tty file the 'WARNING:' will be colored If stderr is a tty file the 'WARNING:' will be colored
""" '''
if sys.stderr.isatty() and compat_os_name != 'nt': if sys.stderr.isatty() and compat_os_name != 'nt':
_msg_header = '\033[0;33mWARNING:\033[0m' _msg_header = '\033[0;33mWARNING:\033[0m'
else: else:
@ -138,14 +138,15 @@ def expect_value(self, got, expected, field):
elif isinstance(expected, list) and isinstance(got, list): elif isinstance(expected, list) and isinstance(got, list):
self.assertEqual( self.assertEqual(
len(expected), len(got), len(expected), len(got),
f'Expect a list of length {len(expected)}, but got a list of length {len(got)} for field {field}') 'Expect a list of length %d, but got a list of length %d for field %s' % (
len(expected), len(got), field))
for index, (item_got, item_expected) in enumerate(zip(got, expected)): for index, (item_got, item_expected) in enumerate(zip(got, expected)):
type_got = type(item_got) type_got = type(item_got)
type_expected = type(item_expected) type_expected = type(item_expected)
self.assertEqual( self.assertEqual(
type_expected, type_got, type_expected, type_got,
f'Type mismatch for list item at index {index} for field {field}, ' 'Type mismatch for list item at index %d for field %s, expected %r, got %r' % (
f'expected {type_expected!r}, got {type_got!r}') index, field, type_expected, type_got))
expect_value(self, item_got, item_expected, field) expect_value(self, item_got, item_expected, field)
else: else:
if isinstance(expected, str) and expected.startswith('md5:'): if isinstance(expected, str) and expected.startswith('md5:'):
@ -213,23 +214,14 @@ def sanitize_got_info_dict(got_dict):
test_info_dict = { test_info_dict = {
key: sanitize(key, value) for key, value in got_dict.items() key: sanitize(key, value) for key, value in got_dict.items()
if value is not None and key not in IGNORED_FIELDS and ( if value is not None and key not in IGNORED_FIELDS and not any(
not any(key.startswith(f'{prefix}_') for prefix in IGNORED_PREFIXES) key.startswith(f'{prefix}_') for prefix in IGNORED_PREFIXES)
or key == '_old_archive_ids')
} }
# display_id may be generated from id # display_id may be generated from id
if test_info_dict.get('display_id') == test_info_dict.get('id'): if test_info_dict.get('display_id') == test_info_dict.get('id'):
test_info_dict.pop('display_id') test_info_dict.pop('display_id')
# Remove deprecated fields
for old in YoutubeDL._deprecated_multivalue_fields:
test_info_dict.pop(old, None)
# release_year may be generated from release_date
if try_call(lambda: test_info_dict['release_year'] == int(test_info_dict['release_date'][:4])):
test_info_dict.pop('release_year')
# Check url for flat entries # Check url for flat entries
if got_dict.get('_type', 'video') != 'video' and got_dict.get('url'): if got_dict.get('_type', 'video') != 'video' and got_dict.get('url'):
test_info_dict['url'] = got_dict['url'] test_info_dict['url'] = got_dict['url']
@ -245,11 +237,11 @@ def expect_info_dict(self, got_dict, expected_dict):
if expected_dict.get('ext'): if expected_dict.get('ext'):
mandatory_fields.extend(('url', 'ext')) mandatory_fields.extend(('url', 'ext'))
for key in mandatory_fields: for key in mandatory_fields:
self.assertTrue(got_dict.get(key), f'Missing mandatory field {key}') self.assertTrue(got_dict.get(key), 'Missing mandatory field %s' % key)
# Check for mandatory fields that are automatically set by YoutubeDL # Check for mandatory fields that are automatically set by YoutubeDL
if got_dict.get('_type', 'video') == 'video': if got_dict.get('_type', 'video') == 'video':
for key in ['webpage_url', 'extractor', 'extractor_key']: for key in ['webpage_url', 'extractor', 'extractor_key']:
self.assertTrue(got_dict.get(key), f'Missing field: {key}') self.assertTrue(got_dict.get(key), 'Missing field: %s' % key)
test_info_dict = sanitize_got_info_dict(got_dict) test_info_dict = sanitize_got_info_dict(got_dict)
@ -257,7 +249,7 @@ def expect_info_dict(self, got_dict, expected_dict):
if missing_keys: if missing_keys:
def _repr(v): def _repr(v):
if isinstance(v, str): if isinstance(v, str):
return "'{}'".format(v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n')) return "'%s'" % v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n')
elif isinstance(v, type): elif isinstance(v, type):
return v.__name__ return v.__name__
else: else:
@ -274,7 +266,8 @@ def expect_info_dict(self, got_dict, expected_dict):
write_string(info_dict_str.replace('\n', '\n '), out=sys.stderr) write_string(info_dict_str.replace('\n', '\n '), out=sys.stderr)
self.assertFalse( self.assertFalse(
missing_keys, missing_keys,
'Missing keys in test definition: {}'.format(', '.join(sorted(missing_keys)))) 'Missing keys in test definition: %s' % (
', '.join(sorted(missing_keys))))
def assertRegexpMatches(self, text, regexp, msg=None): def assertRegexpMatches(self, text, regexp, msg=None):
@ -283,9 +276,9 @@ def assertRegexpMatches(self, text, regexp, msg=None):
else: else:
m = re.match(regexp, text) m = re.match(regexp, text)
if not m: if not m:
note = f'Regexp didn\'t match: {regexp!r} not found' note = 'Regexp didn\'t match: %r not found' % (regexp)
if len(text) < 1000: if len(text) < 1000:
note += f' in {text!r}' note += ' in %r' % text
if msg is None: if msg is None:
msg = note msg = note
else: else:
@ -308,7 +301,7 @@ def assertLessEqual(self, got, expected, msg=None):
def assertEqual(self, got, expected, msg=None): def assertEqual(self, got, expected, msg=None):
if got != expected: if not (got == expected):
if msg is None: if msg is None:
msg = f'{got!r} not equal to {expected!r}' msg = f'{got!r} not equal to {expected!r}'
self.assertTrue(got == expected, msg) self.assertTrue(got == expected, msg)
@ -331,13 +324,3 @@ def http_server_port(httpd):
else: else:
sock = httpd.socket sock = httpd.socket
return sock.getsockname()[1] return sock.getsockname()[1]
def verify_address_availability(address):
if find_available_port(address) is None:
pytest.skip(f'Unable to bind to source address {address} (address may not exist)')
def validate_and_send(rh, req):
rh.validate(req)
return rh.send(req)

@ -262,19 +262,19 @@ class TestInfoExtractor(unittest.TestCase):
''', ''',
{ {
'chapters': [ 'chapters': [
{'title': 'Explosie Turnhout', 'start_time': 70, 'end_time': 440}, {"title": "Explosie Turnhout", "start_time": 70, "end_time": 440},
{'title': 'Jaarwisseling', 'start_time': 440, 'end_time': 1179}, {"title": "Jaarwisseling", "start_time": 440, "end_time": 1179},
{'title': 'Natuurbranden Colorado', 'start_time': 1179, 'end_time': 1263}, {"title": "Natuurbranden Colorado", "start_time": 1179, "end_time": 1263},
{'title': 'Klimaatverandering', 'start_time': 1263, 'end_time': 1367}, {"title": "Klimaatverandering", "start_time": 1263, "end_time": 1367},
{'title': 'Zacht weer', 'start_time': 1367, 'end_time': 1383}, {"title": "Zacht weer", "start_time": 1367, "end_time": 1383},
{'title': 'Financiële balans', 'start_time': 1383, 'end_time': 1484}, {"title": "Financiële balans", "start_time": 1383, "end_time": 1484},
{'title': 'Club Brugge', 'start_time': 1484, 'end_time': 1575}, {"title": "Club Brugge", "start_time": 1484, "end_time": 1575},
{'title': 'Mentale gezondheid bij topsporters', 'start_time': 1575, 'end_time': 1728}, {"title": "Mentale gezondheid bij topsporters", "start_time": 1575, "end_time": 1728},
{'title': 'Olympische Winterspelen', 'start_time': 1728, 'end_time': 1873}, {"title": "Olympische Winterspelen", "start_time": 1728, "end_time": 1873},
{'title': 'Sober oudjaar in Nederland', 'start_time': 1873, 'end_time': 2079.23}, {"title": "Sober oudjaar in Nederland", "start_time": 1873, "end_time": 2079.23}
], ],
'title': 'Het journaal - Aflevering 365 (Seizoen 2021)', 'title': 'Het journaal - Aflevering 365 (Seizoen 2021)'
}, {}, }, {}
), ),
( (
# test multiple thumbnails in a list # test multiple thumbnails in a list
@ -301,13 +301,13 @@ class TestInfoExtractor(unittest.TestCase):
'thumbnails': [{'url': 'https://www.rainews.it/cropgd/640x360/dl/img/2021/12/30/1640886376927_GettyImages.jpg'}], 'thumbnails': [{'url': 'https://www.rainews.it/cropgd/640x360/dl/img/2021/12/30/1640886376927_GettyImages.jpg'}],
}, },
{}, {},
), )
] ]
for html, expected_dict, search_json_ld_kwargs in _TESTS: for html, expected_dict, search_json_ld_kwargs in _TESTS:
expect_dict( expect_dict(
self, self,
self.ie._search_json_ld(html, None, **search_json_ld_kwargs), self.ie._search_json_ld(html, None, **search_json_ld_kwargs),
expected_dict, expected_dict
) )
def test_download_json(self): def test_download_json(self):
@ -366,7 +366,7 @@ class TestInfoExtractor(unittest.TestCase):
'height': 740, 'height': 740,
'tbr': 1500, 'tbr': 1500,
}], }],
'thumbnail': '//pics.r18.com/digital/amateur/mgmr105/mgmr105jp.jpg', 'thumbnail': '//pics.r18.com/digital/amateur/mgmr105/mgmr105jp.jpg'
}) })
# from https://www.csfd.cz/ # from https://www.csfd.cz/
@ -419,9 +419,9 @@ class TestInfoExtractor(unittest.TestCase):
'height': 1080, 'height': 1080,
}], }],
'subtitles': { 'subtitles': {
'cs': [{'url': 'https://video.csfd.cz/files/subtitles/163/344/163344115_4c388b.srt'}], 'cs': [{'url': 'https://video.csfd.cz/files/subtitles/163/344/163344115_4c388b.srt'}]
}, },
'thumbnail': 'https://img.csfd.cz/files/images/film/video/preview/163/344/163344118_748d20.png?h360', 'thumbnail': 'https://img.csfd.cz/files/images/film/video/preview/163/344/163344118_748d20.png?h360'
}) })
# from https://tamasha.com/v/Kkdjw # from https://tamasha.com/v/Kkdjw
@ -452,7 +452,7 @@ class TestInfoExtractor(unittest.TestCase):
'ext': 'mp4', 'ext': 'mp4',
'format_id': '144p', 'format_id': '144p',
'height': 144, 'height': 144,
}], }]
}) })
# from https://www.directvnow.com # from https://www.directvnow.com
@ -470,7 +470,7 @@ class TestInfoExtractor(unittest.TestCase):
'formats': [{ 'formats': [{
'ext': 'mp4', 'ext': 'mp4',
'url': 'https://cdn.directv.com/content/dam/dtv/prod/website_directvnow-international/videos/DTVN_hdr_HBO_v3.mp4', 'url': 'https://cdn.directv.com/content/dam/dtv/prod/website_directvnow-international/videos/DTVN_hdr_HBO_v3.mp4',
}], }]
}) })
# from https://www.directvnow.com # from https://www.directvnow.com
@ -488,7 +488,7 @@ class TestInfoExtractor(unittest.TestCase):
'formats': [{ 'formats': [{
'url': 'https://cdn.directv.com/content/dam/dtv/prod/website_directvnow-international/videos/DTVN_hdr_HBO_v3.mp4', 'url': 'https://cdn.directv.com/content/dam/dtv/prod/website_directvnow-international/videos/DTVN_hdr_HBO_v3.mp4',
'ext': 'mp4', 'ext': 'mp4',
}], }]
}) })
# from https://www.klarna.com/uk/ # from https://www.klarna.com/uk/
@ -547,8 +547,8 @@ class TestInfoExtractor(unittest.TestCase):
'id': 'XEgvuql4', 'id': 'XEgvuql4',
'formats': [{ 'formats': [{
'url': 'rtmp://192.138.214.154/live/sjclive', 'url': 'rtmp://192.138.214.154/live/sjclive',
'ext': 'flv', 'ext': 'flv'
}], }]
}) })
# from https://www.pornoxo.com/videos/7564/striptease-from-sexy-secretary/ # from https://www.pornoxo.com/videos/7564/striptease-from-sexy-secretary/
@ -588,8 +588,8 @@ class TestInfoExtractor(unittest.TestCase):
'thumbnail': 'https://t03.vipstreamservice.com/thumbs/pxo-full/2009-12/14/a4b2157147afe5efa93ce1978e0265289c193874e02597.flv-full-13.jpg', 'thumbnail': 'https://t03.vipstreamservice.com/thumbs/pxo-full/2009-12/14/a4b2157147afe5efa93ce1978e0265289c193874e02597.flv-full-13.jpg',
'formats': [{ 'formats': [{
'url': 'https://cdn.pornoxo.com/key=MF+oEbaxqTKb50P-w9G3nA,end=1489689259,ip=104.199.146.27/ip=104.199.146.27/speed=6573765/buffer=3.0/2009-12/4b2157147afe5efa93ce1978e0265289c193874e02597.flv', 'url': 'https://cdn.pornoxo.com/key=MF+oEbaxqTKb50P-w9G3nA,end=1489689259,ip=104.199.146.27/ip=104.199.146.27/speed=6573765/buffer=3.0/2009-12/4b2157147afe5efa93ce1978e0265289c193874e02597.flv',
'ext': 'flv', 'ext': 'flv'
}], }]
}) })
# from http://www.indiedb.com/games/king-machine/videos # from http://www.indiedb.com/games/king-machine/videos
@ -610,12 +610,12 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'formats': [{ 'formats': [{
'url': 'http://cdn.dbolical.com/cache/videos/games/1/50/49678/encode_mp4/king-machine-trailer.mp4', 'url': 'http://cdn.dbolical.com/cache/videos/games/1/50/49678/encode_mp4/king-machine-trailer.mp4',
'height': 360, 'height': 360,
'ext': 'mp4', 'ext': 'mp4'
}, { }, {
'url': 'http://cdn.dbolical.com/cache/videos/games/1/50/49678/encode720p_mp4/king-machine-trailer.mp4', 'url': 'http://cdn.dbolical.com/cache/videos/games/1/50/49678/encode720p_mp4/king-machine-trailer.mp4',
'height': 720, 'height': 720,
'ext': 'mp4', 'ext': 'mp4'
}], }]
}) })
def test_parse_m3u8_formats(self): def test_parse_m3u8_formats(self):
@ -866,7 +866,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'height': 1080, 'height': 1080,
'vcodec': 'avc1.64002a', 'vcodec': 'avc1.64002a',
}], }],
{}, {}
), ),
( (
'bipbop_16x9', 'bipbop_16x9',
@ -990,45 +990,45 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'en': [{ 'en': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/eng/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/eng/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}, { }, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/eng_forced/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/eng_forced/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}], }],
'fr': [{ 'fr': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/fra/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/fra/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}, { }, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/fra_forced/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/fra_forced/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}], }],
'es': [{ 'es': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/spa/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/spa/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}, { }, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/spa_forced/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/spa_forced/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}], }],
'ja': [{ 'ja': [{
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/jpn/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/jpn/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}, { }, {
'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/jpn_forced/prog_index.m3u8', 'url': 'https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/subtitles/jpn_forced/prog_index.m3u8',
'ext': 'vtt', 'ext': 'vtt',
'protocol': 'm3u8_native', 'protocol': 'm3u8_native'
}], }],
}, }
), ),
] ]
for m3u8_file, m3u8_url, expected_formats, expected_subs in _TEST_CASES: for m3u8_file, m3u8_url, expected_formats, expected_subs in _TEST_CASES:
with open(f'./test/testdata/m3u8/{m3u8_file}.m3u8', encoding='utf-8') as f: with open('./test/testdata/m3u8/%s.m3u8' % m3u8_file, encoding='utf-8') as f:
formats, subs = self.ie._parse_m3u8_formats_and_subtitles( formats, subs = self.ie._parse_m3u8_formats_and_subtitles(
f.read(), m3u8_url, ext='mp4') f.read(), m3u8_url, ext='mp4')
self.ie._sort_formats(formats) self.ie._sort_formats(formats)
@ -1366,14 +1366,14 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd', 'url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/manifest.mpd',
'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/', 'fragment_base_url': 'https://sdn-global-streaming-cache-3qsdn.akamaized.net/stream/3144/files/17/07/672975/3144-kZT4LWMQw6Rh7Kpd.ism/dash/',
'protocol': 'http_dash_segments', 'protocol': 'http_dash_segments',
}, }
], ]
}, },
), )
] ]
for mpd_file, mpd_url, mpd_base_url, expected_formats, expected_subtitles in _TEST_CASES: for mpd_file, mpd_url, mpd_base_url, expected_formats, expected_subtitles in _TEST_CASES:
with open(f'./test/testdata/mpd/{mpd_file}.mpd', encoding='utf-8') as f: with open('./test/testdata/mpd/%s.mpd' % mpd_file, encoding='utf-8') as f:
formats, subtitles = self.ie._parse_mpd_formats_and_subtitles( formats, subtitles = self.ie._parse_mpd_formats_and_subtitles(
compat_etree_fromstring(f.read().encode()), compat_etree_fromstring(f.read().encode()),
mpd_base_url=mpd_base_url, mpd_url=mpd_url) mpd_base_url=mpd_base_url, mpd_url=mpd_url)
@ -1408,7 +1408,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'sampling_rate': 48000, 'sampling_rate': 48000,
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video-100', 'format_id': 'video-100',
@ -1431,7 +1431,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401FDA0544EFFC2D002CBC40000003004000000C03C60CA80000000168EF32C8', 'codec_private_data': '00000001674D401FDA0544EFFC2D002CBC40000003004000000C03C60CA80000000168EF32C8',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video-326', 'format_id': 'video-326',
@ -1454,7 +1454,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401FDA0241FE23FFC3BC83BA44000003000400000300C03C60CA800000000168EF32C8', 'codec_private_data': '00000001674D401FDA0241FE23FFC3BC83BA44000003000400000300C03C60CA800000000168EF32C8',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video-698', 'format_id': 'video-698',
@ -1477,7 +1477,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401FDA0350BFB97FF06AF06AD1000003000100000300300F1832A00000000168EF32C8', 'codec_private_data': '00000001674D401FDA0350BFB97FF06AF06AD1000003000100000300300F1832A00000000168EF32C8',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video-1493', 'format_id': 'video-1493',
@ -1500,7 +1500,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401FDA011C3DE6FFF0D890D871000003000100000300300F1832A00000000168EF32C8', 'codec_private_data': '00000001674D401FDA011C3DE6FFF0D890D871000003000100000300300F1832A00000000168EF32C8',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video-4482', 'format_id': 'video-4482',
@ -1523,7 +1523,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401FDA01A816F97FFC1ABC1AB440000003004000000C03C60CA80000000168EF32C8', 'codec_private_data': '00000001674D401FDA01A816F97FFC1ABC1AB440000003004000000C03C60CA80000000168EF32C8',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}], }],
{ {
@ -1538,10 +1538,10 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'duration': 8880746666, 'duration': 8880746666,
'timescale': 10000000, 'timescale': 10000000,
'fourcc': 'TTML', 'fourcc': 'TTML',
'codec_private_data': '', 'codec_private_data': ''
}, }
}, }
], ]
}, },
), ),
( (
@ -1571,7 +1571,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'sampling_rate': 48000, 'sampling_rate': 48000,
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'audio_deu_1-224', 'format_id': 'audio_deu_1-224',
@ -1597,7 +1597,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'sampling_rate': 48000, 'sampling_rate': 48000,
'channels': 6, 'channels': 6,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-23', 'format_id': 'video_deu-23',
@ -1622,7 +1622,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '000000016742C00CDB06077E5C05A808080A00000300020000030009C0C02EE0177CC6300F142AE00000000168CA8DC8', 'codec_private_data': '000000016742C00CDB06077E5C05A808080A00000300020000030009C0C02EE0177CC6300F142AE00000000168CA8DC8',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-403', 'format_id': 'video_deu-403',
@ -1647,7 +1647,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D4014E98323B602D4040405000003000100000300320F1429380000000168EAECF2', 'codec_private_data': '00000001674D4014E98323B602D4040405000003000100000300320F1429380000000168EAECF2',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-680', 'format_id': 'video_deu-680',
@ -1672,7 +1672,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401EE981405FF2E02D4040405000000300100000030320F162D3800000000168EAECF2', 'codec_private_data': '00000001674D401EE981405FF2E02D4040405000000300100000030320F162D3800000000168EAECF2',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-1253', 'format_id': 'video_deu-1253',
@ -1698,7 +1698,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401EE981405FF2E02D4040405000000300100000030320F162D3800000000168EAECF2', 'codec_private_data': '00000001674D401EE981405FF2E02D4040405000000300100000030320F162D3800000000168EAECF2',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-2121', 'format_id': 'video_deu-2121',
@ -1723,7 +1723,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D401EECA0601BD80B50101014000003000400000300C83C58B6580000000168E93B3C80', 'codec_private_data': '00000001674D401EECA0601BD80B50101014000003000400000300C83C58B6580000000168E93B3C80',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-3275', 'format_id': 'video_deu-3275',
@ -1748,7 +1748,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D4020ECA02802DD80B501010140000003004000000C83C60C65800000000168E93B3C80', 'codec_private_data': '00000001674D4020ECA02802DD80B501010140000003004000000C83C60C65800000000168E93B3C80',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-5300', 'format_id': 'video_deu-5300',
@ -1773,7 +1773,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D4028ECA03C0113F2E02D4040405000000300100000030320F18319600000000168E93B3C80', 'codec_private_data': '00000001674D4028ECA03C0113F2E02D4040405000000300100000030320F18319600000000168E93B3C80',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}, { }, {
'format_id': 'video_deu-8079', 'format_id': 'video_deu-8079',
@ -1798,7 +1798,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'codec_private_data': '00000001674D4028ECA03C0113F2E02D4040405000000300100000030320F18319600000000168E93B3C80', 'codec_private_data': '00000001674D4028ECA03C0113F2E02D4040405000000300100000030320F18319600000000168E93B3C80',
'channels': 2, 'channels': 2,
'bits_per_sample': 16, 'bits_per_sample': 16,
'nal_unit_length_field': 4, 'nal_unit_length_field': 4
}, },
}], }],
{}, {},
@ -1806,7 +1806,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
] ]
for ism_file, ism_url, expected_formats, expected_subtitles in _TEST_CASES: for ism_file, ism_url, expected_formats, expected_subtitles in _TEST_CASES:
with open(f'./test/testdata/ism/{ism_file}.Manifest', encoding='utf-8') as f: with open('./test/testdata/ism/%s.Manifest' % ism_file, encoding='utf-8') as f:
formats, subtitles = self.ie._parse_ism_formats_and_subtitles( formats, subtitles = self.ie._parse_ism_formats_and_subtitles(
compat_etree_fromstring(f.read().encode()), ism_url=ism_url) compat_etree_fromstring(f.read().encode()), ism_url=ism_url)
self.ie._sort_formats(formats) self.ie._sort_formats(formats)
@ -1827,12 +1827,12 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
'tbr': 2148, 'tbr': 2148,
'width': 1280, 'width': 1280,
'height': 720, 'height': 720,
}], }]
), ),
] ]
for f4m_file, f4m_url, expected_formats in _TEST_CASES: for f4m_file, f4m_url, expected_formats in _TEST_CASES:
with open(f'./test/testdata/f4m/{f4m_file}.f4m', encoding='utf-8') as f: with open('./test/testdata/f4m/%s.f4m' % f4m_file, encoding='utf-8') as f:
formats = self.ie._parse_f4m_formats( formats = self.ie._parse_f4m_formats(
compat_etree_fromstring(f.read().encode()), compat_etree_fromstring(f.read().encode()),
f4m_url, None) f4m_url, None)
@ -1873,13 +1873,13 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
}, { }, {
'manifest_url': 'https://example.org/src/foo_xspf.xspf', 'manifest_url': 'https://example.org/src/foo_xspf.xspf',
'url': 'https://example.com/track3.mp3', 'url': 'https://example.com/track3.mp3',
}], }]
}], }]
), ),
] ]
for xspf_file, xspf_url, expected_entries in _TEST_CASES: for xspf_file, xspf_url, expected_entries in _TEST_CASES:
with open(f'./test/testdata/xspf/{xspf_file}.xspf', encoding='utf-8') as f: with open('./test/testdata/xspf/%s.xspf' % xspf_file, encoding='utf-8') as f:
entries = self.ie._parse_xspf( entries = self.ie._parse_xspf(
compat_etree_fromstring(f.read().encode()), compat_etree_fromstring(f.read().encode()),
xspf_file, xspf_url=xspf_url, xspf_base_url=xspf_url) xspf_file, xspf_url=xspf_url, xspf_base_url=xspf_url)
@ -1902,19 +1902,10 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
server_thread.start() server_thread.start()
(content, urlh) = self.ie._download_webpage_handle( (content, urlh) = self.ie._download_webpage_handle(
f'http://127.0.0.1:{port}/teapot', None, 'http://127.0.0.1:%d/teapot' % port, None,
expected_status=TEAPOT_RESPONSE_STATUS) expected_status=TEAPOT_RESPONSE_STATUS)
self.assertEqual(content, TEAPOT_RESPONSE_BODY) self.assertEqual(content, TEAPOT_RESPONSE_BODY)
def test_search_nextjs_data(self):
data = '<script id="__NEXT_DATA__" type="application/json">{"props":{}}</script>'
self.assertEqual(self.ie._search_nextjs_data(data, None), {'props': {}})
self.assertEqual(self.ie._search_nextjs_data('', None, fatal=False), {})
self.assertEqual(self.ie._search_nextjs_data('', None, default=None), None)
self.assertEqual(self.ie._search_nextjs_data('', None, default={}), {})
with self.assertWarns(DeprecationWarning):
self.assertEqual(self.ie._search_nextjs_data('', None, default='{}'), {})
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()

@ -8,11 +8,10 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import contextlib
import copy import copy
import json import json
from test.helper import FakeYDL, assertRegexpMatches, try_rm from test.helper import FakeYDL, assertRegexpMatches
from yt_dlp import YoutubeDL from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_os_name from yt_dlp.compat import compat_os_name
from yt_dlp.extractor import YoutubeIE from yt_dlp.extractor import YoutubeIE
@ -25,7 +24,6 @@ from yt_dlp.utils import (
int_or_none, int_or_none,
match_filter_func, match_filter_func,
) )
from yt_dlp.utils.traversal import traverse_obj
TEST_URL = 'http://localhost/sample.mp4' TEST_URL = 'http://localhost/sample.mp4'
@ -130,8 +128,8 @@ class TestFormatSelection(unittest.TestCase):
'allow_multiple_audio_streams': multi, 'allow_multiple_audio_streams': multi,
}) })
ydl.process_ie_result(info_dict.copy()) ydl.process_ie_result(info_dict.copy())
downloaded = [x['format_id'] for x in ydl.downloaded_info_dicts] downloaded = map(lambda x: x['format_id'], ydl.downloaded_info_dicts)
self.assertEqual(downloaded, list(expected)) self.assertEqual(list(downloaded), list(expected))
test('20/47', '47') test('20/47', '47')
test('20/71/worst', '35') test('20/71/worst', '35')
@ -141,8 +139,6 @@ class TestFormatSelection(unittest.TestCase):
test('example-with-dashes', 'example-with-dashes') test('example-with-dashes', 'example-with-dashes')
test('all', '2', '47', '45', 'example-with-dashes', '35') test('all', '2', '47', '45', 'example-with-dashes', '35')
test('mergeall', '2+47+45+example-with-dashes+35', multi=True) test('mergeall', '2+47+45+example-with-dashes+35', multi=True)
# See: https://github.com/yt-dlp/yt-dlp/pulls/8797
test('7_a/worst', '35')
def test_format_selection_audio(self): def test_format_selection_audio(self):
formats = [ formats = [
@ -184,7 +180,7 @@ class TestFormatSelection(unittest.TestCase):
] ]
info_dict = _make_result(formats) info_dict = _make_result(formats)
ydl = YDL({'format': 'best', 'format_sort': ['abr', 'ext']}) ydl = YDL({'format': 'best'})
ydl.sort_formats(info_dict) ydl.sort_formats(info_dict)
ydl.process_ie_result(copy.deepcopy(info_dict)) ydl.process_ie_result(copy.deepcopy(info_dict))
downloaded = ydl.downloaded_info_dicts[0] downloaded = ydl.downloaded_info_dicts[0]
@ -196,7 +192,7 @@ class TestFormatSelection(unittest.TestCase):
downloaded = ydl.downloaded_info_dicts[0] downloaded = ydl.downloaded_info_dicts[0]
self.assertEqual(downloaded['format_id'], 'mp3-64') self.assertEqual(downloaded['format_id'], 'mp3-64')
ydl = YDL({'prefer_free_formats': True, 'format_sort': ['abr', 'ext']}) ydl = YDL({'prefer_free_formats': True})
ydl.sort_formats(info_dict) ydl.sort_formats(info_dict)
ydl.process_ie_result(copy.deepcopy(info_dict)) ydl.process_ie_result(copy.deepcopy(info_dict))
downloaded = ydl.downloaded_info_dicts[0] downloaded = ydl.downloaded_info_dicts[0]
@ -516,8 +512,10 @@ class TestFormatSelection(unittest.TestCase):
self.assertEqual(downloaded_ids, ['D', 'C', 'B']) self.assertEqual(downloaded_ids, ['D', 'C', 'B'])
ydl = YDL({'format': 'best[height<40]'}) ydl = YDL({'format': 'best[height<40]'})
with contextlib.suppress(ExtractorError): try:
ydl.process_ie_result(info_dict) ydl.process_ie_result(info_dict)
except ExtractorError:
pass
self.assertEqual(ydl.downloaded_info_dicts, []) self.assertEqual(ydl.downloaded_info_dicts, [])
def test_default_format_spec(self): def test_default_format_spec(self):
@ -632,6 +630,7 @@ class TestYoutubeDL(unittest.TestCase):
self.assertEqual(test_dict['playlist'], 'funny videos') self.assertEqual(test_dict['playlist'], 'funny videos')
outtmpl_info = { outtmpl_info = {
'id': '1234',
'id': '1234', 'id': '1234',
'ext': 'mp4', 'ext': 'mp4',
'width': None, 'width': None,
@ -651,8 +650,8 @@ class TestYoutubeDL(unittest.TestCase):
'formats': [ 'formats': [
{'id': 'id 1', 'height': 1080, 'width': 1920}, {'id': 'id 1', 'height': 1080, 'width': 1920},
{'id': 'id 2', 'height': 720}, {'id': 'id 2', 'height': 720},
{'id': 'id 3'}, {'id': 'id 3'}
], ]
} }
def test_prepare_outtmpl_and_filename(self): def test_prepare_outtmpl_and_filename(self):
@ -685,8 +684,7 @@ class TestYoutubeDL(unittest.TestCase):
test('%(id)s.%(ext)s', '1234.mp4') test('%(id)s.%(ext)s', '1234.mp4')
test('%(duration_string)s', ('27:46:40', '27-46-40')) test('%(duration_string)s', ('27:46:40', '27-46-40'))
test('%(resolution)s', '1080p') test('%(resolution)s', '1080p')
test('%(playlist_index|)s', '001') test('%(playlist_index)s', '001')
test('%(playlist_index&{}!)s', '1!')
test('%(playlist_autonumber)s', '02') test('%(playlist_autonumber)s', '02')
test('%(autonumber)s', '00001') test('%(autonumber)s', '00001')
test('%(autonumber+2)03d', '005', autonumber_start=3) test('%(autonumber+2)03d', '005', autonumber_start=3)
@ -729,7 +727,7 @@ class TestYoutubeDL(unittest.TestCase):
self.assertEqual(got_dict.get(info_field), expected, info_field) self.assertEqual(got_dict.get(info_field), expected, info_field)
return True return True
test('%()j', (expect_same_infodict, None)) test('%()j', (expect_same_infodict, str))
# NA placeholder # NA placeholder
NA_TEST_OUTTMPL = '%(uploader_date)s-%(width)d-%(x|def)s-%(id)s.%(ext)s' NA_TEST_OUTTMPL = '%(uploader_date)s-%(width)d-%(x|def)s-%(id)s.%(ext)s'
@ -772,7 +770,7 @@ class TestYoutubeDL(unittest.TestCase):
test('%(formats)j', (json.dumps(FORMATS), None)) test('%(formats)j', (json.dumps(FORMATS), None))
test('%(formats)#j', ( test('%(formats)#j', (
json.dumps(FORMATS, indent=4), json.dumps(FORMATS, indent=4),
json.dumps(FORMATS, indent=4).replace(':', '').replace('"', '').replace('\n', ' '), json.dumps(FORMATS, indent=4).replace(':', '').replace('"', "").replace('\n', ' ')
)) ))
test('%(title5).3B', 'á') test('%(title5).3B', 'á')
test('%(title5)U', 'áéí 𝐀') test('%(title5)U', 'áéí 𝐀')
@ -785,9 +783,9 @@ class TestYoutubeDL(unittest.TestCase):
test('%(title4)#S', 'foo_bar_test') test('%(title4)#S', 'foo_bar_test')
test('%(title4).10S', ('foo bar ', 'foo bar' + ('#' if compat_os_name == 'nt' else ' '))) test('%(title4).10S', ('foo bar ', 'foo bar' + ('#' if compat_os_name == 'nt' else ' ')))
if compat_os_name == 'nt': if compat_os_name == 'nt':
test('%(title4)q', ('"foo ""bar"" test"', None)) test('%(title4)q', ('"foo \\"bar\\" test"', "foo bar test"))
test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', None)) test('%(formats.:.id)#q', ('"id 1" "id 2" "id 3"', 'id 1 id 2 id 3'))
test('%(formats.0.id)#q', ('"id 1"', None)) test('%(formats.0.id)#q', ('"id 1"', 'id 1'))
else: else:
test('%(title4)q', ('\'foo "bar" test\'', '\'foo bar test\'')) test('%(title4)q', ('\'foo "bar" test\'', '\'foo bar test\''))
test('%(formats.:.id)#q', "'id 1' 'id 2' 'id 3'") test('%(formats.:.id)#q', "'id 1' 'id 2' 'id 3'")
@ -798,7 +796,6 @@ class TestYoutubeDL(unittest.TestCase):
test('%(title|%)s %(title|%%)s', '% %%') test('%(title|%)s %(title|%%)s', '% %%')
test('%(id+1-height+3)05d', '00158') test('%(id+1-height+3)05d', '00158')
test('%(width+100)05d', 'NA') test('%(width+100)05d', 'NA')
test('%(filesize*8)d', '8192')
test('%(formats.0) 15s', ('% 15s' % FORMATS[0], None)) test('%(formats.0) 15s', ('% 15s' % FORMATS[0], None))
test('%(formats.0)r', (repr(FORMATS[0]), None)) test('%(formats.0)r', (repr(FORMATS[0]), None))
test('%(height.0)03d', '001') test('%(height.0)03d', '001')
@ -832,7 +829,6 @@ class TestYoutubeDL(unittest.TestCase):
test('%(id&hi {:>10} {}|)s', 'hi 1234 1234') test('%(id&hi {:>10} {}|)s', 'hi 1234 1234')
test(R'%(id&{0} {}|)s', 'NA') test(R'%(id&{0} {}|)s', 'NA')
test(R'%(id&{0.1}|)s', 'NA') test(R'%(id&{0.1}|)s', 'NA')
test('%(height&{:,d})S', '1,080')
# Laziness # Laziness
def gen(): def gen():
@ -842,8 +838,8 @@ class TestYoutubeDL(unittest.TestCase):
# Empty filename # Empty filename
test('%(foo|)s-%(bar|)s.%(ext)s', '-.mp4') test('%(foo|)s-%(bar|)s.%(ext)s', '-.mp4')
# test('%(foo|)s.%(ext)s', ('.mp4', '_.mp4')) # FIXME: ? # test('%(foo|)s.%(ext)s', ('.mp4', '_.mp4')) # fixme
# test('%(foo|)s', ('', '_')) # FIXME: ? # test('%(foo|)s', ('', '_')) # fixme
# Environment variable expansion for prepare_filename # Environment variable expansion for prepare_filename
os.environ['__yt_dlp_var'] = 'expanded' os.environ['__yt_dlp_var'] = 'expanded'
@ -860,7 +856,7 @@ class TestYoutubeDL(unittest.TestCase):
test('Hello %(title1)s', 'Hello $PATH') test('Hello %(title1)s', 'Hello $PATH')
test('Hello %(title2)s', 'Hello %PATH%') test('Hello %(title2)s', 'Hello %PATH%')
test('%(title3)s', ('foo/bar\\test', 'foobartest')) test('%(title3)s', ('foo/bar\\test', 'foobartest'))
test('folder/%(title3)s', ('folder/foo/bar\\test', f'folder{os.path.sep}foobartest')) test('folder/%(title3)s', ('folder/foo/bar\\test', 'folder%sfoobartest' % os.path.sep))
def test_format_note(self): def test_format_note(self):
ydl = YoutubeDL() ydl = YoutubeDL()
@ -882,22 +878,22 @@ class TestYoutubeDL(unittest.TestCase):
f.write('EXAMPLE') f.write('EXAMPLE')
return [info['filepath']], info return [info['filepath']], info
def run_pp(params, pp): def run_pp(params, PP):
with open(filename, 'w') as f: with open(filename, 'w') as f:
f.write('EXAMPLE') f.write('EXAMPLE')
ydl = YoutubeDL(params) ydl = YoutubeDL(params)
ydl.add_post_processor(pp()) ydl.add_post_processor(PP())
ydl.post_process(filename, {'filepath': filename}) ydl.post_process(filename, {'filepath': filename})
run_pp({'keepvideo': True}, SimplePP) run_pp({'keepvideo': True}, SimplePP)
self.assertTrue(os.path.exists(filename), f'{filename} doesn\'t exist') self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
self.assertTrue(os.path.exists(audiofile), f'{audiofile} doesn\'t exist') self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
os.unlink(filename) os.unlink(filename)
os.unlink(audiofile) os.unlink(audiofile)
run_pp({'keepvideo': False}, SimplePP) run_pp({'keepvideo': False}, SimplePP)
self.assertFalse(os.path.exists(filename), f'{filename} exists') self.assertFalse(os.path.exists(filename), '%s exists' % filename)
self.assertTrue(os.path.exists(audiofile), f'{audiofile} doesn\'t exist') self.assertTrue(os.path.exists(audiofile), '%s doesn\'t exist' % audiofile)
os.unlink(audiofile) os.unlink(audiofile)
class ModifierPP(PostProcessor): class ModifierPP(PostProcessor):
@ -907,7 +903,7 @@ class TestYoutubeDL(unittest.TestCase):
return [], info return [], info
run_pp({'keepvideo': False}, ModifierPP) run_pp({'keepvideo': False}, ModifierPP)
self.assertTrue(os.path.exists(filename), f'{filename} doesn\'t exist') self.assertTrue(os.path.exists(filename), '%s doesn\'t exist' % filename)
os.unlink(filename) os.unlink(filename)
def test_match_filter(self): def test_match_filter(self):
@ -919,7 +915,7 @@ class TestYoutubeDL(unittest.TestCase):
'duration': 30, 'duration': 30,
'filesize': 10 * 1024, 'filesize': 10 * 1024,
'playlist_id': '42', 'playlist_id': '42',
'uploader': '變態妍字幕版 太妍 тест', 'uploader': "變態妍字幕版 太妍 тест",
'creator': "тест ' 123 ' тест--", 'creator': "тест ' 123 ' тест--",
'webpage_url': 'http://example.com/watch?v=shenanigans', 'webpage_url': 'http://example.com/watch?v=shenanigans',
} }
@ -932,7 +928,7 @@ class TestYoutubeDL(unittest.TestCase):
'description': 'foo', 'description': 'foo',
'filesize': 5 * 1024, 'filesize': 5 * 1024,
'playlist_id': '43', 'playlist_id': '43',
'uploader': 'тест 123', 'uploader': "тест 123",
'webpage_url': 'http://example.com/watch?v=SHENANIGANS', 'webpage_url': 'http://example.com/watch?v=SHENANIGANS',
} }
videos = [first, second] videos = [first, second]
@ -940,7 +936,7 @@ class TestYoutubeDL(unittest.TestCase):
def get_videos(filter_=None): def get_videos(filter_=None):
ydl = YDL({'match_filter': filter_, 'simulate': True}) ydl = YDL({'match_filter': filter_, 'simulate': True})
for v in videos: for v in videos:
ydl.process_ie_result(v.copy(), download=True) ydl.process_ie_result(v, download=True)
return [v['id'] for v in ydl.downloaded_info_dicts] return [v['id'] for v in ydl.downloaded_info_dicts]
res = get_videos() res = get_videos()
@ -1179,7 +1175,7 @@ class TestYoutubeDL(unittest.TestCase):
}) })
return { return {
'id': video_id, 'id': video_id,
'title': f'Video {video_id}', 'title': 'Video %s' % video_id,
'formats': formats, 'formats': formats,
} }
@ -1193,8 +1189,8 @@ class TestYoutubeDL(unittest.TestCase):
'_type': 'url_transparent', '_type': 'url_transparent',
'ie_key': VideoIE.ie_key(), 'ie_key': VideoIE.ie_key(),
'id': video_id, 'id': video_id,
'url': f'video:{video_id}', 'url': 'video:%s' % video_id,
'title': f'Video Transparent {video_id}', 'title': 'Video Transparent %s' % video_id,
} }
def _real_extract(self, url): def _real_extract(self, url):
@ -1217,129 +1213,6 @@ class TestYoutubeDL(unittest.TestCase):
self.assertEqual(downloaded['extractor'], 'Video') self.assertEqual(downloaded['extractor'], 'Video')
self.assertEqual(downloaded['extractor_key'], 'Video') self.assertEqual(downloaded['extractor_key'], 'Video')
def test_header_cookies(self):
from http.cookiejar import Cookie
ydl = FakeYDL()
ydl.report_warning = lambda *_, **__: None
def cookie(name, value, version=None, domain='', path='', secure=False, expires=None):
return Cookie(
version or 0, name, value, None, False,
domain, bool(domain), bool(domain), path, bool(path),
secure, expires, False, None, None, rest={})
_test_url = 'https://yt.dlp/test'
def test(encoded_cookies, cookies, *, headers=False, round_trip=None, error_re=None):
def _test():
ydl.cookiejar.clear()
ydl._load_cookies(encoded_cookies, autoscope=headers)
if headers:
ydl._apply_header_cookies(_test_url)
data = {'url': _test_url}
ydl._calc_headers(data)
self.assertCountEqual(
map(vars, ydl.cookiejar), map(vars, cookies),
'Extracted cookiejar.Cookie is not the same')
if not headers:
self.assertEqual(
data.get('cookies'), round_trip or encoded_cookies,
'Cookie is not the same as round trip')
ydl.__dict__['_YoutubeDL__header_cookies'] = []
with self.subTest(msg=encoded_cookies):
if not error_re:
_test()
return
with self.assertRaisesRegex(Exception, error_re):
_test()
test('test=value; Domain=.yt.dlp', [cookie('test', 'value', domain='.yt.dlp')])
test('test=value', [cookie('test', 'value')], error_re=r'Unscoped cookies are not allowed')
test('cookie1=value1; Domain=.yt.dlp; Path=/test; cookie2=value2; Domain=.yt.dlp; Path=/', [
cookie('cookie1', 'value1', domain='.yt.dlp', path='/test'),
cookie('cookie2', 'value2', domain='.yt.dlp', path='/')])
test('test=value; Domain=.yt.dlp; Path=/test; Secure; Expires=9999999999', [
cookie('test', 'value', domain='.yt.dlp', path='/test', secure=True, expires=9999999999)])
test('test="value; "; path=/test; domain=.yt.dlp', [
cookie('test', 'value; ', domain='.yt.dlp', path='/test')],
round_trip='test="value\\073 "; Domain=.yt.dlp; Path=/test')
test('name=; Domain=.yt.dlp', [cookie('name', '', domain='.yt.dlp')],
round_trip='name=""; Domain=.yt.dlp')
test('test=value', [cookie('test', 'value', domain='.yt.dlp')], headers=True)
test('cookie1=value; Domain=.yt.dlp; cookie2=value', [], headers=True, error_re=r'Invalid syntax')
ydl.deprecated_feature = ydl.report_error
test('test=value', [], headers=True, error_re=r'Passing cookies as a header is a potential security risk')
def test_infojson_cookies(self):
TEST_FILE = 'test_infojson_cookies.info.json'
TEST_URL = 'https://example.com/example.mp4'
COOKIES = 'a=b; Domain=.example.com; c=d; Domain=.example.com'
COOKIE_HEADER = {'Cookie': 'a=b; c=d'}
ydl = FakeYDL()
ydl.process_info = lambda x: ydl._write_info_json('test', x, TEST_FILE)
def make_info(info_header_cookies=False, fmts_header_cookies=False, cookies_field=False):
fmt = {'url': TEST_URL}
if fmts_header_cookies:
fmt['http_headers'] = COOKIE_HEADER
if cookies_field:
fmt['cookies'] = COOKIES
return _make_result([fmt], http_headers=COOKIE_HEADER if info_header_cookies else None)
def test(initial_info, note):
result = {}
result['processed'] = ydl.process_ie_result(initial_info)
self.assertTrue(ydl.cookiejar.get_cookies_for_url(TEST_URL),
msg=f'No cookies set in cookiejar after initial process when {note}')
ydl.cookiejar.clear()
with open(TEST_FILE) as infojson:
result['loaded'] = ydl.sanitize_info(json.load(infojson), True)
result['final'] = ydl.process_ie_result(result['loaded'].copy(), download=False)
self.assertTrue(ydl.cookiejar.get_cookies_for_url(TEST_URL),
msg=f'No cookies set in cookiejar after final process when {note}')
ydl.cookiejar.clear()
for key in ('processed', 'loaded', 'final'):
info = result[key]
self.assertIsNone(
traverse_obj(info, ((None, ('formats', 0)), 'http_headers', 'Cookie'), casesense=False, get_all=False),
msg=f'Cookie header not removed in {key} result when {note}')
self.assertEqual(
traverse_obj(info, ((None, ('formats', 0)), 'cookies'), get_all=False), COOKIES,
msg=f'No cookies field found in {key} result when {note}')
test({'url': TEST_URL, 'http_headers': COOKIE_HEADER, 'id': '1', 'title': 'x'}, 'no formats field')
test(make_info(info_header_cookies=True), 'info_dict header cokies')
test(make_info(fmts_header_cookies=True), 'format header cookies')
test(make_info(info_header_cookies=True, fmts_header_cookies=True), 'info_dict and format header cookies')
test(make_info(info_header_cookies=True, fmts_header_cookies=True, cookies_field=True), 'all cookies fields')
test(make_info(cookies_field=True), 'cookies format field')
test({'url': TEST_URL, 'cookies': COOKIES, 'id': '1', 'title': 'x'}, 'info_dict cookies field only')
try_rm(TEST_FILE)
def test_add_headers_cookie(self):
def check_for_cookie_header(result):
return traverse_obj(result, ((None, ('formats', 0)), 'http_headers', 'Cookie'), casesense=False, get_all=False)
ydl = FakeYDL({'http_headers': {'Cookie': 'a=b'}})
ydl._apply_header_cookies(_make_result([])['webpage_url']) # Scope to input webpage URL: .example.com
fmt = {'url': 'https://example.com/video.mp4'}
result = ydl.process_ie_result(_make_result([fmt]), download=False)
self.assertIsNone(check_for_cookie_header(result), msg='http_headers cookies in result info_dict')
self.assertEqual(result.get('cookies'), 'a=b; Domain=.example.com', msg='No cookies were set in cookies field')
self.assertIn('a=b', ydl.cookiejar.get_cookie_header(fmt['url']), msg='No cookies were set in cookiejar')
fmt = {'url': 'https://wrong.com/video.mp4'}
result = ydl.process_ie_result(_make_result([fmt]), download=False)
self.assertIsNone(check_for_cookie_header(result), msg='http_headers cookies for wrong domain')
self.assertFalse(result.get('cookies'), msg='Cookies set in cookies field for wrong domain')
self.assertFalse(ydl.cookiejar.get_cookie_header(fmt['url']), msg='Cookies set in cookiejar for wrong domain')
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()

@ -17,10 +17,10 @@ from yt_dlp.cookies import YoutubeDLCookieJar
class TestYoutubeDLCookieJar(unittest.TestCase): class TestYoutubeDLCookieJar(unittest.TestCase):
def test_keep_session_cookies(self): def test_keep_session_cookies(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt') cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
cookiejar.load() cookiejar.load(ignore_discard=True, ignore_expires=True)
tf = tempfile.NamedTemporaryFile(delete=False) tf = tempfile.NamedTemporaryFile(delete=False)
try: try:
cookiejar.save(filename=tf.name) cookiejar.save(filename=tf.name, ignore_discard=True, ignore_expires=True)
temp = tf.read().decode() temp = tf.read().decode()
self.assertTrue(re.search( self.assertTrue(re.search(
r'www\.foobar\.foobar\s+FALSE\s+/\s+TRUE\s+0\s+YoutubeDLExpiresEmpty\s+YoutubeDLExpiresEmptyValue', temp)) r'www\.foobar\.foobar\s+FALSE\s+/\s+TRUE\s+0\s+YoutubeDLExpiresEmpty\s+YoutubeDLExpiresEmptyValue', temp))
@ -32,7 +32,7 @@ class TestYoutubeDLCookieJar(unittest.TestCase):
def test_strip_httponly_prefix(self): def test_strip_httponly_prefix(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt') cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
cookiejar.load() cookiejar.load(ignore_discard=True, ignore_expires=True)
def assert_cookie_has_value(key): def assert_cookie_has_value(key):
self.assertEqual(cookiejar._cookies['www.foobar.foobar']['/'][key].value, key + '_VALUE') self.assertEqual(cookiejar._cookies['www.foobar.foobar']['/'][key].value, key + '_VALUE')
@ -42,25 +42,17 @@ class TestYoutubeDLCookieJar(unittest.TestCase):
def test_malformed_cookies(self): def test_malformed_cookies(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/malformed_cookies.txt') cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/malformed_cookies.txt')
cookiejar.load() cookiejar.load(ignore_discard=True, ignore_expires=True)
# Cookies should be empty since all malformed cookie file entries # Cookies should be empty since all malformed cookie file entries
# will be ignored # will be ignored
self.assertFalse(cookiejar._cookies) self.assertFalse(cookiejar._cookies)
def test_get_cookie_header(self): def test_get_cookie_header(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt') cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/httponly_cookies.txt')
cookiejar.load() cookiejar.load(ignore_discard=True, ignore_expires=True)
header = cookiejar.get_cookie_header('https://www.foobar.foobar') header = cookiejar.get_cookie_header('https://www.foobar.foobar')
self.assertIn('HTTPONLY_COOKIE', header) self.assertIn('HTTPONLY_COOKIE', header)
def test_get_cookies_for_url(self):
cookiejar = YoutubeDLCookieJar('./test/testdata/cookies/session_cookies.txt')
cookiejar.load()
cookies = cookiejar.get_cookies_for_url('https://www.foobar.foobar/')
self.assertEqual(len(cookies), 2)
cookies = cookiejar.get_cookies_for_url('https://foobar.foobar/')
self.assertFalse(cookies)
if __name__ == '__main__': if __name__ == '__main__':
unittest.main() unittest.main()
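The load()/save() changes above boil down to `ignore_discard`/`ignore_expires` becoming the defaults on master, where the 2023.06.21 side had to pass them explicitly. With the plain stdlib `MozillaCookieJar`, the effect of those flags on session cookies is easy to demonstrate (a minimal sketch, assuming only the standard library):

# Minimal sketch: a session cookie (discard=True, no expiry) is silently
# skipped by MozillaCookieJar.save()/.load() unless ignore_discard=True.
import http.cookiejar
import os
import tempfile

jar = http.cookiejar.MozillaCookieJar()
jar.set_cookie(http.cookiejar.Cookie(
    0, 'session', 'value', None, False, 'www.example.com', True, False,
    '/', True, False, None, True, None, None, {}))

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'cookies.txt')
    jar.save(path, ignore_discard=True)  # without the flag, nothing is written

    loaded = http.cookiejar.MozillaCookieJar()
    loaded.load(path)                    # session cookie skipped on load
    assert len(loaded) == 0
    loaded.load(path, ignore_discard=True)
    assert len(loaded) == 1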

@ -87,7 +87,7 @@ class TestAES(unittest.TestCase):
password = intlist_to_bytes(self.key).decode() password = intlist_to_bytes(self.key).decode()
encrypted = base64.b64encode( encrypted = base64.b64encode(
intlist_to_bytes(self.iv[:8]) intlist_to_bytes(self.iv[:8])
+ b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae', + b'\x17\x15\x93\xab\x8d\x80V\xcdV\xe0\t\xcdo\xc2\xa5\xd8ksM\r\xe27N\xae'
).decode() ).decode()
decrypted = (aes_decrypt_text(encrypted, password, 16)) decrypted = (aes_decrypt_text(encrypted, password, 16))
self.assertEqual(decrypted, self.secret_msg) self.assertEqual(decrypted, self.secret_msg)
@ -95,7 +95,7 @@ class TestAES(unittest.TestCase):
password = intlist_to_bytes(self.key).decode() password = intlist_to_bytes(self.key).decode()
encrypted = base64.b64encode( encrypted = base64.b64encode(
intlist_to_bytes(self.iv[:8]) intlist_to_bytes(self.iv[:8])
+ b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83', + b'\x0b\xe6\xa4\xd9z\x0e\xb8\xb9\xd0\xd4i_\x85\x1d\x99\x98_\xe5\x80\xe7.\xbf\xa5\x83'
).decode() ).decode()
decrypted = (aes_decrypt_text(encrypted, password, 32)) decrypted = (aes_decrypt_text(encrypted, password, 32))
self.assertEqual(decrypted, self.secret_msg) self.assertEqual(decrypted, self.secret_msg)
@ -132,16 +132,16 @@ class TestAES(unittest.TestCase):
block = [0x21, 0xA0, 0x43, 0xFF] block = [0x21, 0xA0, 0x43, 0xFF]
self.assertEqual(pad_block(block, 'pkcs7'), self.assertEqual(pad_block(block, 'pkcs7'),
[*block, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C]) block + [0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C, 0x0C])
self.assertEqual(pad_block(block, 'iso7816'), self.assertEqual(pad_block(block, 'iso7816'),
[*block, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]) block + [0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
self.assertEqual(pad_block(block, 'whitespace'), self.assertEqual(pad_block(block, 'whitespace'),
[*block, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20]) block + [0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20])
self.assertEqual(pad_block(block, 'zero'), self.assertEqual(pad_block(block, 'zero'),
[*block, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]) block + [0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
block = list(range(16)) block = list(range(16))
for mode in ('pkcs7', 'iso7816', 'whitespace', 'zero'): for mode in ('pkcs7', 'iso7816', 'whitespace', 'zero'):

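For reference, the four padding modes asserted above reduce to one formula: fill the block up to 16 bytes with a mode-specific filler. A standalone sketch that reproduces exactly the expected values in the hunk (illustrative; the real `pad_block` lives in `yt_dlp.aes`):

# Standalone sketch of 16-byte AES block padding, matching the
# assertions above (illustrative; not the library implementation).
BLOCK_SIZE_BYTES = 16

def pad_block(block, padding_mode):
    missing = BLOCK_SIZE_BYTES - len(block) % BLOCK_SIZE_BYTES
    filler = {
        'pkcs7': [missing] * missing,                # N bytes, each of value N
        'iso7816': [0x80] + [0x00] * (missing - 1),  # 0x80, then zero-fill
        'whitespace': [0x20] * missing,              # ASCII spaces
        'zero': [0x00] * missing,                    # zero-fill
    }[padding_mode]
    return block + filler

assert pad_block([0x21, 0xA0, 0x43, 0xFF], 'pkcs7')[4:] == [0x0C] * 12
assert pad_block([0x21, 0xA0, 0x43, 0xFF], 'iso7816')[4:6] == [0x80, 0x00]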
@ -9,30 +9,30 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import struct import struct
import urllib.parse
from yt_dlp import compat from yt_dlp import compat
from yt_dlp.compat import urllib # isort: split
from yt_dlp.compat import ( from yt_dlp.compat import (
compat_etree_fromstring, compat_etree_fromstring,
compat_expanduser, compat_expanduser,
compat_urllib_parse_unquote, # noqa: TID251 compat_urllib_parse_unquote,
compat_urllib_parse_urlencode, # noqa: TID251 compat_urllib_parse_urlencode,
) )
from yt_dlp.compat.urllib.request import getproxies
class TestCompat(unittest.TestCase): class TestCompat(unittest.TestCase):
def test_compat_passthrough(self): def test_compat_passthrough(self):
with self.assertWarns(DeprecationWarning): with self.assertWarns(DeprecationWarning):
_ = compat.compat_basestring compat.compat_basestring
with self.assertWarns(DeprecationWarning): with self.assertWarns(DeprecationWarning):
_ = compat.WINDOWS_VT_MODE compat.WINDOWS_VT_MODE
self.assertEqual(urllib.request.getproxies, getproxies) # TODO: Test submodule
# compat.asyncio.events # Must not raise error
with self.assertWarns(DeprecationWarning): with self.assertWarns(DeprecationWarning):
_ = compat.compat_pycrypto_AES # Must not raise error compat.compat_pycrypto_AES # Must not raise error
def test_compat_expanduser(self): def test_compat_expanduser(self):
old_home = os.environ.get('HOME') old_home = os.environ.get('HOME')

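The passthrough assertions above only require that touching a deprecated `compat_*` attribute emits a `DeprecationWarning` without raising. One way to get that behaviour with the stdlib alone is a PEP 562 module-level `__getattr__` (a hedged sketch of the mechanism being tested, not `yt_dlp.compat`'s actual implementation):

# Hedged sketch: module attribute access that warns but still resolves,
# via PEP 562 module __getattr__ (placed at top level of a module such
# as compat.py; illustrative only).
import warnings

_DEPRECATED = {'compat_basestring': str}

def __getattr__(name):
    if name in _DEPRECATED:
        warnings.warn(f'{name} is deprecated', DeprecationWarning, stacklevel=2)
        return _DEPRECATED[name]
    raise AttributeError(name)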
@ -71,7 +71,7 @@ def _generate_expected_groups():
Path('/etc/yt-dlp.conf'), Path('/etc/yt-dlp.conf'),
Path('/etc/yt-dlp/config'), Path('/etc/yt-dlp/config'),
Path('/etc/yt-dlp/config.txt'), Path('/etc/yt-dlp/config.txt'),
], ]
} }

@ -1,5 +1,5 @@
import datetime as dt
import unittest import unittest
from datetime import datetime, timezone
from yt_dlp import cookies from yt_dlp import cookies
from yt_dlp.cookies import ( from yt_dlp.cookies import (
@ -67,7 +67,6 @@ class TestCookies(unittest.TestCase):
({'XDG_CURRENT_DESKTOP': 'GNOME'}, _LinuxDesktopEnvironment.GNOME), ({'XDG_CURRENT_DESKTOP': 'GNOME'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'GNOME:GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME), ({'XDG_CURRENT_DESKTOP': 'GNOME:GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'GNOME : GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME), ({'XDG_CURRENT_DESKTOP': 'GNOME : GNOME-Classic'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'Unity', 'DESKTOP_SESSION': 'gnome-fallback'}, _LinuxDesktopEnvironment.GNOME), ({'XDG_CURRENT_DESKTOP': 'Unity', 'DESKTOP_SESSION': 'gnome-fallback'}, _LinuxDesktopEnvironment.GNOME),
({'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '5'}, _LinuxDesktopEnvironment.KDE5), ({'XDG_CURRENT_DESKTOP': 'KDE', 'KDE_SESSION_VERSION': '5'}, _LinuxDesktopEnvironment.KDE5),
@ -107,7 +106,7 @@ class TestCookies(unittest.TestCase):
def test_chrome_cookie_decryptor_windows_v10(self): def test_chrome_cookie_decryptor_windows_v10(self):
with MonkeyPatch(cookies, { with MonkeyPatch(cookies, {
'_get_windows_v10_key': lambda *args, **kwargs: b'Y\xef\xad\xad\xeerp\xf0Y\xe6\x9b\x12\xc2<z\x16]\n\xbb\xb8\xcb\xd7\x9bA\xc3\x14e\x99{\xd6\xf4&', '_get_windows_v10_key': lambda *args, **kwargs: b'Y\xef\xad\xad\xeerp\xf0Y\xe6\x9b\x12\xc2<z\x16]\n\xbb\xb8\xcb\xd7\x9bA\xc3\x14e\x99{\xd6\xf4&'
}): }):
encrypted_value = b'v10T\xb8\xf3\xb8\x01\xa7TtcV\xfc\x88\xb8\xb8\xef\x05\xb5\xfd\x18\xc90\x009\xab\xb1\x893\x85)\x87\xe1\xa9-\xa3\xad=' encrypted_value = b'v10T\xb8\xf3\xb8\x01\xa7TtcV\xfc\x88\xb8\xb8\xef\x05\xb5\xfd\x18\xc90\x009\xab\xb1\x893\x85)\x87\xe1\xa9-\xa3\xad='
value = '32101439' value = '32101439'
@ -122,24 +121,24 @@ class TestCookies(unittest.TestCase):
self.assertEqual(decryptor.decrypt(encrypted_value), value) self.assertEqual(decryptor.decrypt(encrypted_value), value)
def test_safari_cookie_parsing(self): def test_safari_cookie_parsing(self):
cookies = ( cookies = \
b'cook\x00\x00\x00\x01\x00\x00\x00i\x00\x00\x01\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00Y' b'cook\x00\x00\x00\x01\x00\x00\x00i\x00\x00\x01\x00\x01\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00Y' \
b'\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x008\x00\x00\x00B\x00\x00\x00F\x00\x00\x00H' b'\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x008\x00\x00\x00B\x00\x00\x00F\x00\x00\x00H' \
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x03\xa5>\xc3A\x00\x00\x80\xc3\x07:\xc3A' b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x03\xa5>\xc3A\x00\x00\x80\xc3\x07:\xc3A' \
b'localhost\x00foo\x00/\x00test%20%3Bcookie\x00\x00\x00\x054\x07\x17 \x05\x00\x00\x00Kbplist00\xd1\x01' b'localhost\x00foo\x00/\x00test%20%3Bcookie\x00\x00\x00\x054\x07\x17 \x05\x00\x00\x00Kbplist00\xd1\x01' \
b'\x02_\x10\x18NSHTTPCookieAcceptPolicy\x10\x02\x08\x0b&\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00' b'\x02_\x10\x18NSHTTPCookieAcceptPolicy\x10\x02\x08\x0b&\x00\x00\x00\x00\x00\x00\x01\x01\x00\x00\x00' \
b'\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00(') b'\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00('
jar = parse_safari_cookies(cookies) jar = parse_safari_cookies(cookies)
self.assertEqual(len(jar), 1) self.assertEqual(len(jar), 1)
cookie = next(iter(jar)) cookie = list(jar)[0]
self.assertEqual(cookie.domain, 'localhost') self.assertEqual(cookie.domain, 'localhost')
self.assertEqual(cookie.port, None) self.assertEqual(cookie.port, None)
self.assertEqual(cookie.path, '/') self.assertEqual(cookie.path, '/')
self.assertEqual(cookie.name, 'foo') self.assertEqual(cookie.name, 'foo')
self.assertEqual(cookie.value, 'test%20%3Bcookie') self.assertEqual(cookie.value, 'test%20%3Bcookie')
self.assertFalse(cookie.secure) self.assertFalse(cookie.secure)
expected_expiration = dt.datetime(2021, 6, 18, 21, 39, 19, tzinfo=dt.timezone.utc) expected_expiration = datetime(2021, 6, 18, 21, 39, 19, tzinfo=timezone.utc)
self.assertEqual(cookie.expires, int(expected_expiration.timestamp())) self.assertEqual(cookie.expires, int(expected_expiration.timestamp()))
def test_pbkdf2_sha1(self): def test_pbkdf2_sha1(self):
@ -165,7 +164,7 @@ class TestLenientSimpleCookie(unittest.TestCase):
attributes = { attributes = {
key: value key: value
for key, value in dict(morsel).items() for key, value in dict(morsel).items()
if value != '' if value != ""
} }
self.assertEqual(attributes, expected_attributes, message) self.assertEqual(attributes, expected_attributes, message)
@ -175,133 +174,133 @@ class TestLenientSimpleCookie(unittest.TestCase):
self._run_tests( self._run_tests(
# Copied from https://github.com/python/cpython/blob/v3.10.7/Lib/test/test_http_cookies.py # Copied from https://github.com/python/cpython/blob/v3.10.7/Lib/test/test_http_cookies.py
( (
'Test basic cookie', "Test basic cookie",
'chips=ahoy; vienna=finger', "chips=ahoy; vienna=finger",
{'chips': 'ahoy', 'vienna': 'finger'}, {"chips": "ahoy", "vienna": "finger"},
), ),
( (
'Test quoted cookie', "Test quoted cookie",
'keebler="E=mc2; L=\\"Loves\\"; fudge=\\012;"', 'keebler="E=mc2; L=\\"Loves\\"; fudge=\\012;"',
{'keebler': 'E=mc2; L="Loves"; fudge=\012;'}, {"keebler": 'E=mc2; L="Loves"; fudge=\012;'},
), ),
( (
"Allow '=' in an unquoted value", "Allow '=' in an unquoted value",
'keebler=E=mc2', "keebler=E=mc2",
{'keebler': 'E=mc2'}, {"keebler": "E=mc2"},
), ),
( (
"Allow cookies with ':' in their name", "Allow cookies with ':' in their name",
'key:term=value:term', "key:term=value:term",
{'key:term': 'value:term'}, {"key:term": "value:term"},
), ),
( (
"Allow '[' and ']' in cookie values", "Allow '[' and ']' in cookie values",
'a=b; c=[; d=r; f=h', "a=b; c=[; d=r; f=h",
{'a': 'b', 'c': '[', 'd': 'r', 'f': 'h'}, {"a": "b", "c": "[", "d": "r", "f": "h"},
), ),
( (
'Test basic cookie attributes', "Test basic cookie attributes",
'Customer="WILE_E_COYOTE"; Version=1; Path=/acme', 'Customer="WILE_E_COYOTE"; Version=1; Path=/acme',
{'Customer': ('WILE_E_COYOTE', {'version': '1', 'path': '/acme'})}, {"Customer": ("WILE_E_COYOTE", {"version": "1", "path": "/acme"})},
), ),
( (
'Test flag only cookie attributes', "Test flag only cookie attributes",
'Customer="WILE_E_COYOTE"; HttpOnly; Secure', 'Customer="WILE_E_COYOTE"; HttpOnly; Secure',
{'Customer': ('WILE_E_COYOTE', {'httponly': True, 'secure': True})}, {"Customer": ("WILE_E_COYOTE", {"httponly": True, "secure": True})},
), ),
( (
'Test flag only attribute with values', "Test flag only attribute with values",
'eggs=scrambled; httponly=foo; secure=bar; Path=/bacon', "eggs=scrambled; httponly=foo; secure=bar; Path=/bacon",
{'eggs': ('scrambled', {'httponly': 'foo', 'secure': 'bar', 'path': '/bacon'})}, {"eggs": ("scrambled", {"httponly": "foo", "secure": "bar", "path": "/bacon"})},
), ),
( (
"Test special case for 'expires' attribute, 4 digit year", "Test special case for 'expires' attribute, 4 digit year",
'Customer="W"; expires=Wed, 01 Jan 2010 00:00:00 GMT', 'Customer="W"; expires=Wed, 01 Jan 2010 00:00:00 GMT',
{'Customer': ('W', {'expires': 'Wed, 01 Jan 2010 00:00:00 GMT'})}, {"Customer": ("W", {"expires": "Wed, 01 Jan 2010 00:00:00 GMT"})},
), ),
( (
"Test special case for 'expires' attribute, 2 digit year", "Test special case for 'expires' attribute, 2 digit year",
'Customer="W"; expires=Wed, 01 Jan 98 00:00:00 GMT', 'Customer="W"; expires=Wed, 01 Jan 98 00:00:00 GMT',
{'Customer': ('W', {'expires': 'Wed, 01 Jan 98 00:00:00 GMT'})}, {"Customer": ("W", {"expires": "Wed, 01 Jan 98 00:00:00 GMT"})},
), ),
( (
'Test extra spaces in keys and values', "Test extra spaces in keys and values",
'eggs = scrambled ; secure ; path = bar ; foo=foo ', "eggs = scrambled ; secure ; path = bar ; foo=foo ",
{'eggs': ('scrambled', {'secure': True, 'path': 'bar'}), 'foo': 'foo'}, {"eggs": ("scrambled", {"secure": True, "path": "bar"}), "foo": "foo"},
), ),
( (
'Test quoted attributes', "Test quoted attributes",
'Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"', 'Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"',
{'Customer': ('WILE_E_COYOTE', {'version': '1', 'path': '/acme'})}, {"Customer": ("WILE_E_COYOTE", {"version": "1", "path": "/acme"})}
), ),
# Our own tests that CPython passes # Our own tests that CPython passes
( (
"Allow ';' in quoted value", "Allow ';' in quoted value",
'chips="a;hoy"; vienna=finger', 'chips="a;hoy"; vienna=finger',
{'chips': 'a;hoy', 'vienna': 'finger'}, {"chips": "a;hoy", "vienna": "finger"},
), ),
( (
'Keep only the last set value', "Keep only the last set value",
'a=c; a=b', "a=c; a=b",
{'a': 'b'}, {"a": "b"},
), ),
) )
def test_lenient_parsing(self): def test_lenient_parsing(self):
self._run_tests( self._run_tests(
( (
'Ignore and try to skip invalid cookies', "Ignore and try to skip invalid cookies",
'chips={"ahoy;": 1}; vienna="finger;"', 'chips={"ahoy;": 1}; vienna="finger;"',
{'vienna': 'finger;'}, {"vienna": "finger;"},
), ),
( (
'Ignore cookies without a name', "Ignore cookies without a name",
'a=b; unnamed; c=d', "a=b; unnamed; c=d",
{'a': 'b', 'c': 'd'}, {"a": "b", "c": "d"},
), ),
( (
"Ignore '\"' cookie without name", "Ignore '\"' cookie without name",
'a=b; "; c=d', 'a=b; "; c=d',
{'a': 'b', 'c': 'd'}, {"a": "b", "c": "d"},
), ),
( (
'Skip all space separated values', "Skip all space separated values",
'x a=b c=d x; e=f', "x a=b c=d x; e=f",
{'a': 'b', 'c': 'd', 'e': 'f'}, {"a": "b", "c": "d", "e": "f"},
), ),
( (
'Skip all space separated values', "Skip all space separated values",
'x a=b; data={"complex": "json", "with": "key=value"}; x c=d x', 'x a=b; data={"complex": "json", "with": "key=value"}; x c=d x',
{'a': 'b', 'c': 'd'}, {"a": "b", "c": "d"},
), ),
( (
'Expect quote mending', "Expect quote mending",
'a=b; invalid="; c=d', 'a=b; invalid="; c=d',
{'a': 'b', 'c': 'd'}, {"a": "b", "c": "d"},
), ),
( (
'Reset morsel after invalid to not capture attributes', "Reset morsel after invalid to not capture attributes",
'a=b; invalid; Version=1; c=d', "a=b; invalid; Version=1; c=d",
{'a': 'b', 'c': 'd'}, {"a": "b", "c": "d"},
), ),
( (
'Reset morsel after invalid to not capture attributes', "Reset morsel after invalid to not capture attributes",
'a=b; $invalid; $Version=1; c=d', "a=b; $invalid; $Version=1; c=d",
{'a': 'b', 'c': 'd'}, {"a": "b", "c": "d"},
), ),
( (
'Continue after non-flag attribute without value', "Continue after non-flag attribute without value",
'a=b; path; Version=1; c=d', "a=b; path; Version=1; c=d",
{'a': 'b', 'c': 'd'}, {"a": "b", "c": "d"},
), ),
( (
'Allow cookie attributes with `$` prefix', "Allow cookie attributes with `$` prefix",
'Customer="WILE_E_COYOTE"; $Version=1; $Secure; $Path=/acme', 'Customer="WILE_E_COYOTE"; $Version=1; $Secure; $Path=/acme',
{'Customer': ('WILE_E_COYOTE', {'version': '1', 'secure': True, 'path': '/acme'})}, {"Customer": ("WILE_E_COYOTE", {"version": "1", "secure": True, "path": "/acme"})},
), ),
( (
'Invalid Morsel keys should not result in an error', "Invalid Morsel keys should not result in an error",
'Key=Value; [Invalid]=Value; Another=Value', "Key=Value; [Invalid]=Value; Another=Value",
{'Key': 'Value', 'Another': 'Value'}, {"Key": "Value", "Another": "Value"},
), ),
) )
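Each entry above is a (message, header string, expected cookies) tuple, where an expected value is either a plain string or a (value, attributes) pair. For the cases CPython also passes, the stdlib parser already yields the expected morsels:

# Baseline stdlib behaviour for the first "copied from CPython" case
# above; the lenient parser's value is in the invalid inputs that
# http.cookies.SimpleCookie cannot handle.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar.load('chips=ahoy; vienna=finger')
assert jar['chips'].value == 'ahoy'
assert jar['vienna'].value == 'finger'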

@ -10,7 +10,10 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import collections import collections
import hashlib import hashlib
import http.client
import json import json
import socket
import urllib.error
from test.helper import ( from test.helper import (
assertGreaterEqual, assertGreaterEqual,
@ -20,17 +23,16 @@ from test.helper import (
gettestcases, gettestcases,
getwebpagetestcases, getwebpagetestcases,
is_download_test, is_download_test,
report_warning,
try_rm, try_rm,
) )
import yt_dlp.YoutubeDL # isort: split import yt_dlp.YoutubeDL # isort: split
from yt_dlp.extractor import get_info_extractor from yt_dlp.extractor import get_info_extractor
from yt_dlp.networking.exceptions import HTTPError, TransportError
from yt_dlp.utils import ( from yt_dlp.utils import (
DownloadError, DownloadError,
ExtractorError, ExtractorError,
UnavailableVideoError, UnavailableVideoError,
YoutubeDLError,
format_bytes, format_bytes,
join_nonempty, join_nonempty,
) )
@ -93,15 +95,13 @@ def generator(test_case, tname):
'playlist', [] if is_playlist else [test_case]) 'playlist', [] if is_playlist else [test_case])
def print_skipping(reason): def print_skipping(reason):
print('Skipping {}: {}'.format(test_case['name'], reason)) print('Skipping %s: %s' % (test_case['name'], reason))
self.skipTest(reason) self.skipTest(reason)
if not ie.working(): if not ie.working():
print_skipping('IE marked as not _WORKING') print_skipping('IE marked as not _WORKING')
for tc in test_cases: for tc in test_cases:
if tc.get('expected_exception'):
continue
info_dict = tc.get('info_dict', {}) info_dict = tc.get('info_dict', {})
params = tc.get('params', {}) params = tc.get('params', {})
if not info_dict.get('id'): if not info_dict.get('id'):
@ -116,7 +116,7 @@ def generator(test_case, tname):
for other_ie in other_ies: for other_ie in other_ies:
if not other_ie.working(): if not other_ie.working():
print_skipping(f'test depends on {other_ie.ie_key()}IE, marked as not WORKING') print_skipping('test depends on %sIE, marked as not WORKING' % other_ie.ie_key())
params = get_params(test_case.get('params', {})) params = get_params(test_case.get('params', {}))
params['outtmpl'] = tname + '_' + params['outtmpl'] params['outtmpl'] = tname + '_' + params['outtmpl']
@ -141,14 +141,6 @@ def generator(test_case, tname):
res_dict = None res_dict = None
def match_exception(err):
expected_exception = test_case.get('expected_exception')
if not expected_exception:
return False
if err.__class__.__name__ == expected_exception:
return True
return any(exc.__class__.__name__ == expected_exception for exc in err.exc_info)
def try_rm_tcs_files(tcs=None): def try_rm_tcs_files(tcs=None):
if tcs is None: if tcs is None:
tcs = test_cases tcs = test_cases
@ -170,22 +162,18 @@ def generator(test_case, tname):
force_generic_extractor=params.get('force_generic_extractor', False)) force_generic_extractor=params.get('force_generic_extractor', False))
except (DownloadError, ExtractorError) as err: except (DownloadError, ExtractorError) as err:
# Check if the exception is not a network related one # Check if the exception is not a network related one
if not isinstance(err.exc_info[1], (TransportError, UnavailableVideoError)) or (isinstance(err.exc_info[1], HTTPError) and err.exc_info[1].status == 503): if (err.exc_info[0] not in (urllib.error.URLError, socket.timeout, UnavailableVideoError, http.client.BadStatusLine)
if match_exception(err): or (err.exc_info[0] == urllib.error.HTTPError and err.exc_info[1].code == 503)):
return
err.msg = f'{getattr(err, "msg", err)} ({tname})' err.msg = f'{getattr(err, "msg", err)} ({tname})'
raise raise
if try_num == RETRIES: if try_num == RETRIES:
raise report_warning('%s failed due to network errors, skipping...' % tname)
return
print(f'Retrying: {try_num} failed tries\n\n##########\n\n') print(f'Retrying: {try_num} failed tries\n\n##########\n\n')
try_num += 1 try_num += 1
except YoutubeDLError as err:
if match_exception(err):
return
raise
else: else:
break break
@ -239,8 +227,9 @@ def generator(test_case, tname):
got_fsize = os.path.getsize(tc_filename) got_fsize = os.path.getsize(tc_filename)
assertGreaterEqual( assertGreaterEqual(
self, got_fsize, expected_minsize, self, got_fsize, expected_minsize,
f'Expected {tc_filename} to be at least {format_bytes(expected_minsize)}, ' 'Expected %s to be at least %s, but it\'s only %s ' %
f'but it\'s only {format_bytes(got_fsize)} ') (tc_filename, format_bytes(expected_minsize),
format_bytes(got_fsize)))
if 'md5' in tc: if 'md5' in tc:
md5_for_file = _file_md5(tc_filename) md5_for_file = _file_md5(tc_filename)
self.assertEqual(tc['md5'], md5_for_file) self.assertEqual(tc['md5'], md5_for_file)
@ -249,7 +238,7 @@ def generator(test_case, tname):
info_json_fn = os.path.splitext(tc_filename)[0] + '.info.json' info_json_fn = os.path.splitext(tc_filename)[0] + '.info.json'
self.assertTrue( self.assertTrue(
os.path.exists(info_json_fn), os.path.exists(info_json_fn),
f'Missing info file {info_json_fn}') 'Missing info file %s' % info_json_fn)
with open(info_json_fn, encoding='utf-8') as infof: with open(info_json_fn, encoding='utf-8') as infof:
info_dict = json.load(infof) info_dict = json.load(infof)
expect_info_dict(self, info_dict, tc.get('info_dict', {})) expect_info_dict(self, info_dict, tc.get('info_dict', {}))
@ -260,7 +249,7 @@ def generator(test_case, tname):
# extractor returns full results even with extract_flat # extractor returns full results even with extract_flat
res_tcs = [{'info_dict': e} for e in res_dict['entries']] res_tcs = [{'info_dict': e} for e in res_dict['entries']]
try_rm_tcs_files(res_tcs) try_rm_tcs_files(res_tcs)
ydl.close()
return test_template return test_template
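The retry logic rewritten above treats a test case's `expected_exception` as matched if the raised error, or anything in its `exc_info` chain, has that class name. Pulled out of the generator, the check is just (illustrative restatement; the real helper closes over `test_case`):

# Standalone restatement of the expected-exception check added on master.
def match_exception(err, expected_exception, exc_info=()):
    if not expected_exception:
        return False
    if err.__class__.__name__ == expected_exception:
        return True
    return any(exc.__class__.__name__ == expected_exception for exc in exc_info)

assert match_exception(ValueError('boom'), 'ValueError')
assert not match_exception(ValueError('boom'), 'ExtractorError')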

@ -1,139 +0,0 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import http.cookiejar
from test.helper import FakeYDL
from yt_dlp.downloader.external import (
Aria2cFD,
AxelFD,
CurlFD,
FFmpegFD,
HttpieFD,
WgetFD,
)
TEST_COOKIE = {
'version': 0,
'name': 'test',
'value': 'ytdlp',
'port': None,
'port_specified': False,
'domain': '.example.com',
'domain_specified': True,
'domain_initial_dot': False,
'path': '/',
'path_specified': True,
'secure': False,
'expires': None,
'discard': False,
'comment': None,
'comment_url': None,
'rest': {},
}
TEST_INFO = {'url': 'http://www.example.com/'}
class TestHttpieFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = HttpieFD(ydl, {})
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['http', '--download', '--output', 'test', 'http://www.example.com/'])
# Test cookie header is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['http', '--download', '--output', 'test', 'http://www.example.com/', 'Cookie:test=ytdlp'])
class TestAxelFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = AxelFD(ydl, {})
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['axel', '-o', 'test', '--', 'http://www.example.com/'])
# Test cookie header is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertEqual(
downloader._make_cmd('test', TEST_INFO),
['axel', '-o', 'test', '-H', 'Cookie: test=ytdlp', '--max-redirect=0', '--', 'http://www.example.com/'])
class TestWgetFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = WgetFD(ydl, {})
self.assertNotIn('--load-cookies', downloader._make_cmd('test', TEST_INFO))
# Test cookiejar tempfile arg is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertIn('--load-cookies', downloader._make_cmd('test', TEST_INFO))
class TestCurlFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = CurlFD(ydl, {})
self.assertNotIn('--cookie', downloader._make_cmd('test', TEST_INFO))
# Test cookie header is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
self.assertIn('--cookie', downloader._make_cmd('test', TEST_INFO))
self.assertIn('test=ytdlp', downloader._make_cmd('test', TEST_INFO))
class TestAria2cFD(unittest.TestCase):
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = Aria2cFD(ydl, {})
downloader._make_cmd('test', TEST_INFO)
self.assertFalse(hasattr(downloader, '_cookies_tempfile'))
# Test cookiejar tempfile arg is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
cmd = downloader._make_cmd('test', TEST_INFO)
self.assertIn(f'--load-cookies={downloader._cookies_tempfile}', cmd)
@unittest.skipUnless(FFmpegFD.available(), 'ffmpeg not found')
class TestFFmpegFD(unittest.TestCase):
_args = []
def _test_cmd(self, args):
self._args = args
def test_make_cmd(self):
with FakeYDL() as ydl:
downloader = FFmpegFD(ydl, {})
downloader._debug_cmd = self._test_cmd
downloader._call_downloader('test', {**TEST_INFO, 'ext': 'mp4'})
self.assertEqual(self._args, [
'ffmpeg', '-y', '-hide_banner', '-i', 'http://www.example.com/',
'-c', 'copy', '-f', 'mp4', 'file:test'])
# Test cookies arg is added
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(**TEST_COOKIE))
downloader._call_downloader('test', {**TEST_INFO, 'ext': 'mp4'})
self.assertEqual(self._args, [
'ffmpeg', '-y', '-hide_banner', '-cookies', 'test=ytdlp; path=/; domain=.example.com;\r\n',
'-i', 'http://www.example.com/', '-c', 'copy', '-f', 'mp4', 'file:test'])
# Test with non-url input (ffmpeg reads from stdin '-' for websockets)
downloader._call_downloader('test', {'url': 'x', 'ext': 'mp4'})
self.assertEqual(self._args, [
'ffmpeg', '-y', '-hide_banner', '-i', 'x', '-c', 'copy', '-f', 'mp4', 'file:test'])
if __name__ == '__main__':
unittest.main()
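All of the removed tests above follow one pattern: seed `ydl.cookiejar` with `TEST_COOKIE` and assert that the downloader's command line grows a cookie argument. The header-style downloaders (httpie, axel, curl) ultimately need the jar flattened into `name=value` pairs; a simplification that ignores URL matching (the real code uses `cookiejar.get_cookie_header(url)`, which does domain/path scoping):

# Simplified sketch of flattening a cookiejar into the "Cookie:" value
# the httpie/axel/curl tests above expect on the command line.
import http.cookiejar

jar = http.cookiejar.CookieJar()
jar.set_cookie(http.cookiejar.Cookie(
    0, 'test', 'ytdlp', None, False, '.example.com', True, False,
    '/', True, False, None, False, None, None, {}))

header_value = '; '.join(f'{c.name}={c.value}' for c in jar)
assert header_value == 'test=ytdlp'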

@ -16,7 +16,6 @@ from test.helper import http_server_port, try_rm
from yt_dlp import YoutubeDL from yt_dlp import YoutubeDL
from yt_dlp.downloader.http import HttpFD from yt_dlp.downloader.http import HttpFD
from yt_dlp.utils import encodeFilename from yt_dlp.utils import encodeFilename
from yt_dlp.utils._utils import _YDLLogger as FakeLogger
TEST_DIR = os.path.dirname(os.path.abspath(__file__)) TEST_DIR = os.path.dirname(os.path.abspath(__file__))
@ -38,9 +37,9 @@ class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
end = int(mobj.group(2)) end = int(mobj.group(2))
valid_range = start is not None and end is not None valid_range = start is not None and end is not None
if valid_range: if valid_range:
content_range = f'bytes {start}-{end}' content_range = 'bytes %d-%d' % (start, end)
if total: if total:
content_range += f'/{total}' content_range += '/%d' % total
self.send_header('Content-Range', content_range) self.send_header('Content-Range', content_range)
return (end - start + 1) if valid_range else total return (end - start + 1) if valid_range else total
@ -68,6 +67,17 @@ class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
assert False assert False
class FakeLogger:
def debug(self, msg):
pass
def warning(self, msg):
pass
def error(self, msg):
pass
class TestHttpFD(unittest.TestCase): class TestHttpFD(unittest.TestCase):
def setUp(self): def setUp(self):
self.httpd = http.server.HTTPServer( self.httpd = http.server.HTTPServer(
@ -84,7 +94,7 @@ class TestHttpFD(unittest.TestCase):
filename = 'testfile.mp4' filename = 'testfile.mp4'
try_rm(encodeFilename(filename)) try_rm(encodeFilename(filename))
self.assertTrue(downloader.real_download(filename, { self.assertTrue(downloader.real_download(filename, {
'url': f'http://127.0.0.1:{self.port}/{ep}', 'url': 'http://127.0.0.1:%d/%s' % (self.port, ep),
}), ep) }), ep)
self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE, ep) self.assertEqual(os.path.getsize(encodeFilename(filename)), TEST_SIZE, ep)
try_rm(encodeFilename(filename)) try_rm(encodeFilename(filename))

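The range handling above is the only non-trivial piece of this test server: a valid `Range: bytes=start-end` request is answered with `Content-Range: bytes start-end/total` and a partial length. Condensed into a sketch of the header construction only:

# Sketch of the Content-Range construction used by the handler above
# (RFC 9110 byte-range form "bytes start-end/total").
def content_range(start, end, total=None):
    value = f'bytes {start}-{end}'
    if total:
        value += f'/{total}'
    return value

assert content_range(0, 9, 100) == 'bytes 0-9/100'
assert content_range(5, 8) == 'bytes 5-8'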
@ -45,9 +45,6 @@ class TestExecution(unittest.TestCase):
self.assertTrue(os.path.exists(LAZY_EXTRACTORS)) self.assertTrue(os.path.exists(LAZY_EXTRACTORS))
_, stderr = self.run_yt_dlp(opts=('-s', 'test:')) _, stderr = self.run_yt_dlp(opts=('-s', 'test:'))
# `MIN_RECOMMENDED` emits a deprecated feature warning for deprecated Python versions
if stderr and stderr.startswith('Deprecated Feature: Support for Python'):
stderr = ''
self.assertFalse(stderr) self.assertFalse(stderr)
subprocess.check_call([sys.executable, 'test/test_all_urls.py'], cwd=rootDir, stdout=subprocess.DEVNULL) subprocess.check_call([sys.executable, 'test/test_all_urls.py'], cwd=rootDir, stdout=subprocess.DEVNULL)

@ -0,0 +1,500 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import gzip
import http.cookiejar
import http.server
import io
import pathlib
import ssl
import tempfile
import threading
import urllib.error
import urllib.request
import zlib
from test.helper import http_server_port
from yt_dlp import YoutubeDL
from yt_dlp.dependencies import brotli
from yt_dlp.utils import sanitized_Request, urlencode_postdata
from .helper import FakeYDL
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
protocol_version = 'HTTP/1.1'
def log_message(self, format, *args):
pass
def _headers(self):
payload = str(self.headers).encode('utf-8')
self.send_response(200)
self.send_header('Content-Type', 'application/json')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
def _redirect(self):
self.send_response(int(self.path[len('/redirect_'):]))
self.send_header('Location', '/method')
self.send_header('Content-Length', '0')
self.end_headers()
def _method(self, method, payload=None):
self.send_response(200)
self.send_header('Content-Length', str(len(payload or '')))
self.send_header('Method', method)
self.end_headers()
if payload:
self.wfile.write(payload)
def _status(self, status):
payload = f'<html>{status} NOT FOUND</html>'.encode()
self.send_response(int(status))
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
def _read_data(self):
if 'Content-Length' in self.headers:
return self.rfile.read(int(self.headers['Content-Length']))
def do_POST(self):
data = self._read_data()
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('POST', data)
elif self.path.startswith('/headers'):
self._headers()
else:
self._status(404)
def do_HEAD(self):
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('HEAD')
else:
self._status(404)
def do_PUT(self):
data = self._read_data()
if self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('PUT', data)
else:
self._status(404)
def do_GET(self):
if self.path == '/video.html':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload))) # required for persistent connections
self.end_headers()
self.wfile.write(payload)
elif self.path == '/vid.mp4':
payload = b'\x00\x00\x00\x00\x20\x66\x74[video]'
self.send_response(200)
self.send_header('Content-Type', 'video/mp4')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
elif self.path == '/%E4%B8%AD%E6%96%87.html':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
elif self.path == '/%c7%9f':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
elif self.path.startswith('/redirect_'):
self._redirect()
elif self.path.startswith('/method'):
self._method('GET')
elif self.path.startswith('/headers'):
self._headers()
elif self.path == '/trailing_garbage':
payload = b'<html><video src="/vid.mp4" /></html>'
self.send_response(200)
self.send_header('Content-Type', 'text/html; charset=utf-8')
self.send_header('Content-Encoding', 'gzip')
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
f.write(payload)
compressed = buf.getvalue() + b'trailing garbage'
self.send_header('Content-Length', str(len(compressed)))
self.end_headers()
self.wfile.write(compressed)
elif self.path == '/302-non-ascii-redirect':
new_url = f'http://127.0.0.1:{http_server_port(self.server)}/中文.html'
self.send_response(301)
self.send_header('Location', new_url)
self.send_header('Content-Length', '0')
self.end_headers()
elif self.path == '/content-encoding':
encodings = self.headers.get('ytdl-encoding', '')
payload = b'<html><video src="/vid.mp4" /></html>'
for encoding in filter(None, (e.strip() for e in encodings.split(','))):
if encoding == 'br' and brotli:
payload = brotli.compress(payload)
elif encoding == 'gzip':
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
f.write(payload)
payload = buf.getvalue()
elif encoding == 'deflate':
payload = zlib.compress(payload)
elif encoding == 'unsupported':
payload = b'raw'
break
else:
self._status(415)
return
self.send_response(200)
self.send_header('Content-Encoding', encodings)
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload)
else:
self._status(404)
def send_header(self, keyword, value):
"""
Forcibly allow the HTTP server to send non-percent-encoded, non-ASCII characters in headers.
This is against what is defined in RFC 3986; however, we need to test that we support this,
since some sites incorrectly do this.
"""
if keyword.lower() == 'connection':
return super().send_header(keyword, value)
if not hasattr(self, '_headers_buffer'):
self._headers_buffer = []
self._headers_buffer.append(f'{keyword}: {value}\r\n'.encode())
class FakeLogger:
def debug(self, msg):
pass
def warning(self, msg):
pass
def error(self, msg):
pass
class TestHTTP(unittest.TestCase):
def setUp(self):
# HTTP server
self.http_httpd = http.server.ThreadingHTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.http_port = http_server_port(self.http_httpd)
self.http_server_thread = threading.Thread(target=self.http_httpd.serve_forever)
# FIXME: we should probably stop the http server thread after each test
# See: https://github.com/yt-dlp/yt-dlp/pull/7094#discussion_r1199746041
self.http_server_thread.daemon = True
self.http_server_thread.start()
# HTTPS server
certfn = os.path.join(TEST_DIR, 'testcert.pem')
self.https_httpd = http.server.ThreadingHTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
self.https_httpd.socket = sslctx.wrap_socket(self.https_httpd.socket, server_side=True)
self.https_port = http_server_port(self.https_httpd)
self.https_server_thread = threading.Thread(target=self.https_httpd.serve_forever)
self.https_server_thread.daemon = True
self.https_server_thread.start()
def test_nocheckcertificate(self):
with FakeYDL({'logger': FakeLogger()}) as ydl:
with self.assertRaises(urllib.error.URLError):
ydl.urlopen(sanitized_Request(f'https://127.0.0.1:{self.https_port}/headers'))
with FakeYDL({'logger': FakeLogger(), 'nocheckcertificate': True}) as ydl:
r = ydl.urlopen(sanitized_Request(f'https://127.0.0.1:{self.https_port}/headers'))
self.assertEqual(r.status, 200)
r.close()
def test_percent_encode(self):
with FakeYDL() as ydl:
# Unicode characters should be encoded with uppercase percent-encoding
res = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/中文.html'))
self.assertEqual(res.status, 200)
res.close()
# don't normalize existing percent encodings
res = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/%c7%9f'))
self.assertEqual(res.status, 200)
res.close()
def test_unicode_path_redirection(self):
with FakeYDL() as ydl:
r = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/302-non-ascii-redirect'))
self.assertEqual(r.url, f'http://127.0.0.1:{self.http_port}/%E4%B8%AD%E6%96%87.html')
r.close()
def test_redirect(self):
with FakeYDL() as ydl:
def do_req(redirect_status, method):
data = b'testdata' if method in ('POST', 'PUT') else None
res = ydl.urlopen(sanitized_Request(
f'http://127.0.0.1:{self.http_port}/redirect_{redirect_status}', method=method, data=data))
return res.read().decode('utf-8'), res.headers.get('method', '')
# A 303 must use either GET or HEAD for the subsequent request
self.assertEqual(do_req(303, 'POST'), ('', 'GET'))
self.assertEqual(do_req(303, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(303, 'PUT'), ('', 'GET'))
# 301 and 302 turn only POST into a GET
self.assertEqual(do_req(301, 'POST'), ('', 'GET'))
self.assertEqual(do_req(301, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(302, 'POST'), ('', 'GET'))
self.assertEqual(do_req(302, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(301, 'PUT'), ('testdata', 'PUT'))
self.assertEqual(do_req(302, 'PUT'), ('testdata', 'PUT'))
# 307 and 308 should not change method
for m in ('POST', 'PUT'):
self.assertEqual(do_req(307, m), ('testdata', m))
self.assertEqual(do_req(308, m), ('testdata', m))
self.assertEqual(do_req(307, 'HEAD'), ('', 'HEAD'))
self.assertEqual(do_req(308, 'HEAD'), ('', 'HEAD'))
# These should not redirect and instead raise an HTTPError
for code in (300, 304, 305, 306):
with self.assertRaises(urllib.error.HTTPError):
do_req(code, 'GET')
def test_content_type(self):
# https://github.com/yt-dlp/yt-dlp/commit/379a4f161d4ad3e40932dcf5aca6e6fb9715ab28
with FakeYDL({'nocheckcertificate': True}) as ydl:
# method should be auto-detected as POST
r = sanitized_Request(f'https://localhost:{self.https_port}/headers', data=urlencode_postdata({'test': 'test'}))
headers = ydl.urlopen(r).read().decode('utf-8')
self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
# test http
r = sanitized_Request(f'http://localhost:{self.http_port}/headers', data=urlencode_postdata({'test': 'test'}))
headers = ydl.urlopen(r).read().decode('utf-8')
self.assertIn('Content-Type: application/x-www-form-urlencoded', headers)
def test_cookiejar(self):
with FakeYDL() as ydl:
ydl.cookiejar.set_cookie(http.cookiejar.Cookie(
0, 'test', 'ytdlp', None, False, '127.0.0.1', True,
False, '/headers', True, False, None, False, None, None, {}))
data = ydl.urlopen(sanitized_Request(f'http://127.0.0.1:{self.http_port}/headers')).read()
self.assertIn(b'Cookie: test=ytdlp', data)
def test_no_compression_compat_header(self):
with FakeYDL() as ydl:
data = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/headers',
headers={'Youtubedl-no-compression': True})).read()
self.assertIn(b'Accept-Encoding: identity', data)
self.assertNotIn(b'youtubedl-no-compression', data.lower())
def test_gzip_trailing_garbage(self):
# https://github.com/ytdl-org/youtube-dl/commit/aa3e950764337ef9800c936f4de89b31c00dfcf5
# https://github.com/ytdl-org/youtube-dl/commit/6f2ec15cee79d35dba065677cad9da7491ec6e6f
with FakeYDL() as ydl:
data = ydl.urlopen(sanitized_Request(f'http://localhost:{self.http_port}/trailing_garbage')).read().decode('utf-8')
self.assertEqual(data, '<html><video src="/vid.mp4" /></html>')
@unittest.skipUnless(brotli, 'brotli support is not installed')
def test_brotli(self):
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'br'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'br')
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_deflate(self):
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'deflate'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'deflate')
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_gzip(self):
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'gzip'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'gzip')
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_multiple_encodings(self):
# https://www.rfc-editor.org/rfc/rfc9110.html#section-8.4
with FakeYDL() as ydl:
for pair in ('gzip,deflate', 'deflate, gzip', 'gzip, gzip', 'deflate, deflate'):
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': pair}))
self.assertEqual(res.headers.get('Content-Encoding'), pair)
self.assertEqual(res.read(), b'<html><video src="/vid.mp4" /></html>')
def test_unsupported_encoding(self):
# it should return the raw content
with FakeYDL() as ydl:
res = ydl.urlopen(
sanitized_Request(
f'http://127.0.0.1:{self.http_port}/content-encoding',
headers={'ytdl-encoding': 'unsupported'}))
self.assertEqual(res.headers.get('Content-Encoding'), 'unsupported')
self.assertEqual(res.read(), b'raw')
class TestClientCert(unittest.TestCase):
def setUp(self):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
self.certdir = os.path.join(TEST_DIR, 'testdata', 'certificate')
cacertfn = os.path.join(self.certdir, 'ca.crt')
self.httpd = http.server.HTTPServer(('127.0.0.1', 0), HTTPTestRequestHandler)
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.verify_mode = ssl.CERT_REQUIRED
sslctx.load_verify_locations(cafile=cacertfn)
sslctx.load_cert_chain(certfn, None)
self.httpd.socket = sslctx.wrap_socket(self.httpd.socket, server_side=True)
self.port = http_server_port(self.httpd)
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
self.server_thread.daemon = True
self.server_thread.start()
def _run_test(self, **params):
ydl = YoutubeDL({
'logger': FakeLogger(),
# Disable client-side validation of unacceptable self-signed testcert.pem
# The test is of a check on the server side, so unaffected
'nocheckcertificate': True,
**params,
})
r = ydl.extract_info(f'https://127.0.0.1:{self.port}/video.html')
self.assertEqual(r['url'], f'https://127.0.0.1:{self.port}/vid.mp4')
def test_certificate_combined_nopass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'clientwithkey.crt'))
def test_certificate_nocombined_nopass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'client.crt'),
client_certificate_key=os.path.join(self.certdir, 'client.key'))
def test_certificate_combined_pass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'clientwithencryptedkey.crt'),
client_certificate_password='foobar')
def test_certificate_nocombined_pass(self):
self._run_test(client_certificate=os.path.join(self.certdir, 'client.crt'),
client_certificate_key=os.path.join(self.certdir, 'clientencrypted.key'),
client_certificate_password='foobar')
def _build_proxy_handler(name):
class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
proxy_name = name
def log_message(self, format, *args):
pass
def do_GET(self):
self.send_response(200)
self.send_header('Content-Type', 'text/plain; charset=utf-8')
self.end_headers()
self.wfile.write(f'{self.proxy_name}: {self.path}'.encode())
return HTTPTestRequestHandler
class TestProxy(unittest.TestCase):
def setUp(self):
self.proxy = http.server.HTTPServer(
('127.0.0.1', 0), _build_proxy_handler('normal'))
self.port = http_server_port(self.proxy)
self.proxy_thread = threading.Thread(target=self.proxy.serve_forever)
self.proxy_thread.daemon = True
self.proxy_thread.start()
self.geo_proxy = http.server.HTTPServer(
('127.0.0.1', 0), _build_proxy_handler('geo'))
self.geo_port = http_server_port(self.geo_proxy)
self.geo_proxy_thread = threading.Thread(target=self.geo_proxy.serve_forever)
self.geo_proxy_thread.daemon = True
self.geo_proxy_thread.start()
def test_proxy(self):
geo_proxy = f'127.0.0.1:{self.geo_port}'
ydl = YoutubeDL({
'proxy': f'127.0.0.1:{self.port}',
'geo_verification_proxy': geo_proxy,
})
url = 'http://foo.com/bar'
response = ydl.urlopen(url).read().decode()
self.assertEqual(response, f'normal: {url}')
req = urllib.request.Request(url)
req.add_header('Ytdl-request-proxy', geo_proxy)
response = ydl.urlopen(req).read().decode()
self.assertEqual(response, f'geo: {url}')
def test_proxy_with_idn(self):
ydl = YoutubeDL({
'proxy': f'127.0.0.1:{self.port}',
})
url = 'http://中文.tw/'
response = ydl.urlopen(url).read().decode()
# b'xn--fiq228c' is '中文'.encode('idna')
self.assertEqual(response, 'normal: http://xn--fiq228c.tw/')
class TestFileURL(unittest.TestCase):
# See https://github.com/ytdl-org/youtube-dl/issues/8227
def test_file_urls(self):
tf = tempfile.NamedTemporaryFile(delete=False)
tf.write(b'foobar')
tf.close()
url = pathlib.Path(tf.name).as_uri()
with FakeYDL() as ydl:
self.assertRaisesRegex(
urllib.error.URLError, 'file:// URLs are explicitly disabled in yt-dlp for security reasons', ydl.urlopen, url)
with FakeYDL({'enable_file_urls': True}) as ydl:
res = ydl.urlopen(url)
self.assertEqual(res.read(), b'foobar')
res.close()
os.unlink(tf.name)
if __name__ == '__main__':
unittest.main()
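test_redirect above pins the method-rewriting rules down precisely: 303 downgrades everything except HEAD to GET, 301 and 302 downgrade only POST, and 307/308 never change the method. Restated as a pure function (an illustrative condensation, not yt-dlp code):

# Illustrative restatement of the redirect rules asserted in
# test_redirect above.
def redirected_method(status, method):
    if status == 303 and method != 'HEAD':
        return 'GET'
    if status in (301, 302) and method == 'POST':
        return 'GET'
    return method  # 307/308 preserve the method

assert redirected_method(303, 'PUT') == 'GET'
assert redirected_method(302, 'PUT') == 'PUT'
assert redirected_method(301, 'POST') == 'GET'
assert redirected_method(308, 'POST') == 'POST'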

@ -1,380 +0,0 @@
import abc
import base64
import contextlib
import functools
import json
import os
import random
import ssl
import threading
from http.server import BaseHTTPRequestHandler
from socketserver import ThreadingTCPServer
import pytest
from test.helper import http_server_port, verify_address_availability
from test.test_networking import TEST_DIR
from test.test_socks import IPv6ThreadingTCPServer
from yt_dlp.dependencies import urllib3
from yt_dlp.networking import Request
from yt_dlp.networking.exceptions import HTTPError, ProxyError, SSLError
class HTTPProxyAuthMixin:
def proxy_auth_error(self):
self.send_response(407)
self.send_header('Proxy-Authenticate', 'Basic realm="test http proxy"')
self.end_headers()
return False
def do_proxy_auth(self, username, password):
if username is None and password is None:
return True
proxy_auth_header = self.headers.get('Proxy-Authorization', None)
if proxy_auth_header is None:
return self.proxy_auth_error()
if not proxy_auth_header.startswith('Basic '):
return self.proxy_auth_error()
auth = proxy_auth_header[6:]
try:
auth_username, auth_password = base64.b64decode(auth).decode().split(':', 1)
except Exception:
return self.proxy_auth_error()
if auth_username != (username or '') or auth_password != (password or ''):
return self.proxy_auth_error()
return True
class HTTPProxyHandler(BaseHTTPRequestHandler, HTTPProxyAuthMixin):
def __init__(self, *args, proxy_info=None, username=None, password=None, request_handler=None, **kwargs):
self.username = username
self.password = password
self.proxy_info = proxy_info
super().__init__(*args, **kwargs)
def do_GET(self):
if not self.do_proxy_auth(self.username, self.password):
self.server.close_request(self.request)
return
if self.path.endswith('/proxy_info'):
payload = json.dumps(self.proxy_info or {
'client_address': self.client_address,
'connect': False,
'connect_host': None,
'connect_port': None,
'headers': dict(self.headers),
'path': self.path,
'proxy': ':'.join(str(y) for y in self.connection.getsockname()),
})
self.send_response(200)
self.send_header('Content-Type', 'application/json; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload.encode())
else:
self.send_response(404)
self.end_headers()
self.server.close_request(self.request)
if urllib3:
import urllib3.util.ssltransport
class SSLTransport(urllib3.util.ssltransport.SSLTransport):
"""
Modified version of urllib3's SSLTransport to support server-side SSL.
This allows us to chain multiple TLS connections.
"""
def __init__(self, socket, ssl_context, server_hostname=None, suppress_ragged_eofs=True, server_side=False):
self.incoming = ssl.MemoryBIO()
self.outgoing = ssl.MemoryBIO()
self.suppress_ragged_eofs = suppress_ragged_eofs
self.socket = socket
self.sslobj = ssl_context.wrap_bio(
self.incoming,
self.outgoing,
server_hostname=server_hostname,
server_side=server_side,
)
self._ssl_io_loop(self.sslobj.do_handshake)
@property
def _io_refs(self):
return self.socket._io_refs
@_io_refs.setter
def _io_refs(self, value):
self.socket._io_refs = value
def shutdown(self, *args, **kwargs):
self.socket.shutdown(*args, **kwargs)
else:
SSLTransport = None
class HTTPSProxyHandler(HTTPProxyHandler):
def __init__(self, request, *args, **kwargs):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
if isinstance(request, ssl.SSLSocket):
request = SSLTransport(request, ssl_context=sslctx, server_side=True)
else:
request = sslctx.wrap_socket(request, server_side=True)
super().__init__(request, *args, **kwargs)
class HTTPConnectProxyHandler(BaseHTTPRequestHandler, HTTPProxyAuthMixin):
protocol_version = 'HTTP/1.1'
default_request_version = 'HTTP/1.1'
def __init__(self, *args, username=None, password=None, request_handler=None, **kwargs):
self.username = username
self.password = password
self.request_handler = request_handler
super().__init__(*args, **kwargs)
def do_CONNECT(self):
if not self.do_proxy_auth(self.username, self.password):
self.server.close_request(self.request)
return
self.send_response(200)
self.end_headers()
proxy_info = {
'client_address': self.client_address,
'connect': True,
'connect_host': self.path.split(':')[0],
'connect_port': int(self.path.split(':')[1]),
'headers': dict(self.headers),
'path': self.path,
'proxy': ':'.join(str(y) for y in self.connection.getsockname()),
}
self.request_handler(self.request, self.client_address, self.server, proxy_info=proxy_info)
self.server.close_request(self.request)
class HTTPSConnectProxyHandler(HTTPConnectProxyHandler):
def __init__(self, request, *args, **kwargs):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
request = sslctx.wrap_socket(request, server_side=True)
self._original_request = request
super().__init__(request, *args, **kwargs)
def do_CONNECT(self):
super().do_CONNECT()
self.server.close_request(self._original_request)
@contextlib.contextmanager
def proxy_server(proxy_server_class, request_handler, bind_ip=None, **proxy_server_kwargs):
server = server_thread = None
try:
bind_address = bind_ip or '127.0.0.1'
server_type = ThreadingTCPServer if '.' in bind_address else IPv6ThreadingTCPServer
server = server_type(
(bind_address, 0), functools.partial(proxy_server_class, request_handler=request_handler, **proxy_server_kwargs))
server_port = http_server_port(server)
server_thread = threading.Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()
if '.' not in bind_address:
yield f'[{bind_address}]:{server_port}'
else:
yield f'{bind_address}:{server_port}'
finally:
server.shutdown()
server.server_close()
server_thread.join(2.0)
class HTTPProxyTestContext(abc.ABC):
REQUEST_HANDLER_CLASS = None
REQUEST_PROTO = None
def http_server(self, server_class, *args, **kwargs):
return proxy_server(server_class, self.REQUEST_HANDLER_CLASS, *args, **kwargs)
@abc.abstractmethod
def proxy_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs) -> dict:
"""return a dict of proxy_info"""
class HTTPProxyHTTPTestContext(HTTPProxyTestContext):
# Standard HTTP Proxy for http requests
REQUEST_HANDLER_CLASS = HTTPProxyHandler
REQUEST_PROTO = 'http'
def proxy_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'http://{target_domain or "127.0.0.1"}:{target_port or "40000"}/proxy_info', **req_kwargs)
handler.validate(request)
return json.loads(handler.send(request).read().decode())
class HTTPProxyHTTPSTestContext(HTTPProxyTestContext):
# HTTP Connect proxy, for https requests
REQUEST_HANDLER_CLASS = HTTPSProxyHandler
REQUEST_PROTO = 'https'
def proxy_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'https://{target_domain or "127.0.0.1"}:{target_port or "40000"}/proxy_info', **req_kwargs)
handler.validate(request)
return json.loads(handler.send(request).read().decode())
CTX_MAP = {
'http': HTTPProxyHTTPTestContext,
'https': HTTPProxyHTTPSTestContext,
}
@pytest.fixture(scope='module')
def ctx(request):
return CTX_MAP[request.param]()
@pytest.mark.parametrize(
'handler', ['Urllib', 'Requests', 'CurlCFFI'], indirect=True)
@pytest.mark.parametrize('ctx', ['http'], indirect=True) # pure http proxy can only support http
class TestHTTPProxy:
def test_http_no_auth(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler) as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is False
assert 'Proxy-Authorization' not in proxy_info['headers']
def test_http_auth(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler, username='test', password='test') as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://test:test@{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert 'Proxy-Authorization' in proxy_info['headers']
def test_http_bad_auth(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler, username='test', password='test') as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://test:bad@{server_address}'}) as rh:
with pytest.raises(HTTPError) as exc_info:
ctx.proxy_info_request(rh)
assert exc_info.value.response.status == 407
exc_info.value.response.close()
def test_http_source_address(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'},
source_address=source_address) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['client_address'][0] == source_address
@pytest.mark.skip_handler('Urllib', 'urllib does not support https proxies')
def test_https(self, handler, ctx):
with ctx.http_server(HTTPSProxyHandler) as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is False
assert 'Proxy-Authorization' not in proxy_info['headers']
@pytest.mark.skip_handler('Urllib', 'urllib does not support https proxies')
def test_https_verify_failed(self, handler, ctx):
with ctx.http_server(HTTPSProxyHandler) as server_address:
with handler(verify=True, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
# Accept SSLError as may not be feasible to tell if it is proxy or request error.
# note: if request proto also does ssl verification, this may also be the error of the request.
# Until we can support passing custom cacerts to handlers, we cannot properly test this for all cases.
with pytest.raises((ProxyError, SSLError)):
ctx.proxy_info_request(rh)
def test_http_with_idn(self, handler, ctx):
with ctx.http_server(HTTPProxyHandler) as server_address:
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh, target_domain='中文.tw')
assert proxy_info['proxy'] == server_address
assert proxy_info['path'].startswith('http://xn--fiq228c.tw')
assert proxy_info['headers']['Host'].split(':', 1)[0] == 'xn--fiq228c.tw'
@pytest.mark.parametrize(
'handler,ctx', [
('Requests', 'https'),
('CurlCFFI', 'https'),
], indirect=True)
class TestHTTPConnectProxy:
def test_http_connect_no_auth(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler) as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'http://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is True
assert 'Proxy-Authorization' not in proxy_info['headers']
def test_http_connect_auth(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler, username='test', password='test') as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'http://test:test@{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert 'Proxy-Authorization' in proxy_info['headers']
@pytest.mark.skip_handler(
'Requests',
'bug in urllib3 causes unclosed socket: https://github.com/urllib3/urllib3/issues/3374',
)
def test_http_connect_bad_auth(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler, username='test', password='test') as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'http://test:bad@{server_address}'}) as rh:
with pytest.raises(ProxyError):
ctx.proxy_info_request(rh)
def test_http_connect_source_address(self, handler, ctx):
with ctx.http_server(HTTPConnectProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={ctx.REQUEST_PROTO: f'http://{server_address}'},
source_address=source_address,
verify=False) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['client_address'][0] == source_address
@pytest.mark.skipif(urllib3 is None, reason='requires urllib3 to test')
def test_https_connect_proxy(self, handler, ctx):
with ctx.http_server(HTTPSConnectProxyHandler) as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert proxy_info['connect'] is True
assert 'Proxy-Authorization' not in proxy_info['headers']
@pytest.mark.skipif(urllib3 is None, reason='requires urllib3 to test')
def test_https_connect_verify_failed(self, handler, ctx):
with ctx.http_server(HTTPSConnectProxyHandler) as server_address:
with handler(verify=True, proxies={ctx.REQUEST_PROTO: f'https://{server_address}'}) as rh:
# Accept SSLError as may not be feasible to tell if it is proxy or request error.
# note: if request proto also does ssl verification, this may also be the error of the request.
# Until we can support passing custom cacerts to handlers, we cannot properly test this for all cases.
with pytest.raises((ProxyError, SSLError)):
ctx.proxy_info_request(rh)
@pytest.mark.skipif(urllib3 is None, reason='requires urllib3 to test')
def test_https_connect_proxy_auth(self, handler, ctx):
with ctx.http_server(HTTPSConnectProxyHandler, username='test', password='test') as server_address:
with handler(verify=False, proxies={ctx.REQUEST_PROTO: f'https://test:test@{server_address}'}) as rh:
proxy_info = ctx.proxy_info_request(rh)
assert proxy_info['proxy'] == server_address
assert 'Proxy-Authorization' in proxy_info['headers']

@ -29,11 +29,11 @@ class WarningLogger:
@is_download_test @is_download_test
class TestIqiyiSDKInterpreter(unittest.TestCase): class TestIqiyiSDKInterpreter(unittest.TestCase):
def test_iqiyi_sdk_interpreter(self): def test_iqiyi_sdk_interpreter(self):
""" '''
Test the functionality of IqiyiSDKInterpreter by trying to log in Test the functionality of IqiyiSDKInterpreter by trying to log in
If `sign` is incorrect, /validate call throws an HTTP 556 error If `sign` is incorrect, /validate call throws an HTTP 556 error
""" '''
logger = WarningLogger() logger = WarningLogger()
ie = IqiyiIE(FakeYDL({'logger': logger})) ie = IqiyiIE(FakeYDL({'logger': logger}))
ie._perform_login('foo', 'bar') ie._perform_login('foo', 'bar')

@ -92,7 +92,6 @@ class TestJSInterpreter(unittest.TestCase):
self._test('function f(){return 0 && 1 || 2;}', 2) self._test('function f(){return 0 && 1 || 2;}', 2)
self._test('function f(){return 0 ?? 42;}', 0) self._test('function f(){return 0 ?? 42;}', 0)
self._test('function f(){return "life, the universe and everything" < 42;}', False) self._test('function f(){return "life, the universe and everything" < 42;}', False)
self._test('function f(){return 0 - 7 * - 6;}', 42)
def test_array_access(self): def test_array_access(self):
self._test('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2.0] = 7; return x;}', [5, 2, 7]) self._test('function f(){var x = [1,2,3]; x[0] = 4; x[0] = 5; x[2.0] = 7; return x;}', [5, 2, 7])

@ -21,7 +21,7 @@ class TestNetRc(unittest.TestCase):
continue continue
self.assertTrue( self.assertTrue(
ie._NETRC_MACHINE, ie._NETRC_MACHINE,
f'Extractor {ie.IE_NAME} supports login, but is missing a _NETRC_MACHINE property') 'Extractor %s supports login, but is missing a _NETRC_MACHINE property' % ie.IE_NAME)
if __name__ == '__main__': if __name__ == '__main__':

File diff suppressed because it is too large Load Diff

@ -1,208 +0,0 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import pytest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import io
import random
import ssl
from yt_dlp.cookies import YoutubeDLCookieJar
from yt_dlp.dependencies import certifi
from yt_dlp.networking import Response
from yt_dlp.networking._helper import (
InstanceStoreMixin,
add_accept_encoding_header,
get_redirect_method,
make_socks_proxy_opts,
select_proxy,
ssl_load_certs,
)
from yt_dlp.networking.exceptions import (
HTTPError,
IncompleteRead,
)
from yt_dlp.socks import ProxyType
from yt_dlp.utils.networking import HTTPHeaderDict
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
class TestNetworkingUtils:
def test_select_proxy(self):
proxies = {
'all': 'socks5://example.com',
'http': 'http://example.com:1080',
'no': 'bypass.example.com,yt-dl.org',
}
assert select_proxy('https://example.com', proxies) == proxies['all']
assert select_proxy('http://example.com', proxies) == proxies['http']
assert select_proxy('http://bypass.example.com', proxies) is None
assert select_proxy('https://yt-dl.org', proxies) is None
@pytest.mark.parametrize('socks_proxy,expected', [
('socks5h://example.com', {
'proxytype': ProxyType.SOCKS5,
'addr': 'example.com',
'port': 1080,
'rdns': True,
'username': None,
'password': None,
}),
('socks5://user:@example.com:5555', {
'proxytype': ProxyType.SOCKS5,
'addr': 'example.com',
'port': 5555,
'rdns': False,
'username': 'user',
'password': '',
}),
('socks4://u%40ser:pa%20ss@127.0.0.1:1080', {
'proxytype': ProxyType.SOCKS4,
'addr': '127.0.0.1',
'port': 1080,
'rdns': False,
'username': 'u@ser',
'password': 'pa ss',
}),
('socks4a://:pa%20ss@127.0.0.1', {
'proxytype': ProxyType.SOCKS4A,
'addr': '127.0.0.1',
'port': 1080,
'rdns': True,
'username': '',
'password': 'pa ss',
}),
])
def test_make_socks_proxy_opts(self, socks_proxy, expected):
assert make_socks_proxy_opts(socks_proxy) == expected
def test_make_socks_proxy_unknown(self):
with pytest.raises(ValueError, match='Unknown SOCKS proxy version: socks'):
make_socks_proxy_opts('socks://127.0.0.1')
@pytest.mark.skipif(not certifi, reason='certifi is not installed')
def test_load_certifi(self):
context_certifi = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context_certifi.load_verify_locations(cafile=certifi.where())
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_load_certs(context, use_certifi=True)
assert context.get_ca_certs() == context_certifi.get_ca_certs()
context_default = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context_default.load_default_certs()
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_load_certs(context, use_certifi=False)
assert context.get_ca_certs() == context_default.get_ca_certs()
if context_default.get_ca_certs() == context_certifi.get_ca_certs():
pytest.skip('System uses certifi as default. The test is not valid')
@pytest.mark.parametrize('method,status,expected', [
('GET', 303, 'GET'),
('HEAD', 303, 'HEAD'),
('PUT', 303, 'GET'),
('POST', 301, 'GET'),
('HEAD', 301, 'HEAD'),
('POST', 302, 'GET'),
('HEAD', 302, 'HEAD'),
('PUT', 302, 'PUT'),
('POST', 308, 'POST'),
('POST', 307, 'POST'),
('HEAD', 308, 'HEAD'),
('HEAD', 307, 'HEAD'),
])
def test_get_redirect_method(self, method, status, expected):
assert get_redirect_method(method, status) == expected
@pytest.mark.parametrize('headers,supported_encodings,expected', [
({'Accept-Encoding': 'br'}, ['gzip', 'br'], {'Accept-Encoding': 'br'}),
({}, ['gzip', 'br'], {'Accept-Encoding': 'gzip, br'}),
({'Content-type': 'application/json'}, [], {'Content-type': 'application/json', 'Accept-Encoding': 'identity'}),
])
def test_add_accept_encoding_header(self, headers, supported_encodings, expected):
headers = HTTPHeaderDict(headers)
add_accept_encoding_header(headers, supported_encodings)
assert headers == HTTPHeaderDict(expected)
class TestInstanceStoreMixin:
class FakeInstanceStoreMixin(InstanceStoreMixin):
def _create_instance(self, **kwargs):
return random.randint(0, 1000000)
def _close_instance(self, instance):
pass
def test_mixin(self):
mixin = self.FakeInstanceStoreMixin()
assert mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}}) == mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}})
assert mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'e', 4}}) != mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}})
assert mixin._get_instance(d={'a': 1, 'b': 2, 'c': {'d', 4}} != mixin._get_instance(d={'a': 1, 'b': 2, 'g': {'d', 4}}))
assert mixin._get_instance(d={'a': 1}, e=[1, 2, 3]) == mixin._get_instance(d={'a': 1}, e=[1, 2, 3])
assert mixin._get_instance(d={'a': 1}, e=[1, 2, 3]) != mixin._get_instance(d={'a': 1}, e=[1, 2, 3, 4])
cookiejar = YoutubeDLCookieJar()
assert mixin._get_instance(b=[1, 2], c=cookiejar) == mixin._get_instance(b=[1, 2], c=cookiejar)
assert mixin._get_instance(b=[1, 2], c=cookiejar) != mixin._get_instance(b=[1, 2], c=YoutubeDLCookieJar())
# Different order
assert mixin._get_instance(c=cookiejar, b=[1, 2]) == mixin._get_instance(b=[1, 2], c=cookiejar)
m = mixin._get_instance(t=1234)
assert mixin._get_instance(t=1234) == m
mixin._clear_instances()
assert mixin._get_instance(t=1234) != m
class TestNetworkingExceptions:
@staticmethod
def create_response(status):
return Response(fp=io.BytesIO(b'test'), url='http://example.com', headers={'tesT': 'test'}, status=status)
def test_http_error(self):
response = self.create_response(403)
error = HTTPError(response)
assert error.status == 403
assert str(error) == error.msg == 'HTTP Error 403: Forbidden'
assert error.reason == response.reason
assert error.response is response
data = error.response.read()
assert data == b'test'
assert repr(error) == '<HTTPError 403: Forbidden>'
def test_redirect_http_error(self):
response = self.create_response(301)
error = HTTPError(response, redirect_loop=True)
assert str(error) == error.msg == 'HTTP Error 301: Moved Permanently (redirect loop detected)'
assert error.reason == 'Moved Permanently'
def test_incomplete_read_error(self):
error = IncompleteRead(4, 3, cause='test')
assert isinstance(error, IncompleteRead)
assert repr(error) == '<IncompleteRead: 4 bytes read, 3 more expected>'
assert str(error) == error.msg == '4 bytes read, 3 more expected'
assert error.partial == 4
assert error.expected == 3
assert error.cause == 'test'
error = IncompleteRead(3)
assert repr(error) == '<IncompleteRead: 3 bytes read>'
assert str(error) == '3 bytes read'

@ -27,7 +27,7 @@ class TestOverwrites(unittest.TestCase):
[ [
sys.executable, 'yt_dlp/__main__.py', sys.executable, 'yt_dlp/__main__.py',
'-o', 'test.webm', '-o', 'test.webm',
'https://www.youtube.com/watch?v=jNQXAC9IVRw', 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE) ], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = outp.communicate() sout, serr = outp.communicate()
self.assertTrue(b'has already been downloaded' in sout) self.assertTrue(b'has already been downloaded' in sout)
@ -39,7 +39,7 @@ class TestOverwrites(unittest.TestCase):
[ [
sys.executable, 'yt_dlp/__main__.py', '--yes-overwrites', sys.executable, 'yt_dlp/__main__.py', '--yes-overwrites',
'-o', 'test.webm', '-o', 'test.webm',
'https://www.youtube.com/watch?v=jNQXAC9IVRw', 'https://www.youtube.com/watch?v=jNQXAC9IVRw'
], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE) ], cwd=root_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = outp.communicate() sout, serr = outp.communicate()
self.assertTrue(b'has already been downloaded' not in sout) self.assertTrue(b'has already been downloaded' not in sout)

@ -31,7 +31,7 @@ class TestPlugins(unittest.TestCase):
# don't load modules with underscore prefix # don't load modules with underscore prefix
self.assertFalse( self.assertFalse(
f'{PACKAGE_NAME}.extractor._ignore' in sys.modules, f'{PACKAGE_NAME}.extractor._ignore' in sys.modules.keys(),
'loaded module beginning with underscore') 'loaded module beginning with underscore')
self.assertNotIn('IgnorePluginIE', plugins_ie.keys()) self.assertNotIn('IgnorePluginIE', plugins_ie.keys())

@ -59,7 +59,7 @@ class TestPostHooks(unittest.TestCase):
def hook_three(self, filename): def hook_three(self, filename):
self.files.append(filename) self.files.append(filename)
raise Exception(f'Test exception for \'{filename}\'') raise Exception('Test exception for \'%s\'' % filename)
def tearDown(self): def tearDown(self):
for f in self.files: for f in self.files:

@ -9,7 +9,7 @@ sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from yt_dlp import YoutubeDL from yt_dlp import YoutubeDL
from yt_dlp.utils import shell_quote from yt_dlp.compat import compat_shlex_quote
from yt_dlp.postprocessor import ( from yt_dlp.postprocessor import (
ExecPP, ExecPP,
FFmpegThumbnailsConvertorPP, FFmpegThumbnailsConvertorPP,
@ -65,7 +65,7 @@ class TestExec(unittest.TestCase):
def test_parse_cmd(self): def test_parse_cmd(self):
pp = ExecPP(YoutubeDL(), '') pp = ExecPP(YoutubeDL(), '')
info = {'filepath': 'file name'} info = {'filepath': 'file name'}
cmd = 'echo {}'.format(shell_quote(info['filepath'])) cmd = 'echo %s' % compat_shlex_quote(info['filepath'])
self.assertEqual(pp.parse_cmd('echo', info), cmd) self.assertEqual(pp.parse_cmd('echo', info), cmd)
self.assertEqual(pp.parse_cmd('echo {}', info), cmd) self.assertEqual(pp.parse_cmd('echo {}', info), cmd)
@ -125,8 +125,7 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, chapters, []) self._remove_marked_arrange_sponsors_test_impl(chapters, chapters, [])
def test_remove_marked_arrange_sponsors_ChapterWithSponsors(self): def test_remove_marked_arrange_sponsors_ChapterWithSponsors(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'), self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'preview'), self._sponsor_chapter(30, 40, 'preview'),
self._sponsor_chapter(50, 60, 'filler')] self._sponsor_chapter(50, 60, 'filler')]
@ -137,8 +136,7 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_SponsorBlockChapters(self): def test_remove_marked_arrange_sponsors_SponsorBlockChapters(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'chapter', title='sb c1'), self._sponsor_chapter(10, 20, 'chapter', title='sb c1'),
self._sponsor_chapter(15, 16, 'chapter', title='sb c2'), self._sponsor_chapter(15, 16, 'chapter', title='sb c2'),
self._sponsor_chapter(30, 40, 'preview'), self._sponsor_chapter(30, 40, 'preview'),
@ -151,14 +149,10 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_UniqueNamesForOverlappingSponsors(self): def test_remove_marked_arrange_sponsors_UniqueNamesForOverlappingSponsors(self):
chapters = [ chapters = self._chapters([120], ['c']) + [
*self._chapters([120], ['c']), self._sponsor_chapter(10, 45, 'sponsor'), self._sponsor_chapter(20, 40, 'selfpromo'),
self._sponsor_chapter(10, 45, 'sponsor'), self._sponsor_chapter(50, 70, 'sponsor'), self._sponsor_chapter(60, 85, 'selfpromo'),
self._sponsor_chapter(20, 40, 'selfpromo'), self._sponsor_chapter(90, 120, 'selfpromo'), self._sponsor_chapter(100, 110, 'sponsor')]
self._sponsor_chapter(50, 70, 'sponsor'),
self._sponsor_chapter(60, 85, 'selfpromo'),
self._sponsor_chapter(90, 120, 'selfpromo'),
self._sponsor_chapter(100, 110, 'sponsor')]
expected = self._chapters( expected = self._chapters(
[10, 20, 40, 45, 50, 60, 70, 85, 90, 100, 110, 120], [10, 20, 40, 45, 50, 60, 70, 85, 90, 100, 110, 120],
['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion', ['c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Sponsor, Unpaid/Self Promotion',
@ -178,8 +172,7 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, self._chapters([40], ['c']), cuts) chapters, self._chapters([40], ['c']), cuts)
def test_remove_marked_arrange_sponsors_ChapterWithSponsorsAndCuts(self): def test_remove_marked_arrange_sponsors_ChapterWithSponsorsAndCuts(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'), self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(30, 40, 'selfpromo', remove=True), self._sponsor_chapter(30, 40, 'selfpromo', remove=True),
self._sponsor_chapter(50, 60, 'interaction')] self._sponsor_chapter(50, 60, 'interaction')]
@ -192,29 +185,24 @@ class TestModifyChaptersPP(unittest.TestCase):
def test_remove_marked_arrange_sponsors_ChapterWithSponsorCutInTheMiddle(self): def test_remove_marked_arrange_sponsors_ChapterWithSponsorCutInTheMiddle(self):
cuts = [self._sponsor_chapter(20, 30, 'selfpromo', remove=True), cuts = [self._sponsor_chapter(20, 30, 'selfpromo', remove=True),
self._chapter(40, 50, remove=True)] self._chapter(40, 50, remove=True)]
chapters = [ chapters = self._chapters([70], ['c']) + [self._sponsor_chapter(10, 60, 'sponsor')] + cuts
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 60, 'sponsor'),
*cuts]
expected = self._chapters( expected = self._chapters(
[10, 40, 50], ['c', '[SponsorBlock]: Sponsor', 'c']) [10, 40, 50], ['c', '[SponsorBlock]: Sponsor', 'c'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithCutHidingSponsor(self): def test_remove_marked_arrange_sponsors_ChapterWithCutHidingSponsor(self):
cuts = [self._sponsor_chapter(20, 50, 'selfpromo', remove=True)] cuts = [self._sponsor_chapter(20, 50, 'selfpromo', remove=True)]
chapters = [ chapters = self._chapters([60], ['c']) + [
*self._chapters([60], ['c']),
self._sponsor_chapter(10, 20, 'intro'), self._sponsor_chapter(10, 20, 'intro'),
self._sponsor_chapter(30, 40, 'sponsor'), self._sponsor_chapter(30, 40, 'sponsor'),
self._sponsor_chapter(50, 60, 'outro'), self._sponsor_chapter(50, 60, 'outro'),
*cuts] ] + cuts
expected = self._chapters( expected = self._chapters(
[10, 20, 30], ['c', '[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits']) [10, 20, 30], ['c', '[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentSponsors(self): def test_remove_marked_arrange_sponsors_ChapterWithAdjacentSponsors(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'), self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'selfpromo'), self._sponsor_chapter(20, 30, 'selfpromo'),
self._sponsor_chapter(30, 40, 'interaction')] self._sponsor_chapter(30, 40, 'interaction')]
@ -225,8 +213,7 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithAdjacentCuts(self): def test_remove_marked_arrange_sponsors_ChapterWithAdjacentCuts(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 20, 'sponsor'), self._sponsor_chapter(10, 20, 'sponsor'),
self._sponsor_chapter(20, 30, 'interaction', remove=True), self._sponsor_chapter(20, 30, 'interaction', remove=True),
self._chapter(30, 40, remove=True), self._chapter(30, 40, remove=True),
@ -239,8 +226,7 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, expected, [self._chapter(20, 50, remove=True)]) chapters, expected, [self._chapter(20, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingSponsors(self): def test_remove_marked_arrange_sponsors_ChapterWithOverlappingSponsors(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 30, 'sponsor'), self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'selfpromo'), self._sponsor_chapter(20, 50, 'selfpromo'),
self._sponsor_chapter(40, 60, 'interaction')] self._sponsor_chapter(40, 60, 'interaction')]
@ -252,8 +238,7 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithOverlappingCuts(self): def test_remove_marked_arrange_sponsors_ChapterWithOverlappingCuts(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 30, 'sponsor', remove=True), self._sponsor_chapter(10, 30, 'sponsor', remove=True),
self._sponsor_chapter(20, 50, 'selfpromo', remove=True), self._sponsor_chapter(20, 50, 'selfpromo', remove=True),
self._sponsor_chapter(40, 60, 'interaction', remove=True)] self._sponsor_chapter(40, 60, 'interaction', remove=True)]
@ -261,8 +246,7 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, self._chapters([20], ['c']), [self._chapter(10, 60, remove=True)]) chapters, self._chapters([20], ['c']), [self._chapter(10, 60, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsors(self): def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsors(self):
chapters = [ chapters = self._chapters([170], ['c']) + [
*self._chapters([170], ['c']),
self._sponsor_chapter(0, 30, 'intro'), self._sponsor_chapter(0, 30, 'intro'),
self._sponsor_chapter(20, 50, 'sponsor'), self._sponsor_chapter(20, 50, 'sponsor'),
self._sponsor_chapter(40, 60, 'selfpromo'), self._sponsor_chapter(40, 60, 'selfpromo'),
@ -283,8 +267,7 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingCuts(self): def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingCuts(self):
chapters = [ chapters = self._chapters([170], ['c']) + [
*self._chapters([170], ['c']),
self._chapter(0, 30, remove=True), self._chapter(0, 30, remove=True),
self._sponsor_chapter(20, 50, 'sponsor', remove=True), self._sponsor_chapter(20, 50, 'sponsor', remove=True),
self._chapter(40, 60, remove=True), self._chapter(40, 60, remove=True),
@ -301,8 +284,7 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, self._chapters([20], ['c']), expected_cuts) chapters, self._chapters([20], ['c']), expected_cuts)
def test_remove_marked_arrange_sponsors_OverlappingSponsorsDifferentTitlesAfterCut(self): def test_remove_marked_arrange_sponsors_OverlappingSponsorsDifferentTitlesAfterCut(self):
chapters = [ chapters = self._chapters([60], ['c']) + [
*self._chapters([60], ['c']),
self._sponsor_chapter(10, 60, 'sponsor'), self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(10, 40, 'intro'), self._sponsor_chapter(10, 40, 'intro'),
self._sponsor_chapter(30, 50, 'interaction'), self._sponsor_chapter(30, 50, 'interaction'),
@ -315,8 +297,7 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, expected, [self._chapter(30, 50, remove=True)]) chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsNoLongerOverlapAfterCut(self): def test_remove_marked_arrange_sponsors_SponsorsNoLongerOverlapAfterCut(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 30, 'sponsor'), self._sponsor_chapter(10, 30, 'sponsor'),
self._sponsor_chapter(20, 50, 'interaction'), self._sponsor_chapter(20, 50, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True), self._sponsor_chapter(30, 50, 'selfpromo', remove=True),
@ -329,8 +310,7 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, expected, [self._chapter(30, 50, remove=True)]) chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_SponsorsStillOverlapAfterCut(self): def test_remove_marked_arrange_sponsors_SponsorsStillOverlapAfterCut(self):
chapters = [ chapters = self._chapters([70], ['c']) + [
*self._chapters([70], ['c']),
self._sponsor_chapter(10, 60, 'sponsor'), self._sponsor_chapter(10, 60, 'sponsor'),
self._sponsor_chapter(20, 60, 'interaction'), self._sponsor_chapter(20, 60, 'interaction'),
self._sponsor_chapter(30, 50, 'selfpromo', remove=True)] self._sponsor_chapter(30, 50, 'selfpromo', remove=True)]
@ -341,8 +321,7 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, expected, [self._chapter(30, 50, remove=True)]) chapters, expected, [self._chapter(30, 50, remove=True)])
def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsorsAndCuts(self): def test_remove_marked_arrange_sponsors_ChapterWithRunsOfOverlappingSponsorsAndCuts(self):
chapters = [ chapters = self._chapters([200], ['c']) + [
*self._chapters([200], ['c']),
self._sponsor_chapter(10, 40, 'sponsor'), self._sponsor_chapter(10, 40, 'sponsor'),
self._sponsor_chapter(10, 30, 'intro'), self._sponsor_chapter(10, 30, 'intro'),
self._chapter(20, 30, remove=True), self._chapter(20, 30, remove=True),
@ -368,9 +347,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, expected_cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, expected_cuts)
def test_remove_marked_arrange_sponsors_SponsorOverlapsMultipleChapters(self): def test_remove_marked_arrange_sponsors_SponsorOverlapsMultipleChapters(self):
chapters = [ chapters = (self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5'])
*self._chapters([20, 40, 60, 80, 100], ['c1', 'c2', 'c3', 'c4', 'c5']), + [self._sponsor_chapter(10, 90, 'sponsor')])
self._sponsor_chapter(10, 90, 'sponsor')]
expected = self._chapters([10, 90, 100], ['c1', '[SponsorBlock]: Sponsor', 'c5']) expected = self._chapters([10, 90, 100], ['c1', '[SponsorBlock]: Sponsor', 'c5'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@ -381,10 +359,9 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsWithinSomeChaptersAndOverlappingOthers(self): def test_remove_marked_arrange_sponsors_SponsorsWithinSomeChaptersAndOverlappingOthers(self):
chapters = [ chapters = (self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4'])
*self._chapters([10, 40, 60, 80], ['c1', 'c2', 'c3', 'c4']), + [self._sponsor_chapter(20, 30, 'sponsor'),
self._sponsor_chapter(20, 30, 'sponsor'), self._sponsor_chapter(50, 70, 'selfpromo')])
self._sponsor_chapter(50, 70, 'selfpromo')]
expected = self._chapters([10, 20, 30, 40, 50, 70, 80], expected = self._chapters([10, 20, 30, 40, 50, 70, 80],
['c1', 'c2', '[SponsorBlock]: Sponsor', 'c2', 'c3', ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c2', 'c3',
'[SponsorBlock]: Unpaid/Self Promotion', 'c4']) '[SponsorBlock]: Unpaid/Self Promotion', 'c4'])
@ -397,9 +374,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_ChaptersAfterLastSponsor(self): def test_remove_marked_arrange_sponsors_ChaptersAfterLastSponsor(self):
chapters = [ chapters = (self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4'])
*self._chapters([20, 40, 50, 60], ['c1', 'c2', 'c3', 'c4']), + [self._sponsor_chapter(10, 30, 'music_offtopic')])
self._sponsor_chapter(10, 30, 'music_offtopic')]
expected = self._chapters( expected = self._chapters(
[10, 30, 40, 50, 60], [10, 30, 40, 50, 60],
['c1', '[SponsorBlock]: Non-Music Section', 'c2', 'c3', 'c4']) ['c1', '[SponsorBlock]: Non-Music Section', 'c2', 'c3', 'c4'])
@ -412,9 +388,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorStartsAtChapterStart(self): def test_remove_marked_arrange_sponsors_SponsorStartsAtChapterStart(self):
chapters = [ chapters = (self._chapters([10, 20, 40], ['c1', 'c2', 'c3'])
*self._chapters([10, 20, 40], ['c1', 'c2', 'c3']), + [self._sponsor_chapter(20, 30, 'sponsor')])
self._sponsor_chapter(20, 30, 'sponsor')]
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3']) expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@ -425,9 +400,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorEndsAtChapterEnd(self): def test_remove_marked_arrange_sponsors_SponsorEndsAtChapterEnd(self):
chapters = [ chapters = (self._chapters([10, 30, 40], ['c1', 'c2', 'c3'])
*self._chapters([10, 30, 40], ['c1', 'c2', 'c3']), + [self._sponsor_chapter(20, 30, 'sponsor')])
self._sponsor_chapter(20, 30, 'sponsor')]
expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3']) expected = self._chapters([10, 20, 30, 40], ['c1', 'c2', '[SponsorBlock]: Sponsor', 'c3'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@ -438,9 +412,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorCoincidesWithChapters(self): def test_remove_marked_arrange_sponsors_SponsorCoincidesWithChapters(self):
chapters = [ chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
*self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']), + [self._sponsor_chapter(10, 30, 'sponsor')])
self._sponsor_chapter(10, 30, 'sponsor')]
expected = self._chapters([10, 30, 40], ['c1', '[SponsorBlock]: Sponsor', 'c4']) expected = self._chapters([10, 30, 40], ['c1', '[SponsorBlock]: Sponsor', 'c4'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@ -451,9 +424,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsAtVideoBoundaries(self): def test_remove_marked_arrange_sponsors_SponsorsAtVideoBoundaries(self):
chapters = [ chapters = (self._chapters([20, 40, 60], ['c1', 'c2', 'c3'])
*self._chapters([20, 40, 60], ['c1', 'c2', 'c3']), + [self._sponsor_chapter(0, 10, 'intro'), self._sponsor_chapter(50, 60, 'outro')])
self._sponsor_chapter(0, 10, 'intro'), self._sponsor_chapter(50, 60, 'outro')]
expected = self._chapters( expected = self._chapters(
[10, 20, 40, 50, 60], ['[SponsorBlock]: Intermission/Intro Animation', 'c1', 'c2', 'c3', '[SponsorBlock]: Endcards/Credits']) [10, 20, 40, 50, 60], ['[SponsorBlock]: Intermission/Intro Animation', 'c1', 'c2', 'c3', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@ -465,10 +437,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_SponsorsOverlapChaptersAtVideoBoundaries(self): def test_remove_marked_arrange_sponsors_SponsorsOverlapChaptersAtVideoBoundaries(self):
chapters = [ chapters = (self._chapters([10, 40, 50], ['c1', 'c2', 'c3'])
*self._chapters([10, 40, 50], ['c1', 'c2', 'c3']), + [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(30, 50, 'outro')])
self._sponsor_chapter(0, 20, 'intro'),
self._sponsor_chapter(30, 50, 'outro')]
expected = self._chapters( expected = self._chapters(
[20, 30, 50], ['[SponsorBlock]: Intermission/Intro Animation', 'c2', '[SponsorBlock]: Endcards/Credits']) [20, 30, 50], ['[SponsorBlock]: Intermission/Intro Animation', 'c2', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@ -480,10 +450,8 @@ class TestModifyChaptersPP(unittest.TestCase):
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, cuts)
def test_remove_marked_arrange_sponsors_EverythingSponsored(self): def test_remove_marked_arrange_sponsors_EverythingSponsored(self):
chapters = [ chapters = (self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4'])
*self._chapters([10, 20, 30, 40], ['c1', 'c2', 'c3', 'c4']), + [self._sponsor_chapter(0, 20, 'intro'), self._sponsor_chapter(20, 40, 'outro')])
self._sponsor_chapter(0, 20, 'intro'),
self._sponsor_chapter(20, 40, 'outro')]
expected = self._chapters([20, 40], ['[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits']) expected = self._chapters([20, 40], ['[SponsorBlock]: Intermission/Intro Animation', '[SponsorBlock]: Endcards/Credits'])
self._remove_marked_arrange_sponsors_test_impl(chapters, expected, []) self._remove_marked_arrange_sponsors_test_impl(chapters, expected, [])
@ -523,39 +491,38 @@ class TestModifyChaptersPP(unittest.TestCase):
chapters, self._chapters([2.5], ['c2']), cuts) chapters, self._chapters([2.5], ['c2']), cuts)
def test_remove_marked_arrange_sponsors_TinyChaptersResultingFromSponsorOverlapAreIgnored(self): def test_remove_marked_arrange_sponsors_TinyChaptersResultingFromSponsorOverlapAreIgnored(self):
chapters = [ chapters = self._chapters([1, 3, 4], ['c1', 'c2', 'c3']) + [
*self._chapters([1, 3, 4], ['c1', 'c2', 'c3']),
self._sponsor_chapter(1.5, 2.5, 'sponsor')] self._sponsor_chapter(1.5, 2.5, 'sponsor')]
self._remove_marked_arrange_sponsors_test_impl( self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 2.5, 4], ['c1', '[SponsorBlock]: Sponsor', 'c3']), []) chapters, self._chapters([1.5, 2.5, 4], ['c1', '[SponsorBlock]: Sponsor', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsOverlapsAreIgnored(self): def test_remove_marked_arrange_sponsors_TinySponsorsOverlapsAreIgnored(self):
chapters = [ chapters = self._chapters([2, 3, 5], ['c1', 'c2', 'c3']) + [
*self._chapters([2, 3, 5], ['c1', 'c2', 'c3']),
self._sponsor_chapter(1, 3, 'sponsor'), self._sponsor_chapter(1, 3, 'sponsor'),
self._sponsor_chapter(2.5, 4, 'selfpromo')] self._sponsor_chapter(2.5, 4, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl( self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1, 3, 4, 5], [ chapters, self._chapters([1, 3, 4, 5], [
'c1', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion', 'c3']), []) 'c1', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion', 'c3']), [])
def test_remove_marked_arrange_sponsors_TinySponsorsPrependedToTheNextSponsor(self): def test_remove_marked_arrange_sponsors_TinySponsorsPrependedToTheNextSponsor(self):
chapters = [ chapters = self._chapters([4], ['c']) + [
*self._chapters([4], ['c']),
self._sponsor_chapter(1.5, 2, 'sponsor'), self._sponsor_chapter(1.5, 2, 'sponsor'),
self._sponsor_chapter(2, 4, 'selfpromo')] self._sponsor_chapter(2, 4, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl( self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([1.5, 4], ['c', '[SponsorBlock]: Unpaid/Self Promotion']), []) chapters, self._chapters([1.5, 4], ['c', '[SponsorBlock]: Unpaid/Self Promotion']), [])
def test_remove_marked_arrange_sponsors_SmallestSponsorInTheOverlapGetsNamed(self): def test_remove_marked_arrange_sponsors_SmallestSponsorInTheOverlapGetsNamed(self):
self._pp._sponsorblock_chapter_title = '[SponsorBlock]: %(name)s' self._pp._sponsorblock_chapter_title = '[SponsorBlock]: %(name)s'
chapters = [ chapters = self._chapters([10], ['c']) + [
*self._chapters([10], ['c']),
self._sponsor_chapter(2, 8, 'sponsor'), self._sponsor_chapter(2, 8, 'sponsor'),
self._sponsor_chapter(4, 6, 'selfpromo')] self._sponsor_chapter(4, 6, 'selfpromo')
]
self._remove_marked_arrange_sponsors_test_impl( self._remove_marked_arrange_sponsors_test_impl(
chapters, self._chapters([2, 4, 6, 8, 10], [ chapters, self._chapters([2, 4, 6, 8, 10], [
'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion', 'c', '[SponsorBlock]: Sponsor', '[SponsorBlock]: Unpaid/Self Promotion',
'[SponsorBlock]: Sponsor', 'c', '[SponsorBlock]: Sponsor', 'c'
]), []) ]), [])
def test_make_concat_opts_CommonCase(self): def test_make_concat_opts_CommonCase(self):

@ -1,471 +1,113 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# Allow direct execution # Allow direct execution
import os import os
import sys import sys
import threading
import unittest import unittest
import pytest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import abc
import contextlib
import enum
import functools
import http.server
import json
import random
import socket
import struct
import time
from socketserver import (
BaseRequestHandler,
StreamRequestHandler,
ThreadingTCPServer,
)
from test.helper import http_server_port, verify_address_availability
from yt_dlp.networking import Request
from yt_dlp.networking.exceptions import ProxyError, TransportError
from yt_dlp.socks import (
SOCKS4_REPLY_VERSION,
SOCKS4_VERSION,
SOCKS5_USER_AUTH_SUCCESS,
SOCKS5_USER_AUTH_VERSION,
SOCKS5_VERSION,
Socks5AddressType,
Socks5Auth,
)
SOCKS5_USER_AUTH_FAILURE = 0x1
class Socks4CD(enum.IntEnum):
REQUEST_GRANTED = 90
REQUEST_REJECTED_OR_FAILED = 91
REQUEST_REJECTED_CANNOT_CONNECT_TO_IDENTD = 92
REQUEST_REJECTED_DIFFERENT_USERID = 93
class Socks5Reply(enum.IntEnum):
SUCCEEDED = 0x0
GENERAL_FAILURE = 0x1
CONNECTION_NOT_ALLOWED = 0x2
NETWORK_UNREACHABLE = 0x3
HOST_UNREACHABLE = 0x4
CONNECTION_REFUSED = 0x5
TTL_EXPIRED = 0x6
COMMAND_NOT_SUPPORTED = 0x7
ADDRESS_TYPE_NOT_SUPPORTED = 0x8
class SocksTestRequestHandler(BaseRequestHandler):
def __init__(self, *args, socks_info=None, **kwargs):
self.socks_info = socks_info
super().__init__(*args, **kwargs)
class SocksProxyHandler(BaseRequestHandler):
def __init__(self, request_handler_class, socks_server_kwargs, *args, **kwargs):
self.socks_kwargs = socks_server_kwargs or {}
self.request_handler_class = request_handler_class
super().__init__(*args, **kwargs)
class Socks5ProxyHandler(StreamRequestHandler, SocksProxyHandler):
# SOCKS5 protocol https://tools.ietf.org/html/rfc1928
# SOCKS5 username/password authentication https://tools.ietf.org/html/rfc1929
def handle(self):
sleep = self.socks_kwargs.get('sleep')
if sleep:
time.sleep(sleep)
version, nmethods = self.connection.recv(2)
assert version == SOCKS5_VERSION
methods = list(self.connection.recv(nmethods))
auth = self.socks_kwargs.get('auth') import random
import subprocess
if auth is not None and Socks5Auth.AUTH_USER_PASS not in methods: import urllib.request
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_NO_ACCEPTABLE))
self.server.close_request(self.request)
return
elif Socks5Auth.AUTH_USER_PASS in methods: from test.helper import FakeYDL, get_params, is_download_test
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_USER_PASS))
_, user_len = struct.unpack('!BB', self.connection.recv(2))
username = self.connection.recv(user_len).decode()
pass_len = ord(self.connection.recv(1))
password = self.connection.recv(pass_len).decode()
if username == auth[0] and password == auth[1]: @is_download_test
self.connection.sendall(struct.pack('!BB', SOCKS5_USER_AUTH_VERSION, SOCKS5_USER_AUTH_SUCCESS)) class TestMultipleSocks(unittest.TestCase):
else: @staticmethod
self.connection.sendall(struct.pack('!BB', SOCKS5_USER_AUTH_VERSION, SOCKS5_USER_AUTH_FAILURE)) def _check_params(attrs):
self.server.close_request(self.request) params = get_params()
for attr in attrs:
if attr not in params:
print('Missing %s. Skipping.' % attr)
return return
return params
elif Socks5Auth.AUTH_NONE in methods: def test_proxy_http(self):
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_NONE)) params = self._check_params(['primary_proxy', 'primary_server_ip'])
else: if params is None:
self.connection.sendall(struct.pack('!BB', SOCKS5_VERSION, Socks5Auth.AUTH_NO_ACCEPTABLE))
self.server.close_request(self.request)
return return
ydl = FakeYDL({
version, command, _, address_type = struct.unpack('!BBBB', self.connection.recv(4)) 'proxy': params['primary_proxy']
socks_info = { })
'version': version, self.assertEqual(
'auth_methods': methods, ydl.urlopen('http://yt-dl.org/ip').read().decode(),
'command': command, params['primary_server_ip'])
'client_address': self.client_address,
'ipv4_address': None, def test_proxy_https(self):
'domain_address': None, params = self._check_params(['primary_proxy', 'primary_server_ip'])
'ipv6_address': None, if params is None:
}
if address_type == Socks5AddressType.ATYP_IPV4:
socks_info['ipv4_address'] = socket.inet_ntoa(self.connection.recv(4))
elif address_type == Socks5AddressType.ATYP_DOMAINNAME:
socks_info['domain_address'] = self.connection.recv(ord(self.connection.recv(1))).decode()
elif address_type == Socks5AddressType.ATYP_IPV6:
socks_info['ipv6_address'] = socket.inet_ntop(socket.AF_INET6, self.connection.recv(16))
else:
self.server.close_request(self.request)
socks_info['port'] = struct.unpack('!H', self.connection.recv(2))[0]
# dummy response, the returned IP is just a placeholder
self.connection.sendall(struct.pack(
'!BBBBIH', SOCKS5_VERSION, self.socks_kwargs.get('reply', Socks5Reply.SUCCEEDED), 0x0, 0x1, 0x7f000001, 40000))
self.request_handler_class(self.request, self.client_address, self.server, socks_info=socks_info)
class Socks4ProxyHandler(StreamRequestHandler, SocksProxyHandler):
# SOCKS4 protocol http://www.openssh.com/txt/socks4.protocol
# SOCKS4A protocol http://www.openssh.com/txt/socks4a.protocol
def _read_until_null(self):
return b''.join(iter(functools.partial(self.connection.recv, 1), b'\x00'))
def handle(self):
sleep = self.socks_kwargs.get('sleep')
if sleep:
time.sleep(sleep)
socks_info = {
'version': SOCKS4_VERSION,
'command': None,
'client_address': self.client_address,
'ipv4_address': None,
'port': None,
'domain_address': None,
}
version, command, dest_port, dest_ip = struct.unpack('!BBHI', self.connection.recv(8))
socks_info['port'] = dest_port
socks_info['command'] = command
if version != SOCKS4_VERSION:
self.server.close_request(self.request)
return return
use_remote_dns = False ydl = FakeYDL({
if 0x0 < dest_ip <= 0xFF: 'proxy': params['primary_proxy']
use_remote_dns = True })
else: self.assertEqual(
socks_info['ipv4_address'] = socket.inet_ntoa(struct.pack('!I', dest_ip)) ydl.urlopen('https://yt-dl.org/ip').read().decode(),
params['primary_server_ip'])
user_id = self._read_until_null().decode()
if user_id != (self.socks_kwargs.get('user_id') or ''): def test_secondary_proxy_http(self):
self.connection.sendall(struct.pack( params = self._check_params(['secondary_proxy', 'secondary_server_ip'])
'!BBHI', SOCKS4_REPLY_VERSION, Socks4CD.REQUEST_REJECTED_DIFFERENT_USERID, 0x00, 0x00000000)) if params is None:
self.server.close_request(self.request)
return return
ydl = FakeYDL()
req = urllib.request.Request('http://yt-dl.org/ip')
req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
self.assertEqual(
ydl.urlopen(req).read().decode(),
params['secondary_server_ip'])
def test_secondary_proxy_https(self):
params = self._check_params(['secondary_proxy', 'secondary_server_ip'])
if params is None:
return
ydl = FakeYDL()
req = urllib.request.Request('https://yt-dl.org/ip')
req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
self.assertEqual(
ydl.urlopen(req).read().decode(),
params['secondary_server_ip'])
if use_remote_dns:
socks_info['domain_address'] = self._read_until_null().decode()
# dummy response, the returned IP is just a placeholder
self.connection.sendall(
struct.pack(
'!BBHI', SOCKS4_REPLY_VERSION,
self.socks_kwargs.get('cd_reply', Socks4CD.REQUEST_GRANTED), 40000, 0x7f000001))
self.request_handler_class(self.request, self.client_address, self.server, socks_info=socks_info)
class IPv6ThreadingTCPServer(ThreadingTCPServer):
address_family = socket.AF_INET6
class SocksHTTPTestRequestHandler(http.server.BaseHTTPRequestHandler, SocksTestRequestHandler):
def do_GET(self):
if self.path == '/socks_info':
payload = json.dumps(self.socks_info.copy())
self.send_response(200)
self.send_header('Content-Type', 'application/json; charset=utf-8')
self.send_header('Content-Length', str(len(payload)))
self.end_headers()
self.wfile.write(payload.encode())
class SocksWebSocketTestRequestHandler(SocksTestRequestHandler):
def handle(self):
import websockets.sync.server
protocol = websockets.ServerProtocol()
connection = websockets.sync.server.ServerConnection(socket=self.request, protocol=protocol, close_timeout=0)
connection.handshake()
connection.send(json.dumps(self.socks_info))
connection.close()
@contextlib.contextmanager
def socks_server(socks_server_class, request_handler, bind_ip=None, **socks_server_kwargs):
server = server_thread = None
try:
bind_address = bind_ip or '127.0.0.1'
server_type = ThreadingTCPServer if '.' in bind_address else IPv6ThreadingTCPServer
server = server_type(
(bind_address, 0), functools.partial(socks_server_class, request_handler, socks_server_kwargs))
server_port = http_server_port(server)
server_thread = threading.Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()
if '.' not in bind_address:
yield f'[{bind_address}]:{server_port}'
else:
yield f'{bind_address}:{server_port}'
finally:
server.shutdown()
server.server_close()
server_thread.join(2.0)
class SocksProxyTestContext(abc.ABC):
REQUEST_HANDLER_CLASS = None
def socks_server(self, server_class, *args, **kwargs):
return socks_server(server_class, self.REQUEST_HANDLER_CLASS, *args, **kwargs)
@abc.abstractmethod
def socks_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs) -> dict:
"""return a dict of socks_info"""
class HTTPSocksTestProxyContext(SocksProxyTestContext):
REQUEST_HANDLER_CLASS = SocksHTTPTestRequestHandler
def socks_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'http://{target_domain or "127.0.0.1"}:{target_port or "40000"}/socks_info', **req_kwargs)
handler.validate(request)
return json.loads(handler.send(request).read().decode())
class WebSocketSocksTestProxyContext(SocksProxyTestContext):
REQUEST_HANDLER_CLASS = SocksWebSocketTestRequestHandler
def socks_info_request(self, handler, target_domain=None, target_port=None, **req_kwargs):
request = Request(f'ws://{target_domain or "127.0.0.1"}:{target_port or "40000"}', **req_kwargs)
handler.validate(request)
ws = handler.send(request)
ws.send('socks_info')
socks_info = ws.recv()
ws.close()
return json.loads(socks_info)
CTX_MAP = {
'http': HTTPSocksTestProxyContext,
'ws': WebSocketSocksTestProxyContext,
}
@pytest.fixture(scope='module')
def ctx(request):
return CTX_MAP[request.param]()
@pytest.mark.parametrize(
'handler,ctx', [
('Urllib', 'http'),
('Requests', 'http'),
('Websockets', 'ws'),
('CurlCFFI', 'http'),
], indirect=True)
class TestSocks4Proxy:
def test_socks4_no_auth(self, handler, ctx):
with handler() as rh:
with ctx.socks_server(Socks4ProxyHandler) as server_address:
response = ctx.socks_info_request(
rh, proxies={'all': f'socks4://{server_address}'})
assert response['version'] == 4
def test_socks4_auth(self, handler, ctx):
with handler() as rh:
with ctx.socks_server(Socks4ProxyHandler, user_id='user') as server_address:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh, proxies={'all': f'socks4://{server_address}'})
response = ctx.socks_info_request(
rh, proxies={'all': f'socks4://user:@{server_address}'})
assert response['version'] == 4
def test_socks4a_ipv4_target(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler) as server_address:
with handler(proxies={'all': f'socks4a://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['version'] == 4
assert (response['ipv4_address'] == '127.0.0.1') != (response['domain_address'] == '127.0.0.1')
def test_socks4a_domain_target(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler) as server_address:
with handler(proxies={'all': f'socks4a://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='localhost')
assert response['version'] == 4
assert response['ipv4_address'] is None
assert response['domain_address'] == 'localhost'
def test_ipv4_client_source_address(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={'all': f'socks4://{server_address}'},
source_address=source_address) as rh:
response = ctx.socks_info_request(rh)
assert response['client_address'][0] == source_address
assert response['version'] == 4
@pytest.mark.parametrize('reply_code', [
Socks4CD.REQUEST_REJECTED_OR_FAILED,
Socks4CD.REQUEST_REJECTED_CANNOT_CONNECT_TO_IDENTD,
Socks4CD.REQUEST_REJECTED_DIFFERENT_USERID,
])
def test_socks4_errors(self, handler, ctx, reply_code):
with ctx.socks_server(Socks4ProxyHandler, cd_reply=reply_code) as server_address:
with handler(proxies={'all': f'socks4://{server_address}'}) as rh:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh)
def test_ipv6_socks4_proxy(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler, bind_ip='::1') as server_address:
with handler(proxies={'all': f'socks4://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['client_address'][0] == '::1'
assert response['ipv4_address'] == '127.0.0.1'
assert response['version'] == 4
def test_timeout(self, handler, ctx):
with ctx.socks_server(Socks4ProxyHandler, sleep=2) as server_address:
with handler(proxies={'all': f'socks4://{server_address}'}, timeout=0.5) as rh:
with pytest.raises(TransportError):
ctx.socks_info_request(rh)
@pytest.mark.parametrize(
'handler,ctx', [
('Urllib', 'http'),
('Requests', 'http'),
('Websockets', 'ws'),
('CurlCFFI', 'http'),
], indirect=True)
class TestSocks5Proxy:
def test_socks5_no_auth(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
response = ctx.socks_info_request(rh)
assert response['auth_methods'] == [0x0]
assert response['version'] == 5
def test_socks5_user_pass(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler, auth=('test', 'testpass')) as server_address:
with handler() as rh:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh, proxies={'all': f'socks5://{server_address}'})
response = ctx.socks_info_request(
rh, proxies={'all': f'socks5://test:testpass@{server_address}'})
assert response['auth_methods'] == [Socks5Auth.AUTH_NONE, Socks5Auth.AUTH_USER_PASS] @is_download_test
assert response['version'] == 5 class TestSocks(unittest.TestCase):
_SKIP_SOCKS_TEST = True
def test_socks5_ipv4_target(self, handler, ctx): def setUp(self):
with ctx.socks_server(Socks5ProxyHandler) as server_address: if self._SKIP_SOCKS_TEST:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh: return
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['ipv4_address'] == '127.0.0.1'
assert response['version'] == 5
def test_socks5_domain_target(self, handler, ctx): self.port = random.randint(20000, 30000)
with ctx.socks_server(Socks5ProxyHandler) as server_address: self.server_process = subprocess.Popen([
with handler(proxies={'all': f'socks5://{server_address}'}) as rh: 'srelay', '-f', '-i', '127.0.0.1:%d' % self.port],
response = ctx.socks_info_request(rh, target_domain='localhost') stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert (response['ipv4_address'] == '127.0.0.1') != (response['ipv6_address'] == '::1')
assert response['version'] == 5
def test_socks5h_domain_target(self, handler, ctx): def tearDown(self):
with ctx.socks_server(Socks5ProxyHandler) as server_address: if self._SKIP_SOCKS_TEST:
with handler(proxies={'all': f'socks5h://{server_address}'}) as rh: return
response = ctx.socks_info_request(rh, target_domain='localhost')
assert response['ipv4_address'] is None
assert response['domain_address'] == 'localhost'
assert response['version'] == 5
def test_socks5h_ip_target(self, handler, ctx): self.server_process.terminate()
with ctx.socks_server(Socks5ProxyHandler) as server_address: self.server_process.communicate()
with handler(proxies={'all': f'socks5h://{server_address}'}) as rh:
response = ctx.socks_info_request(rh, target_domain='127.0.0.1')
assert response['ipv4_address'] == '127.0.0.1'
assert response['domain_address'] is None
assert response['version'] == 5
def test_socks5_ipv6_destination(self, handler, ctx): def _get_ip(self, protocol):
with ctx.socks_server(Socks5ProxyHandler) as server_address: if self._SKIP_SOCKS_TEST:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh: return '127.0.0.1'
response = ctx.socks_info_request(rh, target_domain='[::1]')
assert response['ipv6_address'] == '::1'
assert response['version'] == 5
def test_ipv6_socks5_proxy(self, handler, ctx): ydl = FakeYDL({
with ctx.socks_server(Socks5ProxyHandler, bind_ip='::1') as server_address: 'proxy': '%s://127.0.0.1:%d' % (protocol, self.port),
with handler(proxies={'all': f'socks5://{server_address}'}) as rh: })
response = ctx.socks_info_request(rh, target_domain='127.0.0.1') return ydl.urlopen('http://yt-dl.org/ip').read().decode()
assert response['client_address'][0] == '::1'
assert response['ipv4_address'] == '127.0.0.1'
assert response['version'] == 5
# XXX: is there any feasible way of testing IPv6 source addresses? def test_socks4(self):
# Same would go for non-proxy source_address test... self.assertTrue(isinstance(self._get_ip('socks4'), str))
def test_ipv4_client_source_address(self, handler, ctx):
with ctx.socks_server(Socks5ProxyHandler) as server_address:
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(proxies={'all': f'socks5://{server_address}'}, source_address=source_address) as rh:
response = ctx.socks_info_request(rh)
assert response['client_address'][0] == source_address
assert response['version'] == 5
@pytest.mark.parametrize('reply_code', [ def test_socks4a(self):
Socks5Reply.GENERAL_FAILURE, self.assertTrue(isinstance(self._get_ip('socks4a'), str))
Socks5Reply.CONNECTION_NOT_ALLOWED,
Socks5Reply.NETWORK_UNREACHABLE,
Socks5Reply.HOST_UNREACHABLE,
Socks5Reply.CONNECTION_REFUSED,
Socks5Reply.TTL_EXPIRED,
Socks5Reply.COMMAND_NOT_SUPPORTED,
Socks5Reply.ADDRESS_TYPE_NOT_SUPPORTED,
])
def test_socks5_errors(self, handler, ctx, reply_code):
with ctx.socks_server(Socks5ProxyHandler, reply=reply_code) as server_address:
with handler(proxies={'all': f'socks5://{server_address}'}) as rh:
with pytest.raises(ProxyError):
ctx.socks_info_request(rh)
def test_timeout(self, handler, ctx): def test_socks5(self):
with ctx.socks_server(Socks5ProxyHandler, sleep=2) as server_address: self.assertTrue(isinstance(self._get_ip('socks5'), str))
with handler(proxies={'all': f'socks5://{server_address}'}, timeout=1) as rh:
with pytest.raises(TransportError):
ctx.socks_info_request(rh)
if __name__ == '__main__': if __name__ == '__main__':

@ -40,11 +40,12 @@ class BaseTestSubtitles(unittest.TestCase):
self.ie = self.IE() self.ie = self.IE()
self.DL.add_info_extractor(self.ie) self.DL.add_info_extractor(self.ie)
if not self.IE.working(): if not self.IE.working():
print(f'Skipping: {self.IE.ie_key()} marked as not _WORKING') print('Skipping: %s marked as not _WORKING' % self.IE.ie_key())
self.skipTest('IE marked as not _WORKING') self.skipTest('IE marked as not _WORKING')
def getInfoDict(self): def getInfoDict(self):
return self.DL.extract_info(self.url, download=False) info_dict = self.DL.extract_info(self.url, download=False)
return info_dict
def getSubtitles(self): def getSubtitles(self):
info_dict = self.getInfoDict() info_dict = self.getInfoDict()
@ -86,7 +87,7 @@ class TestYoutubeSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), 'ae1bd34126571a77aabd4d276b28044d') self.assertEqual(md5(subtitles['en']), 'ae1bd34126571a77aabd4d276b28044d')
self.assertEqual(md5(subtitles['it']), '0e0b667ba68411d88fd1c5f4f4eab2f9') self.assertEqual(md5(subtitles['it']), '0e0b667ba68411d88fd1c5f4f4eab2f9')
for lang in ['fr', 'de']: for lang in ['fr', 'de']:
self.assertTrue(subtitles.get(lang) is not None, f'Subtitles for \'{lang}\' not extracted') self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
def _test_subtitles_format(self, fmt, md5_hash, lang='en'): def _test_subtitles_format(self, fmt, md5_hash, lang='en'):
self.DL.params['writesubtitles'] = True self.DL.params['writesubtitles'] = True
@ -156,7 +157,7 @@ class TestDailymotionSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '976553874490cba125086bbfea3ff76f') self.assertEqual(md5(subtitles['en']), '976553874490cba125086bbfea3ff76f')
self.assertEqual(md5(subtitles['fr']), '594564ec7d588942e384e920e5341792') self.assertEqual(md5(subtitles['fr']), '594564ec7d588942e384e920e5341792')
for lang in ['es', 'fr', 'de']: for lang in ['es', 'fr', 'de']:
self.assertTrue(subtitles.get(lang) is not None, f'Subtitles for \'{lang}\' not extracted') self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
def test_nosubtitles(self): def test_nosubtitles(self):
self.DL.expect_warning('video doesn\'t have subtitles') self.DL.expect_warning('video doesn\'t have subtitles')
@ -181,7 +182,7 @@ class TestTedSubtitles(BaseTestSubtitles):
self.assertEqual(md5(subtitles['en']), '4262c1665ff928a2dada178f62cb8d14') self.assertEqual(md5(subtitles['en']), '4262c1665ff928a2dada178f62cb8d14')
self.assertEqual(md5(subtitles['fr']), '66a63f7f42c97a50f8c0e90bc7797bb5') self.assertEqual(md5(subtitles['fr']), '66a63f7f42c97a50f8c0e90bc7797bb5')
for lang in ['es', 'fr', 'de']: for lang in ['es', 'fr', 'de']:
self.assertTrue(subtitles.get(lang) is not None, f'Subtitles for \'{lang}\' not extracted') self.assertTrue(subtitles.get(lang) is not None, 'Subtitles for \'%s\' not extracted' % lang)
@is_download_test @is_download_test

@ -1,444 +0,0 @@
import http.cookies
import re
import xml.etree.ElementTree
import pytest
from yt_dlp.utils import dict_get, int_or_none, str_or_none
from yt_dlp.utils.traversal import traverse_obj
_TEST_DATA = {
100: 100,
1.2: 1.2,
'str': 'str',
'None': None,
'...': ...,
'urls': [
{'index': 0, 'url': 'https://www.example.com/0'},
{'index': 1, 'url': 'https://www.example.com/1'},
],
'data': (
{'index': 2},
{'index': 3},
),
'dict': {},
}
class TestTraversal:
def test_traversal_base(self):
assert traverse_obj(_TEST_DATA, ('str',)) == 'str', \
'allow tuple path'
assert traverse_obj(_TEST_DATA, ['str']) == 'str', \
'allow list path'
assert traverse_obj(_TEST_DATA, (value for value in ('str',))) == 'str', \
'allow iterable path'
assert traverse_obj(_TEST_DATA, 'str') == 'str', \
'single items should be treated as a path'
assert traverse_obj(_TEST_DATA, 100) == 100, \
'allow int path'
assert traverse_obj(_TEST_DATA, 1.2) == 1.2, \
'allow float path'
assert traverse_obj(_TEST_DATA, None) == _TEST_DATA, \
'`None` should not perform any modification'
def test_traversal_ellipsis(self):
assert traverse_obj(_TEST_DATA, ...) == [x for x in _TEST_DATA.values() if x not in (None, {})], \
'`...` should give all non discarded values'
assert traverse_obj(_TEST_DATA, ('urls', 0, ...)) == list(_TEST_DATA['urls'][0].values()), \
'`...` selection for dicts should select all values'
assert traverse_obj(_TEST_DATA, (..., ..., 'url')) == ['https://www.example.com/0', 'https://www.example.com/1'], \
'nested `...` queries should work'
assert traverse_obj(_TEST_DATA, (..., ..., 'index')) == list(range(4)), \
'`...` query result should be flattened'
assert traverse_obj(iter(range(4)), ...) == list(range(4)), \
'`...` should accept iterables'
def test_traversal_function(self):
filter_func = lambda x, y: x == 'urls' and isinstance(y, list)
assert traverse_obj(_TEST_DATA, filter_func) == [_TEST_DATA['urls']], \
'function as query key should perform a filter based on (key, value)'
assert traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)) == ['str'], \
'exceptions in the query function should be catched'
assert traverse_obj(iter(range(4)), lambda _, x: x % 2 == 0) == [0, 2], \
'function key should accept iterables'
# Wrong function signature should raise (debug mode)
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, lambda a: ...)
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, lambda a, b, c: ...)
def test_traversal_set(self):
# transformation/type, like `expected_type`
assert traverse_obj(_TEST_DATA, (..., {str.upper})) == ['STR'], \
'Function in set should be a transformation'
assert traverse_obj(_TEST_DATA, (..., {str})) == ['str'], \
'Type in set should be a type filter'
assert traverse_obj(_TEST_DATA, (..., {str, int})) == [100, 'str'], \
'Multiple types in set should be a type filter'
assert traverse_obj(_TEST_DATA, {dict}) == _TEST_DATA, \
'A single set should be wrapped into a path'
assert traverse_obj(_TEST_DATA, (..., {str.upper})) == ['STR'], \
'Transformation function should not raise'
expected = [x for x in map(str_or_none, _TEST_DATA.values()) if x is not None]
assert traverse_obj(_TEST_DATA, (..., {str_or_none})) == expected, \
'Function in set should be a transformation'
assert traverse_obj(_TEST_DATA, ('fail', {lambda _: 'const'})) == 'const', \
'Function in set should always be called'
# Sets with length < 1 or > 1 not including only types should raise
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, set())
with pytest.raises(Exception):
traverse_obj(_TEST_DATA, {str.upper, str})
def test_traversal_slice(self):
_SLICE_DATA = [0, 1, 2, 3, 4]
assert traverse_obj(_TEST_DATA, ('dict', slice(1))) is None, \
'slice on a dictionary should not throw'
assert traverse_obj(_SLICE_DATA, slice(1)) == _SLICE_DATA[:1], \
'slice key should apply slice to sequence'
assert traverse_obj(_SLICE_DATA, slice(1, 2)) == _SLICE_DATA[1:2], \
'slice key should apply slice to sequence'
assert traverse_obj(_SLICE_DATA, slice(1, 4, 2)) == _SLICE_DATA[1:4:2], \
'slice key should apply slice to sequence'
def test_traversal_alternatives(self):
assert traverse_obj(_TEST_DATA, 'fail', 'str') == 'str', \
'multiple `paths` should be treated as alternative paths'
assert traverse_obj(_TEST_DATA, 'str', 100) == 'str', \
'alternatives should exit early'
assert traverse_obj(_TEST_DATA, 'fail', 'fail') is None, \
'alternatives should return `default` if exhausted'
assert traverse_obj(_TEST_DATA, (..., 'fail'), 100) == 100, \
'alternatives should track their own branching return'
assert traverse_obj(_TEST_DATA, ('dict', ...), ('data', ...)) == list(_TEST_DATA['data']), \
'alternatives on empty objects should search further'
def test_traversal_branching_nesting(self):
assert traverse_obj(_TEST_DATA, ('urls', (3, 0), 'url')) == ['https://www.example.com/0'], \
'tuple as key should be treated as branches'
assert traverse_obj(_TEST_DATA, ('urls', [3, 0], 'url')) == ['https://www.example.com/0'], \
'list as key should be treated as branches'
assert traverse_obj(_TEST_DATA, ('urls', ((1, 'fail'), (0, 'url')))) == ['https://www.example.com/0'], \
'double nesting in path should be treated as paths'
assert traverse_obj(['0', [1, 2]], [(0, 1), 0]) == [1], \
'do not fail early on branching'
expected = ['https://www.example.com/0', 'https://www.example.com/1']
assert traverse_obj(_TEST_DATA, ('urls', ((0, ('fail', 'url')), (1, 'url')))) == expected, \
'tripple nesting in path should be treated as branches'
assert traverse_obj(_TEST_DATA, ('urls', ('fail', (..., 'url')))) == expected, \
'ellipsis as branch path start gets flattened'
def test_traversal_dict(self):
assert traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}) == {0: 100, 1: 1.2}, \
'dict key should result in a dict with the same keys'
expected = {0: 'https://www.example.com/0'}
assert traverse_obj(_TEST_DATA, {0: ('urls', 0, 'url')}) == expected, \
'dict key should allow paths'
expected = {0: ['https://www.example.com/0']}
assert traverse_obj(_TEST_DATA, {0: ('urls', (3, 0), 'url')}) == expected, \
'tuple in dict path should be treated as branches'
assert traverse_obj(_TEST_DATA, {0: ('urls', ((1, 'fail'), (0, 'url')))}) == expected, \
'double nesting in dict path should be treated as paths'
expected = {0: ['https://www.example.com/1', 'https://www.example.com/0']}
assert traverse_obj(_TEST_DATA, {0: ('urls', ((1, ('fail', 'url')), (0, 'url')))}) == expected, \
'tripple nesting in dict path should be treated as branches'
assert traverse_obj(_TEST_DATA, {0: 'fail'}) == {}, \
'remove `None` values when top level dict key fails'
assert traverse_obj(_TEST_DATA, {0: 'fail'}, default=...) == {0: ...}, \
'use `default` if key fails and `default`'
assert traverse_obj(_TEST_DATA, {0: 'dict'}) == {}, \
'remove empty values when dict key'
assert traverse_obj(_TEST_DATA, {0: 'dict'}, default=...) == {0: ...}, \
'use `default` when dict key and `default`'
assert traverse_obj(_TEST_DATA, {0: {0: 'fail'}}) == {}, \
'remove empty values when nested dict key fails'
assert traverse_obj(None, {0: 'fail'}) == {}, \
'default to dict if pruned'
assert traverse_obj(None, {0: 'fail'}, default=...) == {0: ...}, \
'default to dict if pruned and default is given'
assert traverse_obj(_TEST_DATA, {0: {0: 'fail'}}, default=...) == {0: {0: ...}}, \
'use nested `default` when nested dict key fails and `default`'
assert traverse_obj(_TEST_DATA, {0: ('dict', ...)}) == {}, \
'remove key if branch in dict key not successful'
def test_traversal_default(self):
_DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
assert traverse_obj(_DEFAULT_DATA, 'fail') is None, \
'default value should be `None`'
assert traverse_obj(_DEFAULT_DATA, 'fail', 'fail', default=...) == ..., \
'chained fails should result in default'
assert traverse_obj(_DEFAULT_DATA, 'None', 'int') == 0, \
'should not short cirquit on `None`'
assert traverse_obj(_DEFAULT_DATA, 'fail', default=1) == 1, \
'invalid dict key should result in `default`'
assert traverse_obj(_DEFAULT_DATA, 'None', default=1) == 1, \
'`None` is a deliberate sentinel and should become `default`'
assert traverse_obj(_DEFAULT_DATA, ('list', 10)) is None, \
'`IndexError` should result in `default`'
assert traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=1) == 1, \
'if branched but not successful return `default` if defined, not `[]`'
assert traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=None) is None, \
'if branched but not successful return `default` even if `default` is `None`'
assert traverse_obj(_DEFAULT_DATA, (..., 'fail')) == [], \
'if branched but not successful return `[]`, not `default`'
assert traverse_obj(_DEFAULT_DATA, ('list', ...)) == [], \
'if branched but object is empty return `[]`, not `default`'
assert traverse_obj(None, ...) == [], \
'if branched but object is `None` return `[]`, not `default`'
assert traverse_obj({0: None}, (0, ...)) == [], \
'if branched but state is `None` return `[]`, not `default`'
@pytest.mark.parametrize('path', [
('fail', ...),
(..., 'fail'),
100 * ('fail',) + (...,),
(...,) + 100 * ('fail',),
])
def test_traversal_branching(self, path):
assert traverse_obj({}, path) == [], \
'if branched but state is `None`, return `[]` (not `default`)'
assert traverse_obj({}, 'fail', path) == [], \
'if branching in last alternative and previous did not match, return `[]` (not `default`)'
assert traverse_obj({0: 'x'}, 0, path) == 'x', \
'if branching in last alternative and previous did match, return single value'
assert traverse_obj({0: 'x'}, path, 0) == 'x', \
'if branching in first alternative and non-branching path does match, return single value'
assert traverse_obj({}, path, 'fail') is None, \
'if branching in first alternative and non-branching path does not match, return `default`'
def test_traversal_expected_type(self):
_EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
assert traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str) == 'str', \
'accept matching `expected_type` type'
assert traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int) is None, \
'reject non matching `expected_type` type'
assert traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)) == '0', \
'transform type using type function'
assert traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=lambda _: 1 / 0) is None, \
'wrap expected_type fuction in try_call'
assert traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str) == ['str'], \
'eliminate items that expected_type fails on'
assert traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}, expected_type=int) == {0: 100}, \
'type as expected_type should filter dict values'
assert traverse_obj(_TEST_DATA, {0: 100, 1: 1.2, 2: 'None'}, expected_type=str_or_none) == {0: '100', 1: '1.2'}, \
'function as expected_type should transform dict values'
assert traverse_obj(_TEST_DATA, ({0: 1.2}, 0, {int_or_none}), expected_type=int) == 1, \
'expected_type should not filter non final dict values'
assert traverse_obj(_TEST_DATA, {0: {0: 100, 1: 'str'}}, expected_type=int) == {0: {0: 100}}, \
'expected_type should transform deep dict values'
assert traverse_obj(_TEST_DATA, [({0: '...'}, {0: '...'})], expected_type=type(...)) == [{0: ...}, {0: ...}], \
'expected_type should transform branched dict values'
assert traverse_obj({1: {3: 4}}, [(1, 2), 3], expected_type=int) == [4], \
'expected_type regression for type matching in tuple branching'
assert traverse_obj(_TEST_DATA, ['data', ...], expected_type=int) == [], \
'expected_type regression for type matching in dict result'
def test_traversal_get_all(self):
_GET_ALL_DATA = {'key': [0, 1, 2]}
assert traverse_obj(_GET_ALL_DATA, ('key', ...), get_all=False) == 0, \
'if not `get_all`, return only first matching value'
assert traverse_obj(_GET_ALL_DATA, ..., get_all=False) == [0, 1, 2], \
'do not overflatten if not `get_all`'
def test_traversal_casesense(self):
_CASESENSE_DATA = {
'KeY': 'value0',
0: {
'KeY': 'value1',
0: {'KeY': 'value2'},
},
}
assert traverse_obj(_CASESENSE_DATA, 'key') is None, \
'dict keys should be case sensitive unless `casesense`'
assert traverse_obj(_CASESENSE_DATA, 'keY', casesense=False) == 'value0', \
'allow non matching key case if `casesense`'
assert traverse_obj(_CASESENSE_DATA, [0, ('keY',)], casesense=False) == ['value1'], \
'allow non matching key case in branch if `casesense`'
assert traverse_obj(_CASESENSE_DATA, [0, ([0, 'keY'],)], casesense=False) == ['value2'], \
'allow non matching key case in branch path if `casesense`'
def test_traversal_traverse_string(self):
_TRAVERSE_STRING_DATA = {'str': 'str', 1.2: 1.2}
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0)) is None, \
'do not traverse into string if not `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0), traverse_string=True) == 's', \
'traverse into string if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, (1.2, 1), traverse_string=True) == '.', \
'traverse into converted data if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', ...), traverse_string=True) == 'str', \
'`...` should result in string (same value) if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', slice(0, None, 2)), traverse_string=True) == 'sr', \
'`slice` should result in string if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda i, v: i or v == 's'), traverse_string=True) == 'str', \
'function should result in string if `traverse_string`'
assert traverse_obj(_TRAVERSE_STRING_DATA, ('str', (0, 2)), traverse_string=True) == ['s', 'r'], \
'branching should result in list if `traverse_string`'
assert traverse_obj({}, (0, ...), traverse_string=True) == [], \
'branching should result in list if `traverse_string`'
assert traverse_obj({}, (0, lambda x, y: True), traverse_string=True) == [], \
'branching should result in list if `traverse_string`'
assert traverse_obj({}, (0, slice(1)), traverse_string=True) == [], \
'branching should result in list if `traverse_string`'
def test_traversal_re(self):
mobj = re.fullmatch(r'0(12)(?P<group>3)(4)?', '0123')
assert traverse_obj(mobj, ...) == [x for x in mobj.groups() if x is not None], \
'`...` on a `re.Match` should give its `groups()`'
assert traverse_obj(mobj, lambda k, _: k in (0, 2)) == ['0123', '3'], \
'function on a `re.Match` should give groupno, value starting at 0'
assert traverse_obj(mobj, 'group') == '3', \
'str key on a `re.Match` should give group with that name'
assert traverse_obj(mobj, 2) == '3', \
'int key on a `re.Match` should give group with that name'
assert traverse_obj(mobj, 'gRoUp', casesense=False) == '3', \
'str key on a `re.Match` should respect casesense'
assert traverse_obj(mobj, 'fail') is None, \
'failing str key on a `re.Match` should return `default`'
assert traverse_obj(mobj, 'gRoUpS', casesense=False) is None, \
'failing str key on a `re.Match` should return `default`'
assert traverse_obj(mobj, 8) is None, \
'failing int key on a `re.Match` should return `default`'
assert traverse_obj(mobj, lambda k, _: k in (0, 'group')) == ['0123', '3'], \
'function on a `re.Match` should give group name as well'
def test_traversal_xml_etree(self):
etree = xml.etree.ElementTree.fromstring('''<?xml version="1.0"?>
<data>
<country name="Liechtenstein">
<rank>1</rank>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E"/>
<neighbor name="Switzerland" direction="W"/>
</country>
<country name="Singapore">
<rank>4</rank>
<year>2011</year>
<gdppc>59900</gdppc>
<neighbor name="Malaysia" direction="N"/>
</country>
<country name="Panama">
<rank>68</rank>
<year>2011</year>
<gdppc>13600</gdppc>
<neighbor name="Costa Rica" direction="W"/>
<neighbor name="Colombia" direction="E"/>
</country>
</data>''')
assert traverse_obj(etree, '') == etree, \
'empty str key should return the element itself'
assert traverse_obj(etree, 'country') == list(etree), \
'str key should lead all children with that tag name'
assert traverse_obj(etree, ...) == list(etree), \
'`...` as key should return all children'
assert traverse_obj(etree, lambda _, x: x[0].text == '4') == [etree[1]], \
'function as key should get element as value'
assert traverse_obj(etree, lambda i, _: i == 1) == [etree[1]], \
'function as key should get index as key'
assert traverse_obj(etree, 0) == etree[0], \
'int key should return the nth child'
expected = ['Austria', 'Switzerland', 'Malaysia', 'Costa Rica', 'Colombia']
assert traverse_obj(etree, './/neighbor/@name') == expected, \
'`@<attribute>` at end of path should give that attribute'
assert traverse_obj(etree, '//neighbor/@fail') == [None, None, None, None, None], \
'`@<nonexistant>` at end of path should give `None`'
assert traverse_obj(etree, ('//neighbor/@', 2)) == {'name': 'Malaysia', 'direction': 'N'}, \
'`@` should give the full attribute dict'
assert traverse_obj(etree, '//year/text()') == ['2008', '2011', '2011'], \
'`text()` at end of path should give the inner text'
assert traverse_obj(etree, '//*[@direction]/@direction') == ['E', 'W', 'N', 'W', 'E'], \
'full Python xpath features should be supported'
assert traverse_obj(etree, (0, '@name')) == 'Liechtenstein', \
'special transformations should act on current element'
assert traverse_obj(etree, ('country', 0, ..., 'text()', {int_or_none})) == [1, 2008, 141100], \
'special transformations should act on current element'
def test_traversal_unbranching(self):
assert traverse_obj(_TEST_DATA, [(100, 1.2), all]) == [100, 1.2], \
'`all` should give all results as list'
assert traverse_obj(_TEST_DATA, [(100, 1.2), any]) == 100, \
'`any` should give the first result'
assert traverse_obj(_TEST_DATA, [100, all]) == [100], \
'`all` should give list if non branching'
assert traverse_obj(_TEST_DATA, [100, any]) == 100, \
'`any` should give single item if non branching'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100), all]) == [100], \
'`all` should filter `None` and empty dict'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100), any]) == 100, \
'`any` should filter `None` and empty dict'
assert traverse_obj(_TEST_DATA, [{
'all': [('dict', 'None', 100, 1.2), all],
'any': [('dict', 'None', 100, 1.2), any],
}]) == {'all': [100, 1.2], 'any': 100}, \
'`all`/`any` should apply to each dict path separately'
assert traverse_obj(_TEST_DATA, [{
'all': [('dict', 'None', 100, 1.2), all],
'any': [('dict', 'None', 100, 1.2), any],
}], get_all=False) == {'all': [100, 1.2], 'any': 100}, \
'`all`/`any` should apply to dict regardless of `get_all`'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, {float}]) is None, \
'`all` should reset branching status'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), any, {float}]) is None, \
'`any` should reset branching status'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 100, 1.2), all, ..., {float}]) == [1.2], \
'`all` should allow further branching'
assert traverse_obj(_TEST_DATA, [('dict', 'None', 'urls', 'data'), any, ..., 'index']) == [0, 1], \
'`any` should allow further branching'
def test_traversal_morsel(self):
values = {
'expires': 'a',
'path': 'b',
'comment': 'c',
'domain': 'd',
'max-age': 'e',
'secure': 'f',
'httponly': 'g',
'version': 'h',
'samesite': 'i',
}
morsel = http.cookies.Morsel()
morsel.set('item_key', 'item_value', 'coded_value')
morsel.update(values)
values['key'] = 'item_key'
values['value'] = 'item_value'
for key, value in values.items():
assert traverse_obj(morsel, key) == value, \
'Morsel should provide access to all values'
assert traverse_obj(morsel, ...) == list(values.values()), \
'`...` should yield all values'
assert traverse_obj(morsel, lambda k, v: True) == list(values.values()), \
'function key should yield all values'
assert traverse_obj(morsel, [(None,), any]) == morsel, \
'Morsel should not be implicitly changed to dict on usage'
class TestDictGet:
def test_dict_get(self):
FALSE_VALUES = {
'none': None,
'false': False,
'zero': 0,
'empty_string': '',
'empty_list': [],
}
d = {**FALSE_VALUES, 'a': 42}
assert dict_get(d, 'a') == 42
assert dict_get(d, 'b') is None
assert dict_get(d, 'b', 42) == 42
assert dict_get(d, ('a',)) == 42
assert dict_get(d, ('b', 'a')) == 42
assert dict_get(d, ('b', 'c', 'a', 'd')) == 42
assert dict_get(d, ('b', 'c')) is None
assert dict_get(d, ('b', 'c'), 42) == 42
for key, false_value in FALSE_VALUES.items():
assert dict_get(d, ('b', 'c', key)) is None
assert dict_get(d, ('b', 'c', key), skip_false_values=False) == false_value

@ -1,228 +0,0 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from test.helper import FakeYDL, report_warning
from yt_dlp.update import UpdateInfo, Updater
# XXX: Keep in sync with yt_dlp.update.UPDATE_SOURCES
TEST_UPDATE_SOURCES = {
'stable': 'yt-dlp/yt-dlp',
'nightly': 'yt-dlp/yt-dlp-nightly-builds',
'master': 'yt-dlp/yt-dlp-master-builds',
}
TEST_API_DATA = {
'yt-dlp/yt-dlp/latest': {
'tag_name': '2023.12.31',
'target_commitish': 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb',
'name': 'yt-dlp 2023.12.31',
'body': 'BODY',
},
'yt-dlp/yt-dlp-nightly-builds/latest': {
'tag_name': '2023.12.31.123456',
'target_commitish': 'master',
'name': 'yt-dlp nightly 2023.12.31.123456',
'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/cccccccccccccccccccccccccccccccccccccccc',
},
'yt-dlp/yt-dlp-master-builds/latest': {
'tag_name': '2023.12.31.987654',
'target_commitish': 'master',
'name': 'yt-dlp master 2023.12.31.987654',
'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/dddddddddddddddddddddddddddddddddddddddd',
},
'yt-dlp/yt-dlp/tags/testing': {
'tag_name': 'testing',
'target_commitish': '9999999999999999999999999999999999999999',
'name': 'testing',
'body': 'BODY',
},
'fork/yt-dlp/latest': {
'tag_name': '2050.12.31',
'target_commitish': 'eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee',
'name': '2050.12.31',
'body': 'BODY',
},
'fork/yt-dlp/tags/pr0000': {
'tag_name': 'pr0000',
'target_commitish': 'ffffffffffffffffffffffffffffffffffffffff',
'name': 'pr1234 2023.11.11.000000',
'body': 'BODY',
},
'fork/yt-dlp/tags/pr1234': {
'tag_name': 'pr1234',
'target_commitish': '0000000000000000000000000000000000000000',
'name': 'pr1234 2023.12.31.555555',
'body': 'BODY',
},
'fork/yt-dlp/tags/pr9999': {
'tag_name': 'pr9999',
'target_commitish': '1111111111111111111111111111111111111111',
'name': 'pr9999',
'body': 'BODY',
},
'fork/yt-dlp-satellite/tags/pr987': {
'tag_name': 'pr987',
'target_commitish': 'master',
'name': 'pr987',
'body': 'Generated from: https://github.com/yt-dlp/yt-dlp/commit/2222222222222222222222222222222222222222',
},
}
TEST_LOCKFILE_COMMENT = '# This file is used for regulating self-update'
TEST_LOCKFILE_V1 = rf'''{TEST_LOCKFILE_COMMENT}
lock 2022.08.18.36 .+ Python 3\.6
lock 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lock 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
'''
TEST_LOCKFILE_V2_TMPL = r'''%s
lockV2 yt-dlp/yt-dlp 2022.08.18.36 .+ Python 3\.6
lockV2 yt-dlp/yt-dlp 2023.11.16 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp 2023.11.16 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-nightly-builds 2023.11.15.232826 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 (?!win_x86_exe).+ Python 3\.7
lockV2 yt-dlp/yt-dlp-master-builds 2023.11.15.232812 win_x86_exe .+ Windows-(?:Vista|2008Server)
'''
TEST_LOCKFILE_V2 = TEST_LOCKFILE_V2_TMPL % TEST_LOCKFILE_COMMENT
TEST_LOCKFILE_ACTUAL = TEST_LOCKFILE_V2_TMPL % TEST_LOCKFILE_V1.rstrip('\n')
TEST_LOCKFILE_FORK = rf'''{TEST_LOCKFILE_ACTUAL}# Test if a fork blocks updates to non-numeric tags
lockV2 fork/yt-dlp pr0000 .+ Python 3.6
lockV2 fork/yt-dlp pr1234 (?!win_x86_exe).+ Python 3\.7
lockV2 fork/yt-dlp pr1234 win_x86_exe .+ Windows-(?:Vista|2008Server)
lockV2 fork/yt-dlp pr9999 .+ Python 3.11
'''
class FakeUpdater(Updater):
current_version = '2022.01.01'
current_commit = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
_channel = 'stable'
_origin = 'yt-dlp/yt-dlp'
_update_sources = TEST_UPDATE_SOURCES
def _download_update_spec(self, *args, **kwargs):
return TEST_LOCKFILE_ACTUAL
def _call_api(self, tag):
tag = f'tags/{tag}' if tag != 'latest' else tag
return TEST_API_DATA[f'{self.requested_repo}/{tag}']
def _report_error(self, msg, *args, **kwargs):
report_warning(msg)
class TestUpdate(unittest.TestCase):
maxDiff = None
def test_update_spec(self):
ydl = FakeYDL()
updater = FakeUpdater(ydl, 'stable')
def test(lockfile, identifier, input_tag, expect_tag, exact=False, repo='yt-dlp/yt-dlp'):
updater._identifier = identifier
updater._exact = exact
updater.requested_repo = repo
result = updater._process_update_spec(lockfile, input_tag)
self.assertEqual(
result, expect_tag,
f'{identifier!r} requesting {repo}@{input_tag} (exact={exact}) '
f'returned {result!r} instead of {expect_tag!r}')
for lockfile in (TEST_LOCKFILE_V1, TEST_LOCKFILE_V2, TEST_LOCKFILE_ACTUAL, TEST_LOCKFILE_FORK):
# Normal operation
test(lockfile, 'zip Python 3.12.0', '2023.12.31', '2023.12.31')
test(lockfile, 'zip stable Python 3.12.0', '2023.12.31', '2023.12.31', exact=True)
# Python 3.6 --update should update only to its lock
test(lockfile, 'zip Python 3.6.0', '2023.11.16', '2022.08.18.36')
# --update-to an exact version later than the lock should return None
test(lockfile, 'zip stable Python 3.6.0', '2023.11.16', None, exact=True)
# Python 3.7 should be able to update to its lock
test(lockfile, 'zip Python 3.7.0', '2023.11.16', '2023.11.16')
test(lockfile, 'zip stable Python 3.7.1', '2023.11.16', '2023.11.16', exact=True)
# Non-win_x86_exe builds on py3.7 must be locked
test(lockfile, 'zip Python 3.7.1', '2023.12.31', '2023.11.16')
test(lockfile, 'zip stable Python 3.7.1', '2023.12.31', None, exact=True)
test( # Windows Vista w/ win_x86_exe must be locked
lockfile, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-Vista-6.0.6003-SP2',
'2023.12.31', '2023.11.16')
test( # Windows 2008Server w/ win_x86_exe must be locked
lockfile, 'win_x86_exe Python 3.7.9 (CPython x86 32bit) - Windows-2008Server',
'2023.12.31', None, exact=True)
test( # Windows 7 w/ win_x86_exe py3.7 build should be able to update beyond lock
lockfile, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-7-6.1.7601-SP1',
'2023.12.31', '2023.12.31')
test( # Windows 8.1 w/ '2008Server' in platform string should be able to update beyond lock
lockfile, 'win_x86_exe Python 3.7.9 (CPython x86 32bit) - Windows-post2008Server-6.2.9200',
'2023.12.31', '2023.12.31', exact=True)
# Forks can block updates to non-numeric tags rather than lock
test(TEST_LOCKFILE_FORK, 'zip Python 3.6.3', 'pr0000', None, repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip stable Python 3.7.4', 'pr0000', 'pr0000', repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip stable Python 3.7.4', 'pr1234', None, repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip Python 3.8.1', 'pr1234', 'pr1234', repo='fork/yt-dlp', exact=True)
test(
TEST_LOCKFILE_FORK, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-Vista-6.0.6003-SP2',
'pr1234', None, repo='fork/yt-dlp')
test(
TEST_LOCKFILE_FORK, 'win_x86_exe stable Python 3.7.9 (CPython x86 32bit) - Windows-7-6.1.7601-SP1',
'2023.12.31', '2023.12.31', repo='fork/yt-dlp')
test(TEST_LOCKFILE_FORK, 'zip Python 3.11.2', 'pr9999', None, repo='fork/yt-dlp', exact=True)
test(TEST_LOCKFILE_FORK, 'zip stable Python 3.12.0', 'pr9999', 'pr9999', repo='fork/yt-dlp')
def test_query_update(self):
ydl = FakeYDL()
def test(target, expected, current_version=None, current_commit=None, identifier=None):
updater = FakeUpdater(ydl, target)
if current_version:
updater.current_version = current_version
if current_commit:
updater.current_commit = current_commit
updater._identifier = identifier or 'zip'
update_info = updater.query_update(_output=True)
self.assertDictEqual(
update_info.__dict__ if update_info else {}, expected.__dict__ if expected else {})
test('yt-dlp/yt-dlp@latest', UpdateInfo(
'2023.12.31', version='2023.12.31', requested_version='2023.12.31', commit='b' * 40))
test('yt-dlp/yt-dlp-nightly-builds@latest', UpdateInfo(
'2023.12.31.123456', version='2023.12.31.123456', requested_version='2023.12.31.123456', commit='c' * 40))
test('yt-dlp/yt-dlp-master-builds@latest', UpdateInfo(
'2023.12.31.987654', version='2023.12.31.987654', requested_version='2023.12.31.987654', commit='d' * 40))
test('fork/yt-dlp@latest', UpdateInfo(
'2050.12.31', version='2050.12.31', requested_version='2050.12.31', commit='e' * 40))
test('fork/yt-dlp@pr0000', UpdateInfo(
'pr0000', version='2023.11.11.000000', requested_version='2023.11.11.000000', commit='f' * 40))
test('fork/yt-dlp@pr1234', UpdateInfo(
'pr1234', version='2023.12.31.555555', requested_version='2023.12.31.555555', commit='0' * 40))
test('fork/yt-dlp@pr9999', UpdateInfo(
'pr9999', version=None, requested_version=None, commit='1' * 40))
test('fork/yt-dlp-satellite@pr987', UpdateInfo(
'pr987', version=None, requested_version=None, commit='2' * 40))
test('yt-dlp/yt-dlp', None, current_version='2024.01.01')
test('stable', UpdateInfo(
'2023.12.31', version='2023.12.31', requested_version='2023.12.31', commit='b' * 40))
test('nightly', UpdateInfo(
'2023.12.31.123456', version='2023.12.31.123456', requested_version='2023.12.31.123456', commit='c' * 40))
test('master', UpdateInfo(
'2023.12.31.987654', version='2023.12.31.987654', requested_version='2023.12.31.987654', commit='d' * 40))
test('testing', None, current_commit='9' * 40)
test('testing', UpdateInfo('testing', commit='9' * 40))
if __name__ == '__main__':
unittest.main()

@ -0,0 +1,30 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import json
from yt_dlp.update import rsa_verify
class TestUpdate(unittest.TestCase):
def test_rsa_verify(self):
UPDATES_RSA_KEY = (0x9d60ee4d8f805312fdb15a62f87b95bd66177b91df176765d13514a0f1754bcd2057295c5b6f1d35daa6742c3ffc9a82d3e118861c207995a8031e151d863c9927e304576bc80692bc8e094896fcf11b66f3e29e04e3a71e9a11558558acea1840aec37fc396fb6b65dc81a1c4144e03bd1c011de62e3f1357b327d08426fe93, 65537)
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), 'versions.json'), 'rb') as f:
versions_info = f.read().decode()
versions_info = json.loads(versions_info)
signature = versions_info['signature']
del versions_info['signature']
self.assertTrue(rsa_verify(
json.dumps(versions_info, sort_keys=True).encode(),
signature, UPDATES_RSA_KEY))
if __name__ == '__main__':
unittest.main()

@ -2,10 +2,10 @@
# Allow direct execution # Allow direct execution
import os import os
import re
import sys import sys
import unittest import unittest
import warnings import warnings
import datetime as dt
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@ -14,7 +14,6 @@ import contextlib
import io import io
import itertools import itertools
import json import json
import subprocess
import xml.etree.ElementTree import xml.etree.ElementTree
from yt_dlp.compat import ( from yt_dlp.compat import (
@ -28,9 +27,7 @@ from yt_dlp.utils import (
ExtractorError, ExtractorError,
InAdvancePagedList, InAdvancePagedList,
LazyList, LazyList,
NO_DEFAULT,
OnDemandPagedList, OnDemandPagedList,
Popen,
age_restricted, age_restricted,
args_to_str, args_to_str,
base_url, base_url,
@ -46,12 +43,14 @@ from yt_dlp.utils import (
determine_ext, determine_ext,
determine_file_encoding, determine_file_encoding,
dfxp2srt, dfxp2srt,
dict_get,
encode_base_n, encode_base_n,
encode_compat_str, encode_compat_str,
encodeFilename, encodeFilename,
escape_rfc3986,
escape_url,
expand_path, expand_path,
extract_attributes, extract_attributes,
extract_basic_auth,
find_xpath_attr, find_xpath_attr,
fix_xml_ampersands, fix_xml_ampersands,
float_or_none, float_or_none,
@ -104,13 +103,16 @@ from yt_dlp.utils import (
sanitize_filename, sanitize_filename,
sanitize_path, sanitize_path,
sanitize_url, sanitize_url,
sanitized_Request,
shell_quote, shell_quote,
smuggle_url, smuggle_url,
str_or_none,
str_to_int, str_to_int,
strip_jsonp, strip_jsonp,
strip_or_none, strip_or_none,
subtitles_filename, subtitles_filename,
timeconvert, timeconvert,
traverse_obj,
try_call, try_call,
unescapeHTML, unescapeHTML,
unified_strdate, unified_strdate,
@ -130,13 +132,6 @@ from yt_dlp.utils import (
xpath_text, xpath_text,
xpath_with_ns, xpath_with_ns,
) )
from yt_dlp.utils._utils import _UnsafeExtensionError
from yt_dlp.utils.networking import (
HTTPHeaderDict,
escape_rfc3986,
normalize_url,
remove_dot_segments,
)
class TestUtil(unittest.TestCase): class TestUtil(unittest.TestCase):
@ -263,6 +258,15 @@ class TestUtil(unittest.TestCase):
self.assertEqual(sanitize_url('https://foo.bar'), 'https://foo.bar') self.assertEqual(sanitize_url('https://foo.bar'), 'https://foo.bar')
self.assertEqual(sanitize_url('foo bar'), 'foo bar') self.assertEqual(sanitize_url('foo bar'), 'foo bar')
def test_extract_basic_auth(self):
auth_header = lambda url: sanitized_Request(url).get_header('Authorization')
self.assertFalse(auth_header('http://foo.bar'))
self.assertFalse(auth_header('http://:foo.bar'))
self.assertEqual(auth_header('http://@foo.bar'), 'Basic Og==')
self.assertEqual(auth_header('http://:pass@foo.bar'), 'Basic OnBhc3M=')
self.assertEqual(auth_header('http://user:@foo.bar'), 'Basic dXNlcjo=')
self.assertEqual(auth_header('http://user:pass@foo.bar'), 'Basic dXNlcjpwYXNz')
def test_expand_path(self): def test_expand_path(self):
def env(var): def env(var):
return f'%{var}%' if sys.platform == 'win32' else f'${var}' return f'%{var}%' if sys.platform == 'win32' else f'${var}'
@ -277,18 +281,11 @@ class TestUtil(unittest.TestCase):
self.assertEqual(expand_path(env('HOME')), os.getenv('HOME')) self.assertEqual(expand_path(env('HOME')), os.getenv('HOME'))
self.assertEqual(expand_path('~'), os.getenv('HOME')) self.assertEqual(expand_path('~'), os.getenv('HOME'))
self.assertEqual( self.assertEqual(
expand_path('~/{}'.format(env('yt_dlp_EXPATH_PATH'))), expand_path('~/%s' % env('yt_dlp_EXPATH_PATH')),
'{}/expanded'.format(os.getenv('HOME'))) '%s/expanded' % os.getenv('HOME'))
finally: finally:
os.environ['HOME'] = old_home or '' os.environ['HOME'] = old_home or ''
_uncommon_extensions = [
('exe', 'abc.exe.ext'),
('de', 'abc.de.ext'),
('../.mp4', None),
('..\\.mp4', None),
]
def test_prepend_extension(self): def test_prepend_extension(self):
self.assertEqual(prepend_extension('abc.ext', 'temp'), 'abc.temp.ext') self.assertEqual(prepend_extension('abc.ext', 'temp'), 'abc.temp.ext')
self.assertEqual(prepend_extension('abc.ext', 'temp', 'ext'), 'abc.temp.ext') self.assertEqual(prepend_extension('abc.ext', 'temp', 'ext'), 'abc.temp.ext')
@ -297,19 +294,6 @@ class TestUtil(unittest.TestCase):
self.assertEqual(prepend_extension('.abc', 'temp'), '.abc.temp') self.assertEqual(prepend_extension('.abc', 'temp'), '.abc.temp')
self.assertEqual(prepend_extension('.abc.ext', 'temp'), '.abc.temp.ext') self.assertEqual(prepend_extension('.abc.ext', 'temp'), '.abc.temp.ext')
# Test uncommon extensions
self.assertEqual(prepend_extension('abc.ext', 'bin'), 'abc.bin.ext')
for ext, result in self._uncommon_extensions:
with self.assertRaises(_UnsafeExtensionError):
prepend_extension('abc', ext)
if result:
self.assertEqual(prepend_extension('abc.ext', ext, 'ext'), result)
else:
with self.assertRaises(_UnsafeExtensionError):
prepend_extension('abc.ext', ext, 'ext')
with self.assertRaises(_UnsafeExtensionError):
prepend_extension('abc.unexpected_ext', ext, 'ext')
def test_replace_extension(self): def test_replace_extension(self):
self.assertEqual(replace_extension('abc.ext', 'temp'), 'abc.temp') self.assertEqual(replace_extension('abc.ext', 'temp'), 'abc.temp')
self.assertEqual(replace_extension('abc.ext', 'temp', 'ext'), 'abc.temp') self.assertEqual(replace_extension('abc.ext', 'temp', 'ext'), 'abc.temp')
@ -318,16 +302,6 @@ class TestUtil(unittest.TestCase):
self.assertEqual(replace_extension('.abc', 'temp'), '.abc.temp') self.assertEqual(replace_extension('.abc', 'temp'), '.abc.temp')
self.assertEqual(replace_extension('.abc.ext', 'temp'), '.abc.temp') self.assertEqual(replace_extension('.abc.ext', 'temp'), '.abc.temp')
# Test uncommon extensions
self.assertEqual(replace_extension('abc.ext', 'bin'), 'abc.unknown_video')
for ext, _ in self._uncommon_extensions:
with self.assertRaises(_UnsafeExtensionError):
replace_extension('abc', ext)
with self.assertRaises(_UnsafeExtensionError):
replace_extension('abc.ext', ext, 'ext')
with self.assertRaises(_UnsafeExtensionError):
replace_extension('abc.unexpected_ext', ext, 'ext')
def test_subtitles_filename(self): def test_subtitles_filename(self):
self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt'), 'abc.en.vtt') self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt'), 'abc.en.vtt')
self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt', 'ext'), 'abc.en.vtt') self.assertEqual(subtitles_filename('abc.ext', 'en', 'vtt', 'ext'), 'abc.en.vtt')
@ -387,12 +361,12 @@ class TestUtil(unittest.TestCase):
self.assertEqual(datetime_from_str('now+23hours', precision='hour'), datetime_from_str('now+23hours', precision='auto')) self.assertEqual(datetime_from_str('now+23hours', precision='hour'), datetime_from_str('now+23hours', precision='auto'))
def test_daterange(self): def test_daterange(self):
_20century = DateRange('19000101', '20000101') _20century = DateRange("19000101", "20000101")
self.assertFalse('17890714' in _20century) self.assertFalse("17890714" in _20century)
_ac = DateRange('00010101') _ac = DateRange("00010101")
self.assertTrue('19690721' in _ac) self.assertTrue("19690721" in _ac)
_firstmilenium = DateRange(end='10000101') _firstmilenium = DateRange(end="10000101")
self.assertTrue('07110427' in _firstmilenium) self.assertTrue("07110427" in _firstmilenium)
def test_unified_dates(self): def test_unified_dates(self):
self.assertEqual(unified_strdate('December 21, 2010'), '20101221') self.assertEqual(unified_strdate('December 21, 2010'), '20101221')
@ -537,7 +511,7 @@ class TestUtil(unittest.TestCase):
self.assertRaises(ExtractorError, xpath_attr, doc, 'div/p', 'y', fatal=True) self.assertRaises(ExtractorError, xpath_attr, doc, 'div/p', 'y', fatal=True)
def test_smuggle_url(self): def test_smuggle_url(self):
data = {'ö': 'ö', 'abc': [3]} data = {"ö": "ö", "abc": [3]}
url = 'https://foo.bar/baz?x=y#a' url = 'https://foo.bar/baz?x=y#a'
smug_url = smuggle_url(url, data) smug_url = smuggle_url(url, data)
unsmug_url, unsmug_data = unsmuggle_url(smug_url) unsmug_url, unsmug_data = unsmuggle_url(smug_url)
@ -689,8 +663,6 @@ class TestUtil(unittest.TestCase):
self.assertEqual(parse_duration('P0Y0M0DT0H4M20.880S'), 260.88) self.assertEqual(parse_duration('P0Y0M0DT0H4M20.880S'), 260.88)
self.assertEqual(parse_duration('01:02:03:050'), 3723.05) self.assertEqual(parse_duration('01:02:03:050'), 3723.05)
self.assertEqual(parse_duration('103:050'), 103.05) self.assertEqual(parse_duration('103:050'), 103.05)
self.assertEqual(parse_duration('1HR 3MIN'), 3780)
self.assertEqual(parse_duration('2hrs 3mins'), 7380)
def test_fix_xml_ampersands(self): def test_fix_xml_ampersands(self):
self.assertEqual( self.assertEqual(
@ -784,6 +756,28 @@ class TestUtil(unittest.TestCase):
self.assertRaises( self.assertRaises(
ValueError, multipart_encode, {b'field': b'value'}, boundary='value') ValueError, multipart_encode, {b'field': b'value'}, boundary='value')
def test_dict_get(self):
FALSE_VALUES = {
'none': None,
'false': False,
'zero': 0,
'empty_string': '',
'empty_list': [],
}
d = FALSE_VALUES.copy()
d['a'] = 42
self.assertEqual(dict_get(d, 'a'), 42)
self.assertEqual(dict_get(d, 'b'), None)
self.assertEqual(dict_get(d, 'b', 42), 42)
self.assertEqual(dict_get(d, ('a', )), 42)
self.assertEqual(dict_get(d, ('b', 'a', )), 42)
self.assertEqual(dict_get(d, ('b', 'c', 'a', 'd', )), 42)
self.assertEqual(dict_get(d, ('b', 'c', )), None)
self.assertEqual(dict_get(d, ('b', 'c', ), 42), 42)
for key, false_value in FALSE_VALUES.items():
self.assertEqual(dict_get(d, ('b', 'c', key, )), None)
self.assertEqual(dict_get(d, ('b', 'c', key, ), skip_false_values=False), false_value)
def test_merge_dicts(self): def test_merge_dicts(self):
self.assertEqual(merge_dicts({'a': 1}, {'b': 2}), {'a': 1, 'b': 2}) self.assertEqual(merge_dicts({'a': 1}, {'b': 2}), {'a': 1, 'b': 2})
self.assertEqual(merge_dicts({'a': 1}, {'a': 2}), {'a': 1}) self.assertEqual(merge_dicts({'a': 1}, {'a': 2}), {'a': 1})
@ -801,11 +795,6 @@ class TestUtil(unittest.TestCase):
def test_parse_iso8601(self): def test_parse_iso8601(self):
self.assertEqual(parse_iso8601('2014-03-23T23:04:26+0100'), 1395612266) self.assertEqual(parse_iso8601('2014-03-23T23:04:26+0100'), 1395612266)
self.assertEqual(parse_iso8601('2014-03-23T23:04:26-07:00'), 1395641066)
self.assertEqual(parse_iso8601('2014-03-23T23:04:26', timezone=dt.timedelta(hours=-7)), 1395641066)
self.assertEqual(parse_iso8601('2014-03-23T23:04:26', timezone=NO_DEFAULT), None)
# default does not override timezone in date_str
self.assertEqual(parse_iso8601('2014-03-23T23:04:26-07:00', timezone=dt.timedelta(hours=-10)), 1395641066)
self.assertEqual(parse_iso8601('2014-03-23T22:04:26+0000'), 1395612266) self.assertEqual(parse_iso8601('2014-03-23T22:04:26+0000'), 1395612266)
self.assertEqual(parse_iso8601('2014-03-23T22:04:26Z'), 1395612266) self.assertEqual(parse_iso8601('2014-03-23T22:04:26Z'), 1395612266)
self.assertEqual(parse_iso8601('2014-03-23T22:04:26.1234Z'), 1395612266) self.assertEqual(parse_iso8601('2014-03-23T22:04:26.1234Z'), 1395612266)
@ -815,7 +804,7 @@ class TestUtil(unittest.TestCase):
def test_strip_jsonp(self): def test_strip_jsonp(self):
stripped = strip_jsonp('cb ([ {"id":"532cb",\n\n\n"x":\n3}\n]\n);') stripped = strip_jsonp('cb ([ {"id":"532cb",\n\n\n"x":\n3}\n]\n);')
d = json.loads(stripped) d = json.loads(stripped)
self.assertEqual(d, [{'id': '532cb', 'x': 3}]) self.assertEqual(d, [{"id": "532cb", "x": 3}])
stripped = strip_jsonp('parseMetadata({"STATUS":"OK"})\n\n\n//epc') stripped = strip_jsonp('parseMetadata({"STATUS":"OK"})\n\n\n//epc')
d = json.loads(stripped) d = json.loads(stripped)
@ -950,45 +939,24 @@ class TestUtil(unittest.TestCase):
self.assertEqual(escape_rfc3986('foo bar'), 'foo%20bar') self.assertEqual(escape_rfc3986('foo bar'), 'foo%20bar')
self.assertEqual(escape_rfc3986('foo%20bar'), 'foo%20bar') self.assertEqual(escape_rfc3986('foo%20bar'), 'foo%20bar')
def test_normalize_url(self): def test_escape_url(self):
self.assertEqual( self.assertEqual(
normalize_url('http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavré_FD.mp4'), escape_url('http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavré_FD.mp4'),
'http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavre%CC%81_FD.mp4', 'http://wowza.imust.org/srv/vod/telemb/new/UPLOAD/UPLOAD/20224_IncendieHavre%CC%81_FD.mp4'
) )
self.assertEqual( self.assertEqual(
normalize_url('http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erklärt/Das-Erste/Video?documentId=22673108&bcastId=5290'), escape_url('http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erklärt/Das-Erste/Video?documentId=22673108&bcastId=5290'),
'http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erkl%C3%A4rt/Das-Erste/Video?documentId=22673108&bcastId=5290', 'http://www.ardmediathek.de/tv/Sturm-der-Liebe/Folge-2036-Zu-Mann-und-Frau-erkl%C3%A4rt/Das-Erste/Video?documentId=22673108&bcastId=5290'
) )
self.assertEqual( self.assertEqual(
normalize_url('http://тест.рф/фрагмент'), escape_url('http://тест.рф/фрагмент'),
'http://xn--e1aybc.xn--p1ai/%D1%84%D1%80%D0%B0%D0%B3%D0%BC%D0%B5%D0%BD%D1%82', 'http://xn--e1aybc.xn--p1ai/%D1%84%D1%80%D0%B0%D0%B3%D0%BC%D0%B5%D0%BD%D1%82'
) )
self.assertEqual( self.assertEqual(
normalize_url('http://тест.рф/абв?абв=абв#абв'), escape_url('http://тест.рф/абв?абв=абв#абв'),
'http://xn--e1aybc.xn--p1ai/%D0%B0%D0%B1%D0%B2?%D0%B0%D0%B1%D0%B2=%D0%B0%D0%B1%D0%B2#%D0%B0%D0%B1%D0%B2', 'http://xn--e1aybc.xn--p1ai/%D0%B0%D0%B1%D0%B2?%D0%B0%D0%B1%D0%B2=%D0%B0%D0%B1%D0%B2#%D0%B0%D0%B1%D0%B2'
) )
self.assertEqual(normalize_url('http://vimeo.com/56015672#at=0'), 'http://vimeo.com/56015672#at=0') self.assertEqual(escape_url('http://vimeo.com/56015672#at=0'), 'http://vimeo.com/56015672#at=0')
self.assertEqual(normalize_url('http://www.example.com/../a/b/../c/./d.html'), 'http://www.example.com/a/c/d.html')
def test_remove_dot_segments(self):
self.assertEqual(remove_dot_segments('/a/b/c/./../../g'), '/a/g')
self.assertEqual(remove_dot_segments('mid/content=5/../6'), 'mid/6')
self.assertEqual(remove_dot_segments('/ad/../cd'), '/cd')
self.assertEqual(remove_dot_segments('/ad/../cd/'), '/cd/')
self.assertEqual(remove_dot_segments('/..'), '/')
self.assertEqual(remove_dot_segments('/./'), '/')
self.assertEqual(remove_dot_segments('/./a'), '/a')
self.assertEqual(remove_dot_segments('/abc/./.././d/././e/.././f/./../../ghi'), '/ghi')
self.assertEqual(remove_dot_segments('/'), '/')
self.assertEqual(remove_dot_segments('/t'), '/t')
self.assertEqual(remove_dot_segments('t'), 't')
self.assertEqual(remove_dot_segments(''), '')
self.assertEqual(remove_dot_segments('/../a/b/c'), '/a/b/c')
self.assertEqual(remove_dot_segments('../a'), 'a')
self.assertEqual(remove_dot_segments('./a'), 'a')
self.assertEqual(remove_dot_segments('.'), '')
self.assertEqual(remove_dot_segments('////'), '////')
def test_js_to_json_vars_strings(self): def test_js_to_json_vars_strings(self):
self.assertDictEqual( self.assertDictEqual(
@ -1010,7 +978,7 @@ class TestUtil(unittest.TestCase):
'e': 'false', 'e': 'false',
'f': '"false"', 'f': '"false"',
'g': 'var', 'g': 'var',
}, }
)), )),
{ {
'null': None, 'null': None,
@ -1019,8 +987,8 @@ class TestUtil(unittest.TestCase):
'trueStr': 'true', 'trueStr': 'true',
'false': False, 'false': False,
'falseStr': 'false', 'falseStr': 'false',
'unresolvedVar': 'var', 'unresolvedVar': 'var'
}, }
) )
self.assertDictEqual( self.assertDictEqual(
@ -1036,14 +1004,14 @@ class TestUtil(unittest.TestCase):
'b': '"123"', 'b': '"123"',
'c': '1.23', 'c': '1.23',
'd': '"1.23"', 'd': '"1.23"',
}, }
)), )),
{ {
'int': 123, 'int': 123,
'intStr': '123', 'intStr': '123',
'float': 1.23, 'float': 1.23,
'floatStr': '1.23', 'floatStr': '1.23',
}, }
) )
self.assertDictEqual( self.assertDictEqual(
@ -1059,14 +1027,14 @@ class TestUtil(unittest.TestCase):
'b': '"{}"', 'b': '"{}"',
'c': '[]', 'c': '[]',
'd': '"[]"', 'd': '"[]"',
}, }
)), )),
{ {
'object': {}, 'object': {},
'objectStr': '{}', 'objectStr': '{}',
'array': [], 'array': [],
'arrayStr': '[]', 'arrayStr': '[]',
}, }
) )
def test_js_to_json_realworld(self): def test_js_to_json_realworld(self):
@ -1112,7 +1080,7 @@ class TestUtil(unittest.TestCase):
def test_js_to_json_edgecases(self): def test_js_to_json_edgecases(self):
on = js_to_json("{abc_def:'1\\'\\\\2\\\\\\'3\"4'}") on = js_to_json("{abc_def:'1\\'\\\\2\\\\\\'3\"4'}")
self.assertEqual(json.loads(on), {'abc_def': "1'\\2\\'3\"4"}) self.assertEqual(json.loads(on), {"abc_def": "1'\\2\\'3\"4"})
on = js_to_json('{"abc": true}') on = js_to_json('{"abc": true}')
self.assertEqual(json.loads(on), {'abc': True}) self.assertEqual(json.loads(on), {'abc': True})
@ -1144,9 +1112,9 @@ class TestUtil(unittest.TestCase):
'c': 0, 'c': 0,
'd': 42.42, 'd': 42.42,
'e': [], 'e': [],
'f': 'abc', 'f': "abc",
'g': '', 'g': "",
'42': 42, '42': 42
}) })
on = js_to_json('["abc", "def",]') on = js_to_json('["abc", "def",]')
@ -1221,9 +1189,6 @@ class TestUtil(unittest.TestCase):
on = js_to_json('\'"\\""\'') on = js_to_json('\'"\\""\'')
self.assertEqual(json.loads(on), '"""', msg='Unnecessary quote escape should be escaped') self.assertEqual(json.loads(on), '"""', msg='Unnecessary quote escape should be escaped')
on = js_to_json('[new Date("spam"), \'("eggs")\']')
self.assertEqual(json.loads(on), ['spam', '("eggs")'], msg='Date regex should match a single string')
def test_js_to_json_malformed(self): def test_js_to_json_malformed(self):
self.assertEqual(js_to_json('42a1'), '42"a1"') self.assertEqual(js_to_json('42a1'), '42"a1"')
self.assertEqual(js_to_json('42a-1'), '42"a"-1') self.assertEqual(js_to_json('42a-1'), '42"a"-1')
@ -1235,14 +1200,6 @@ class TestUtil(unittest.TestCase):
self.assertEqual(js_to_json('`${name}"${name}"`', {'name': '5'}), '"5\\"5\\""') self.assertEqual(js_to_json('`${name}"${name}"`', {'name': '5'}), '"5\\"5\\""')
self.assertEqual(js_to_json('`${name}`', {}), '"name"') self.assertEqual(js_to_json('`${name}`', {}), '"name"')
def test_js_to_json_common_constructors(self):
self.assertEqual(json.loads(js_to_json('new Map([["a", 5]])')), {'a': 5})
self.assertEqual(json.loads(js_to_json('Array(5, 10)')), [5, 10])
self.assertEqual(json.loads(js_to_json('new Array(15,5)')), [15, 5])
self.assertEqual(json.loads(js_to_json('new Map([Array(5, 10),new Array(15,5)])')), {'5': 10, '15': 5})
self.assertEqual(json.loads(js_to_json('new Date("123")')), '123')
self.assertEqual(json.loads(js_to_json('new Date(\'2023-10-19\')')), '2023-10-19')
def test_extract_attributes(self): def test_extract_attributes(self):
self.assertEqual(extract_attributes('<e x="y">'), {'x': 'y'}) self.assertEqual(extract_attributes('<e x="y">'), {'x': 'y'})
self.assertEqual(extract_attributes("<e x='y'>"), {'x': 'y'}) self.assertEqual(extract_attributes("<e x='y'>"), {'x': 'y'})
@ -1296,7 +1253,7 @@ class TestUtil(unittest.TestCase):
def test_args_to_str(self): def test_args_to_str(self):
self.assertEqual( self.assertEqual(
args_to_str(['foo', 'ba/r', '-baz', '2 be', '']), args_to_str(['foo', 'ba/r', '-baz', '2 be', '']),
'foo ba/r -baz \'2 be\' \'\'' if compat_os_name != 'nt' else 'foo ba/r -baz "2 be" ""', 'foo ba/r -baz \'2 be\' \'\'' if compat_os_name != 'nt' else 'foo ba/r -baz "2 be" ""'
) )
def test_parse_filesize(self): def test_parse_filesize(self):
@ -1379,10 +1336,10 @@ ffmpeg version 2.4.4 Copyright (c) 2000-2014 the FFmpeg ...'''), '2.4.4')
self.assertTrue(is_html( # UTF-8 with BOM self.assertTrue(is_html( # UTF-8 with BOM
b'\xef\xbb\xbf<!DOCTYPE foo>\xaaa')) b'\xef\xbb\xbf<!DOCTYPE foo>\xaaa'))
self.assertTrue(is_html( # UTF-16-LE self.assertTrue(is_html( # UTF-16-LE
b'\xff\xfe<\x00h\x00t\x00m\x00l\x00>\x00\xe4\x00', b'\xff\xfe<\x00h\x00t\x00m\x00l\x00>\x00\xe4\x00'
)) ))
self.assertTrue(is_html( # UTF-16-BE self.assertTrue(is_html( # UTF-16-BE
b'\xfe\xff\x00<\x00h\x00t\x00m\x00l\x00>\x00\xe4', b'\xfe\xff\x00<\x00h\x00t\x00m\x00l\x00>\x00\xe4'
)) ))
self.assertTrue(is_html( # UTF-32-BE self.assertTrue(is_html( # UTF-32-BE
b'\x00\x00\xFE\xFF\x00\x00\x00<\x00\x00\x00h\x00\x00\x00t\x00\x00\x00m\x00\x00\x00l\x00\x00\x00>\x00\x00\x00\xe4')) b'\x00\x00\xFE\xFF\x00\x00\x00<\x00\x00\x00h\x00\x00\x00t\x00\x00\x00m\x00\x00\x00l\x00\x00\x00>\x00\x00\x00\xe4'))
@ -1878,8 +1835,6 @@ Line 1
def test_clean_podcast_url(self): def test_clean_podcast_url(self):
self.assertEqual(clean_podcast_url('https://www.podtrac.com/pts/redirect.mp3/chtbl.com/track/5899E/traffic.megaphone.fm/HSW7835899191.mp3'), 'https://traffic.megaphone.fm/HSW7835899191.mp3') self.assertEqual(clean_podcast_url('https://www.podtrac.com/pts/redirect.mp3/chtbl.com/track/5899E/traffic.megaphone.fm/HSW7835899191.mp3'), 'https://traffic.megaphone.fm/HSW7835899191.mp3')
self.assertEqual(clean_podcast_url('https://play.podtrac.com/npr-344098539/edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3'), 'https://edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3') self.assertEqual(clean_podcast_url('https://play.podtrac.com/npr-344098539/edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3'), 'https://edge1.pod.npr.org/anon.npr-podcasts/podcast/npr/waitwait/2020/10/20201003_waitwait_wwdtmpodcast201003-015621a5-f035-4eca-a9a1-7c118d90bc3c.mp3')
self.assertEqual(clean_podcast_url('https://pdst.fm/e/2.gum.fm/chtbl.com/track/chrt.fm/track/34D33/pscrb.fm/rss/p/traffic.megaphone.fm/ITLLC7765286967.mp3?updated=1687282661'), 'https://traffic.megaphone.fm/ITLLC7765286967.mp3?updated=1687282661')
self.assertEqual(clean_podcast_url('https://pdst.fm/e/https://mgln.ai/e/441/www.buzzsprout.com/1121972/13019085-ep-252-the-deep-life-stack.mp3'), 'https://www.buzzsprout.com/1121972/13019085-ep-252-the-deep-life-stack.mp3')
def test_LazyList(self): def test_LazyList(self):
it = list(range(10)) it = list(range(10))
@ -1966,7 +1921,7 @@ Line 1
with locked_file(FILE, test_mode, False): with locked_file(FILE, test_mode, False):
pass pass
except (BlockingIOError, PermissionError): except (BlockingIOError, PermissionError):
if not testing_write: # FIXME: blocked read access if not testing_write: # FIXME
print(f'Known issue: Exclusive lock ({lock_mode}) blocks read access ({test_mode})') print(f'Known issue: Exclusive lock ({lock_mode}) blocks read access ({test_mode})')
continue continue
self.assertTrue(testing_write, f'{test_mode} is blocked by {lock_mode}') self.assertTrue(testing_write, f'{test_mode} is blocked by {lock_mode}')
@ -2034,7 +1989,7 @@ Line 1
msg='int fn with expected_type int should give int') msg='int fn with expected_type int should give int')
self.assertEqual(try_call(lambda: 1, expected_type=dict), None, self.assertEqual(try_call(lambda: 1, expected_type=dict), None,
msg='int fn with wrong expected_type should give None') msg='int fn with wrong expected_type should give None')
self.assertEqual(try_call(total, args=(0, 1, 0), expected_type=int), 1, self.assertEqual(try_call(total, args=(0, 1, 0, ), expected_type=int), 1,
msg='fn should accept arglist') msg='fn should accept arglist')
self.assertEqual(try_call(total, kwargs={'a': 0, 'b': 1, 'c': 0}, expected_type=int), 1, self.assertEqual(try_call(total, kwargs={'a': 0, 'b': 1, 'c': 0}, expected_type=int), 1,
msg='fn should accept kwargs') msg='fn should accept kwargs')
@ -2051,84 +2006,321 @@ Line 1
warnings.simplefilter('ignore') warnings.simplefilter('ignore')
self.assertEqual(variadic('spam', allowed_types=[dict]), 'spam') self.assertEqual(variadic('spam', allowed_types=[dict]), 'spam')
def test_http_header_dict(self): def test_traverse_obj(self):
headers = HTTPHeaderDict() _TEST_DATA = {
headers['ytdl-test'] = b'0' 100: 100,
self.assertEqual(list(headers.items()), [('Ytdl-Test', '0')]) 1.2: 1.2,
headers['ytdl-test'] = 1 'str': 'str',
self.assertEqual(list(headers.items()), [('Ytdl-Test', '1')]) 'None': None,
headers['Ytdl-test'] = '2' '...': ...,
self.assertEqual(list(headers.items()), [('Ytdl-Test', '2')]) 'urls': [
self.assertTrue('ytDl-Test' in headers) {'index': 0, 'url': 'https://www.example.com/0'},
self.assertEqual(str(headers), str(dict(headers))) {'index': 1, 'url': 'https://www.example.com/1'},
self.assertEqual(repr(headers), str(dict(headers))) ],
'data': (
headers.update({'X-dlp': 'data'}) {'index': 2},
self.assertEqual(set(headers.items()), {('Ytdl-Test', '2'), ('X-Dlp', 'data')}) {'index': 3},
self.assertEqual(dict(headers), {'Ytdl-Test': '2', 'X-Dlp': 'data'}) ),
self.assertEqual(len(headers), 2) 'dict': {},
self.assertEqual(headers.copy(), headers) }
headers2 = HTTPHeaderDict({'X-dlp': 'data3'}, **headers, **{'X-dlp': 'data2'})
self.assertEqual(set(headers2.items()), {('Ytdl-Test', '2'), ('X-Dlp', 'data2')}) # Test base functionality
self.assertEqual(len(headers2), 2) self.assertEqual(traverse_obj(_TEST_DATA, ('str',)), 'str',
headers2.clear() msg='allow tuple path')
self.assertEqual(len(headers2), 0) self.assertEqual(traverse_obj(_TEST_DATA, ['str']), 'str',
msg='allow list path')
# ensure we prefer latter headers self.assertEqual(traverse_obj(_TEST_DATA, (value for value in ("str",))), 'str',
headers3 = HTTPHeaderDict({'Ytdl-TeSt': 1}, {'Ytdl-test': 2}) msg='allow iterable path')
self.assertEqual(set(headers3.items()), {('Ytdl-Test', '2')}) self.assertEqual(traverse_obj(_TEST_DATA, 'str'), 'str',
del headers3['ytdl-tesT'] msg='single items should be treated as a path')
self.assertEqual(dict(headers3), {}) self.assertEqual(traverse_obj(_TEST_DATA, None), _TEST_DATA)
self.assertEqual(traverse_obj(_TEST_DATA, 100), 100)
headers4 = HTTPHeaderDict({'ytdl-test': 'data;'}) self.assertEqual(traverse_obj(_TEST_DATA, 1.2), 1.2)
self.assertEqual(set(headers4.items()), {('Ytdl-Test', 'data;')})
# Test Ellipsis behavior
# common mistake: strip whitespace from values self.assertCountEqual(traverse_obj(_TEST_DATA, ...),
# https://github.com/yt-dlp/yt-dlp/issues/8729 (item for item in _TEST_DATA.values() if item not in (None, {})),
headers5 = HTTPHeaderDict({'ytdl-test': ' data; '}) msg='`...` should give all non discarded values')
self.assertEqual(set(headers5.items()), {('Ytdl-Test', 'data;')}) self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', 0, ...)), _TEST_DATA['urls'][0].values(),
msg='`...` selection for dicts should select all values')
def test_extract_basic_auth(self): self.assertEqual(traverse_obj(_TEST_DATA, (..., ..., 'url')),
assert extract_basic_auth('http://:foo.bar') == ('http://:foo.bar', None) ['https://www.example.com/0', 'https://www.example.com/1'],
assert extract_basic_auth('http://foo.bar') == ('http://foo.bar', None) msg='nested `...` queries should work')
assert extract_basic_auth('http://@foo.bar') == ('http://foo.bar', 'Basic Og==') self.assertCountEqual(traverse_obj(_TEST_DATA, (..., ..., 'index')), range(4),
assert extract_basic_auth('http://:pass@foo.bar') == ('http://foo.bar', 'Basic OnBhc3M=') msg='`...` query result should be flattened')
assert extract_basic_auth('http://user:@foo.bar') == ('http://foo.bar', 'Basic dXNlcjo=') self.assertEqual(traverse_obj(iter(range(4)), ...), list(range(4)),
assert extract_basic_auth('http://user:pass@foo.bar') == ('http://foo.bar', 'Basic dXNlcjpwYXNz') msg='`...` should accept iterables')
@unittest.skipUnless(compat_os_name == 'nt', 'Only relevant on Windows') # Test function as key
def test_windows_escaping(self): self.assertEqual(traverse_obj(_TEST_DATA, lambda x, y: x == 'urls' and isinstance(y, list)),
tests = [ [_TEST_DATA['urls']],
'test"&', msg='function as query key should perform a filter based on (key, value)')
'%CMDCMDLINE:~-1%&', self.assertCountEqual(traverse_obj(_TEST_DATA, lambda _, x: isinstance(x[0], str)), {'str'},
'a\nb', msg='exceptions in the query function should be catched')
'"', self.assertEqual(traverse_obj(iter(range(4)), lambda _, x: x % 2 == 0), [0, 2],
'\\', msg='function key should accept iterables')
'!', if __debug__:
'^!', with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
'a \\ b', traverse_obj(_TEST_DATA, lambda a: ...)
'a \\" b', with self.assertRaises(Exception, msg='Wrong function signature should raise in debug'):
'a \\ b\\', traverse_obj(_TEST_DATA, lambda a, b, c: ...)
# We replace \r with \n
('a\r\ra', 'a\n\na'), # Test set as key (transformation/type, like `expected_type`)
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper}, )), ['STR'],
msg='Function in set should be a transformation')
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str})), ['str'],
msg='Type in set should be a type filter')
self.assertEqual(traverse_obj(_TEST_DATA, {dict}), _TEST_DATA,
msg='A single set should be wrapped into a path')
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str.upper})), ['STR'],
msg='Transformation function should not raise')
self.assertEqual(traverse_obj(_TEST_DATA, (..., {str_or_none})),
[item for item in map(str_or_none, _TEST_DATA.values()) if item is not None],
msg='Function in set should be a transformation')
if __debug__:
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
traverse_obj(_TEST_DATA, set())
with self.assertRaises(Exception, msg='Sets with length != 1 should raise in debug'):
traverse_obj(_TEST_DATA, {str.upper, str})
# Test `slice` as a key
_SLICE_DATA = [0, 1, 2, 3, 4]
self.assertEqual(traverse_obj(_TEST_DATA, ('dict', slice(1))), None,
msg='slice on a dictionary should not throw')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1)), _SLICE_DATA[:1],
msg='slice key should apply slice to sequence')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 2)), _SLICE_DATA[1:2],
msg='slice key should apply slice to sequence')
self.assertEqual(traverse_obj(_SLICE_DATA, slice(1, 4, 2)), _SLICE_DATA[1:4:2],
msg='slice key should apply slice to sequence')
# Test alternative paths
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'str'), 'str',
msg='multiple `paths` should be treated as alternative paths')
self.assertEqual(traverse_obj(_TEST_DATA, 'str', 100), 'str',
msg='alternatives should exit early')
self.assertEqual(traverse_obj(_TEST_DATA, 'fail', 'fail'), None,
msg='alternatives should return `default` if exhausted')
self.assertEqual(traverse_obj(_TEST_DATA, (..., 'fail'), 100), 100,
msg='alternatives should track their own branching return')
self.assertEqual(traverse_obj(_TEST_DATA, ('dict', ...), ('data', ...)), list(_TEST_DATA['data']),
msg='alternatives on empty objects should search further')
# Test branch and path nesting
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', (3, 0), 'url')), ['https://www.example.com/0'],
msg='tuple as key should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', [3, 0], 'url')), ['https://www.example.com/0'],
msg='list as key should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', ((1, 'fail'), (0, 'url')))), ['https://www.example.com/0'],
msg='double nesting in path should be treated as paths')
self.assertEqual(traverse_obj(['0', [1, 2]], [(0, 1), 0]), [1],
msg='do not fail early on branching')
self.assertCountEqual(traverse_obj(_TEST_DATA, ('urls', ((1, ('fail', 'url')), (0, 'url')))),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='tripple nesting in path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, ('urls', ('fail', (..., 'url')))),
['https://www.example.com/0', 'https://www.example.com/1'],
msg='ellipsis as branch path start gets flattened')
# Test dictionary as key
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}), {0: 100, 1: 1.2},
msg='dict key should result in a dict with the same keys')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', 0, 'url')}),
{0: 'https://www.example.com/0'},
msg='dict key should allow paths')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', (3, 0), 'url')}),
{0: ['https://www.example.com/0']},
msg='tuple in dict path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', ((1, 'fail'), (0, 'url')))}),
{0: ['https://www.example.com/0']},
msg='double nesting in dict path should be treated as paths')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('urls', ((1, ('fail', 'url')), (0, 'url')))}),
{0: ['https://www.example.com/1', 'https://www.example.com/0']},
msg='tripple nesting in dict path should be treated as branches')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}), {},
msg='remove `None` values when top level dict key fails')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'fail'}, default=...), {0: ...},
msg='use `default` if key fails and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}), {},
msg='remove empty values when dict key')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 'dict'}, default=...), {0: ...},
msg='use `default` when dict key and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}), {},
msg='remove empty values when nested dict key fails')
self.assertEqual(traverse_obj(None, {0: 'fail'}), {},
msg='default to dict if pruned')
self.assertEqual(traverse_obj(None, {0: 'fail'}, default=...), {0: ...},
msg='default to dict if pruned and default is given')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 'fail'}}, default=...), {0: {0: ...}},
msg='use nested `default` when nested dict key fails and `default`')
self.assertEqual(traverse_obj(_TEST_DATA, {0: ('dict', ...)}), {},
msg='remove key if branch in dict key not successful')
# Testing default parameter behavior
_DEFAULT_DATA = {'None': None, 'int': 0, 'list': []}
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail'), None,
msg='default value should be `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail', 'fail', default=...), ...,
msg='chained fails should result in default')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'None', 'int'), 0,
msg='should not short cirquit on `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'fail', default=1), 1,
msg='invalid dict key should result in `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, 'None', default=1), 1,
msg='`None` is a deliberate sentinel and should become `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', 10)), None,
msg='`IndexError` should result in `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=1), 1,
msg='if branched but not successful return `default` if defined, not `[]`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail'), default=None), None,
msg='if branched but not successful return `default` even if `default` is `None`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, (..., 'fail')), [],
msg='if branched but not successful return `[]`, not `default`')
self.assertEqual(traverse_obj(_DEFAULT_DATA, ('list', ...)), [],
msg='if branched but object is empty return `[]`, not `default`')
self.assertEqual(traverse_obj(None, ...), [],
msg='if branched but object is `None` return `[]`, not `default`')
self.assertEqual(traverse_obj({0: None}, (0, ...)), [],
msg='if branched but state is `None` return `[]`, not `default`')
branching_paths = [
('fail', ...),
(..., 'fail'),
100 * ('fail',) + (...,),
(...,) + 100 * ('fail',),
] ]
for branching_path in branching_paths:
def run_shell(args): self.assertEqual(traverse_obj({}, branching_path), [],
stdout, stderr, error = Popen.run( msg='if branched but state is `None`, return `[]` (not `default`)')
args, text=True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.assertEqual(traverse_obj({}, 'fail', branching_path), [],
assert not stderr msg='if branching in last alternative and previous did not match, return `[]` (not `default`)')
assert not error self.assertEqual(traverse_obj({0: 'x'}, 0, branching_path), 'x',
return stdout msg='if branching in last alternative and previous did match, return single value')
self.assertEqual(traverse_obj({0: 'x'}, branching_path, 0), 'x',
for argument in tests: msg='if branching in first alternative and non-branching path does match, return single value')
if isinstance(argument, str): self.assertEqual(traverse_obj({}, branching_path, 'fail'), None,
expected = argument msg='if branching in first alternative and non-branching path does not match, return `default`')
else:
argument, expected = argument # Testing expected_type behavior
_EXPECTED_TYPE_DATA = {'str': 'str', 'int': 0}
args = [sys.executable, '-c', 'import sys; print(end=sys.argv[1])', argument, 'end'] self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=str),
assert run_shell(args) == expected 'str', msg='accept matching `expected_type` type')
assert run_shell(shell_quote(args, shell=True)) == expected self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=int),
None, msg='reject non matching `expected_type` type')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'int', expected_type=lambda x: str(x)),
'0', msg='transform type using type function')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, 'str', expected_type=lambda _: 1 / 0),
None, msg='wrap expected_type fuction in try_call')
self.assertEqual(traverse_obj(_EXPECTED_TYPE_DATA, ..., expected_type=str),
['str'], msg='eliminate items that expected_type fails on')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2}, expected_type=int),
{0: 100}, msg='type as expected_type should filter dict values')
self.assertEqual(traverse_obj(_TEST_DATA, {0: 100, 1: 1.2, 2: 'None'}, expected_type=str_or_none),
{0: '100', 1: '1.2'}, msg='function as expected_type should transform dict values')
self.assertEqual(traverse_obj(_TEST_DATA, ({0: 1.2}, 0, {int_or_none}), expected_type=int),
1, msg='expected_type should not filter non final dict values')
self.assertEqual(traverse_obj(_TEST_DATA, {0: {0: 100, 1: 'str'}}, expected_type=int),
{0: {0: 100}}, msg='expected_type should transform deep dict values')
self.assertEqual(traverse_obj(_TEST_DATA, [({0: '...'}, {0: '...'})], expected_type=type(...)),
[{0: ...}, {0: ...}], msg='expected_type should transform branched dict values')
self.assertEqual(traverse_obj({1: {3: 4}}, [(1, 2), 3], expected_type=int),
[4], msg='expected_type regression for type matching in tuple branching')
self.assertEqual(traverse_obj(_TEST_DATA, ['data', ...], expected_type=int),
[], msg='expected_type regression for type matching in dict result')
# Test get_all behavior
_GET_ALL_DATA = {'key': [0, 1, 2]}
self.assertEqual(traverse_obj(_GET_ALL_DATA, ('key', ...), get_all=False), 0,
msg='if not `get_all`, return only first matching value')
self.assertEqual(traverse_obj(_GET_ALL_DATA, ..., get_all=False), [0, 1, 2],
msg='do not overflatten if not `get_all`')
# Test casesense behavior
_CASESENSE_DATA = {
'KeY': 'value0',
0: {
'KeY': 'value1',
0: {'KeY': 'value2'},
},
}
self.assertEqual(traverse_obj(_CASESENSE_DATA, 'key'), None,
msg='dict keys should be case sensitive unless `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, 'keY',
casesense=False), 'value0',
msg='allow non matching key case if `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, (0, ('keY',)),
casesense=False), ['value1'],
msg='allow non matching key case in branch if `casesense`')
self.assertEqual(traverse_obj(_CASESENSE_DATA, (0, ((0, 'keY'),)),
casesense=False), ['value2'],
msg='allow non matching key case in branch path if `casesense`')
# Test traverse_string behavior
_TRAVERSE_STRING_DATA = {'str': 'str', 1.2: 1.2}
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0)), None,
msg='do not traverse into string if not `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', 0),
traverse_string=True), 's',
msg='traverse into string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, (1.2, 1),
traverse_string=True), '.',
msg='traverse into converted data if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', ...),
traverse_string=True), 'str',
msg='`...` should result in string (same value) if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', slice(0, None, 2)),
traverse_string=True), 'sr',
msg='`slice` should result in string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', lambda i, v: i or v == "s"),
traverse_string=True), 'str',
msg='function should result in string if `traverse_string`')
self.assertEqual(traverse_obj(_TRAVERSE_STRING_DATA, ('str', (0, 2)),
traverse_string=True), ['s', 'r'],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, ...), traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, lambda x, y: True), traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
self.assertEqual(traverse_obj({}, (0, slice(1)), traverse_string=True), [],
msg='branching should result in list if `traverse_string`')
# Test is_user_input behavior
_IS_USER_INPUT_DATA = {'range8': list(range(8))}
self.assertEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', '3'),
is_user_input=True), 3,
msg='allow for string indexing if `is_user_input`')
self.assertCountEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', '3:'),
is_user_input=True), tuple(range(8))[3:],
msg='allow for string slice if `is_user_input`')
self.assertCountEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', ':4:2'),
is_user_input=True), tuple(range(8))[:4:2],
msg='allow step in string slice if `is_user_input`')
self.assertCountEqual(traverse_obj(_IS_USER_INPUT_DATA, ('range8', ':'),
is_user_input=True), range(8),
msg='`:` should be treated as `...` if `is_user_input`')
with self.assertRaises(TypeError, msg='too many params should result in error'):
traverse_obj(_IS_USER_INPUT_DATA, ('range8', ':::'), is_user_input=True)
# Test re.Match as input obj
mobj = re.fullmatch(r'0(12)(?P<group>3)(4)?', '0123')
self.assertEqual(traverse_obj(mobj, ...), [x for x in mobj.groups() if x is not None],
msg='`...` on a `re.Match` should give its `groups()`')
self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 2)), ['0123', '3'],
msg='function on a `re.Match` should give groupno, value starting at 0')
self.assertEqual(traverse_obj(mobj, 'group'), '3',
msg='str key on a `re.Match` should give group with that name')
self.assertEqual(traverse_obj(mobj, 2), '3',
msg='int key on a `re.Match` should give group with that name')
self.assertEqual(traverse_obj(mobj, 'gRoUp', casesense=False), '3',
msg='str key on a `re.Match` should respect casesense')
self.assertEqual(traverse_obj(mobj, 'fail'), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 'gRoUpS', casesense=False), None,
msg='failing str key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, 8), None,
msg='failing int key on a `re.Match` should return `default`')
self.assertEqual(traverse_obj(mobj, lambda k, _: k in (0, 'group')), ['0123', '3'],
msg='function on a `re.Match` should give group name as well')
if __name__ == '__main__': if __name__ == '__main__':

@ -1,439 +0,0 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
import time
import pytest
from test.helper import verify_address_availability
from yt_dlp.networking.common import Features, DEFAULT_TIMEOUT
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
import http.client
import http.cookiejar
import http.server
import json
import random
import ssl
import threading
from yt_dlp import socks, traverse_obj
from yt_dlp.cookies import YoutubeDLCookieJar
from yt_dlp.dependencies import websockets
from yt_dlp.networking import Request
from yt_dlp.networking.exceptions import (
CertificateVerifyError,
HTTPError,
ProxyError,
RequestError,
SSLError,
TransportError,
)
from yt_dlp.utils.networking import HTTPHeaderDict
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
def websocket_handler(websocket):
for message in websocket:
if isinstance(message, bytes):
if message == b'bytes':
return websocket.send('2')
elif isinstance(message, str):
if message == 'headers':
return websocket.send(json.dumps(dict(websocket.request.headers)))
elif message == 'path':
return websocket.send(websocket.request.path)
elif message == 'source_address':
return websocket.send(websocket.remote_address[0])
elif message == 'str':
return websocket.send('1')
return websocket.send(message)
def process_request(self, request):
if request.path.startswith('/gen_'):
status = http.HTTPStatus(int(request.path[5:]))
if 300 <= status.value <= 300:
return websockets.http11.Response(
status.value, status.phrase, websockets.datastructures.Headers([('Location', '/')]), b'')
return self.protocol.reject(status.value, status.phrase)
return self.protocol.accept(request)
def create_websocket_server(**ws_kwargs):
import websockets.sync.server
wsd = websockets.sync.server.serve(
websocket_handler, '127.0.0.1', 0,
process_request=process_request, open_timeout=2, **ws_kwargs)
ws_port = wsd.socket.getsockname()[1]
ws_server_thread = threading.Thread(target=wsd.serve_forever)
ws_server_thread.daemon = True
ws_server_thread.start()
return ws_server_thread, ws_port
def create_ws_websocket_server():
return create_websocket_server()
def create_wss_websocket_server():
certfn = os.path.join(TEST_DIR, 'testcert.pem')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
return create_websocket_server(ssl_context=sslctx)
MTLS_CERT_DIR = os.path.join(TEST_DIR, 'testdata', 'certificate')
def create_mtls_wss_websocket_server():
certfn = os.path.join(TEST_DIR, 'testcert.pem')
cacertfn = os.path.join(MTLS_CERT_DIR, 'ca.crt')
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.verify_mode = ssl.CERT_REQUIRED
sslctx.load_verify_locations(cafile=cacertfn)
sslctx.load_cert_chain(certfn, None)
return create_websocket_server(ssl_context=sslctx)
def ws_validate_and_send(rh, req):
rh.validate(req)
max_tries = 3
for i in range(max_tries):
try:
return rh.send(req)
except TransportError as e:
if i < (max_tries - 1) and 'connection closed during handshake' in str(e):
# websockets server sometimes hangs on new connections
continue
raise
@pytest.mark.skipif(not websockets, reason='websockets must be installed to test websocket request handlers')
@pytest.mark.parametrize('handler', ['Websockets'], indirect=True)
class TestWebsSocketRequestHandlerConformance:
@classmethod
def setup_class(cls):
cls.ws_thread, cls.ws_port = create_ws_websocket_server()
cls.ws_base_url = f'ws://127.0.0.1:{cls.ws_port}'
cls.wss_thread, cls.wss_port = create_wss_websocket_server()
cls.wss_base_url = f'wss://127.0.0.1:{cls.wss_port}'
cls.bad_wss_thread, cls.bad_wss_port = create_websocket_server(ssl_context=ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER))
cls.bad_wss_host = f'wss://127.0.0.1:{cls.bad_wss_port}'
cls.mtls_wss_thread, cls.mtls_wss_port = create_mtls_wss_websocket_server()
cls.mtls_wss_base_url = f'wss://127.0.0.1:{cls.mtls_wss_port}'
def test_basic_websockets(self, handler):
with handler() as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
assert 'upgrade' in ws.headers
assert ws.status == 101
ws.send('foo')
assert ws.recv() == 'foo'
ws.close()
# https://www.rfc-editor.org/rfc/rfc6455.html#section-5.6
@pytest.mark.parametrize('msg,opcode', [('str', 1), (b'bytes', 2)])
def test_send_types(self, handler, msg, opcode):
with handler() as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send(msg)
assert int(ws.recv()) == opcode
ws.close()
def test_verify_cert(self, handler):
with handler() as rh:
with pytest.raises(CertificateVerifyError):
ws_validate_and_send(rh, Request(self.wss_base_url))
with handler(verify=False) as rh:
ws = ws_validate_and_send(rh, Request(self.wss_base_url))
assert ws.status == 101
ws.close()
def test_ssl_error(self, handler):
with handler(verify=False) as rh:
with pytest.raises(SSLError, match=r'ssl(?:v3|/tls) alert handshake failure') as exc_info:
ws_validate_and_send(rh, Request(self.bad_wss_host))
assert not issubclass(exc_info.type, CertificateVerifyError)
@pytest.mark.parametrize('path,expected', [
# Unicode characters should be encoded with uppercase percent-encoding
('/中文', '/%E4%B8%AD%E6%96%87'),
# don't normalize existing percent encodings
('/%c7%9f', '/%c7%9f'),
])
def test_percent_encode(self, handler, path, expected):
with handler() as rh:
ws = ws_validate_and_send(rh, Request(f'{self.ws_base_url}{path}'))
ws.send('path')
assert ws.recv() == expected
assert ws.status == 101
ws.close()
def test_remove_dot_segments(self, handler):
with handler() as rh:
# This isn't a comprehensive test,
# but it should be enough to check whether the handler is removing dot segments
ws = ws_validate_and_send(rh, Request(f'{self.ws_base_url}/a/b/./../../test'))
assert ws.status == 101
ws.send('path')
assert ws.recv() == '/test'
ws.close()
# We are restricted to known HTTP status codes in http.HTTPStatus
# Redirects are not supported for websockets
@pytest.mark.parametrize('status', (200, 204, 301, 302, 303, 400, 500, 511))
def test_raise_http_error(self, handler, status):
with handler() as rh:
with pytest.raises(HTTPError) as exc_info:
ws_validate_and_send(rh, Request(f'{self.ws_base_url}/gen_{status}'))
assert exc_info.value.status == status
@pytest.mark.parametrize('params,extensions', [
({'timeout': sys.float_info.min}, {}),
({}, {'timeout': sys.float_info.min}),
])
def test_read_timeout(self, handler, params, extensions):
with handler(**params) as rh:
with pytest.raises(TransportError):
ws_validate_and_send(rh, Request(self.ws_base_url, extensions=extensions))
def test_connect_timeout(self, handler):
# nothing should be listening on this port
connect_timeout_url = 'ws://10.255.255.255'
with handler(timeout=0.01) as rh, pytest.raises(TransportError):
now = time.time()
ws_validate_and_send(rh, Request(connect_timeout_url))
assert time.time() - now < DEFAULT_TIMEOUT
# Per request timeout, should override handler timeout
request = Request(connect_timeout_url, extensions={'timeout': 0.01})
with handler() as rh, pytest.raises(TransportError):
now = time.time()
ws_validate_and_send(rh, request)
assert time.time() - now < DEFAULT_TIMEOUT
def test_cookies(self, handler):
cookiejar = YoutubeDLCookieJar()
cookiejar.set_cookie(http.cookiejar.Cookie(
version=0, name='test', value='ytdlp', port=None, port_specified=False,
domain='127.0.0.1', domain_specified=True, domain_initial_dot=False, path='/',
path_specified=True, secure=False, expires=None, discard=False, comment=None,
comment_url=None, rest={}))
with handler(cookiejar=cookiejar) as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('headers')
assert json.loads(ws.recv())['cookie'] == 'test=ytdlp'
ws.close()
with handler() as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('headers')
assert 'cookie' not in json.loads(ws.recv())
ws.close()
ws = ws_validate_and_send(rh, Request(self.ws_base_url, extensions={'cookiejar': cookiejar}))
ws.send('headers')
assert json.loads(ws.recv())['cookie'] == 'test=ytdlp'
ws.close()
def test_source_address(self, handler):
source_address = f'127.0.0.{random.randint(5, 255)}'
verify_address_availability(source_address)
with handler(source_address=source_address) as rh:
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('source_address')
assert source_address == ws.recv()
ws.close()
def test_response_url(self, handler):
with handler() as rh:
url = f'{self.ws_base_url}/something'
ws = ws_validate_and_send(rh, Request(url))
assert ws.url == url
ws.close()
def test_request_headers(self, handler):
with handler(headers=HTTPHeaderDict({'test1': 'test', 'test2': 'test2'})) as rh:
# Global Headers
ws = ws_validate_and_send(rh, Request(self.ws_base_url))
ws.send('headers')
headers = HTTPHeaderDict(json.loads(ws.recv()))
assert headers['test1'] == 'test'
ws.close()
# Per request headers, merged with global
ws = ws_validate_and_send(rh, Request(
self.ws_base_url, headers={'test2': 'changed', 'test3': 'test3'}))
ws.send('headers')
headers = HTTPHeaderDict(json.loads(ws.recv()))
assert headers['test1'] == 'test'
assert headers['test2'] == 'changed'
assert headers['test3'] == 'test3'
ws.close()
@pytest.mark.parametrize('client_cert', (
{'client_certificate': os.path.join(MTLS_CERT_DIR, 'clientwithkey.crt')},
{
'client_certificate': os.path.join(MTLS_CERT_DIR, 'client.crt'),
'client_certificate_key': os.path.join(MTLS_CERT_DIR, 'client.key'),
},
{
'client_certificate': os.path.join(MTLS_CERT_DIR, 'clientwithencryptedkey.crt'),
'client_certificate_password': 'foobar',
},
{
'client_certificate': os.path.join(MTLS_CERT_DIR, 'client.crt'),
'client_certificate_key': os.path.join(MTLS_CERT_DIR, 'clientencrypted.key'),
'client_certificate_password': 'foobar',
},
))
def test_mtls(self, handler, client_cert):
with handler(
# Disable client-side validation of unacceptable self-signed testcert.pem
# The test is of a check on the server side, so unaffected
verify=False,
client_cert=client_cert,
) as rh:
ws_validate_and_send(rh, Request(self.mtls_wss_base_url)).close()
def test_request_disable_proxy(self, handler):
for proxy_proto in handler._SUPPORTED_PROXY_SCHEMES or ['ws']:
# Given handler is configured with a proxy
with handler(proxies={'ws': f'{proxy_proto}://10.255.255.255'}, timeout=5) as rh:
# When a proxy is explicitly set to None for the request
ws = ws_validate_and_send(rh, Request(self.ws_base_url, proxies={'http': None}))
# Then no proxy should be used
assert ws.status == 101
ws.close()
@pytest.mark.skip_handlers_if(
lambda _, handler: Features.NO_PROXY not in handler._SUPPORTED_FEATURES, 'handler does not support NO_PROXY')
def test_noproxy(self, handler):
for proxy_proto in handler._SUPPORTED_PROXY_SCHEMES or ['ws']:
# Given the handler is configured with a proxy
with handler(proxies={'ws': f'{proxy_proto}://10.255.255.255'}, timeout=5) as rh:
for no_proxy in (f'127.0.0.1:{self.ws_port}', '127.0.0.1', 'localhost'):
# When request no proxy includes the request url host
ws = ws_validate_and_send(rh, Request(self.ws_base_url, proxies={'no': no_proxy}))
# Then the proxy should not be used
assert ws.status == 101
ws.close()
@pytest.mark.skip_handlers_if(
lambda _, handler: Features.ALL_PROXY not in handler._SUPPORTED_FEATURES, 'handler does not support ALL_PROXY')
def test_allproxy(self, handler):
supported_proto = traverse_obj(handler._SUPPORTED_PROXY_SCHEMES, 0, default='ws')
# This is a bit of a hacky test, but it should be enough to check whether the handler is using the proxy.
# 0.1s might not be enough of a timeout if proxy is not used in all cases, but should still get failures.
with handler(proxies={'all': f'{supported_proto}://10.255.255.255'}, timeout=0.1) as rh:
with pytest.raises(TransportError):
ws_validate_and_send(rh, Request(self.ws_base_url)).close()
with handler(timeout=0.1) as rh:
with pytest.raises(TransportError):
ws_validate_and_send(
rh, Request(self.ws_base_url, proxies={'all': f'{supported_proto}://10.255.255.255'})).close()
def create_fake_ws_connection(raised):
import websockets.sync.client
class FakeWsConnection(websockets.sync.client.ClientConnection):
def __init__(self, *args, **kwargs):
class FakeResponse:
body = b''
headers = {}
status_code = 101
reason_phrase = 'test'
self.response = FakeResponse()
def send(self, *args, **kwargs):
raise raised()
def recv(self, *args, **kwargs):
raise raised()
def close(self, *args, **kwargs):
return
return FakeWsConnection()
@pytest.mark.parametrize('handler', ['Websockets'], indirect=True)
class TestWebsocketsRequestHandler:
@pytest.mark.parametrize('raised,expected', [
# https://websockets.readthedocs.io/en/stable/reference/exceptions.html
(lambda: websockets.exceptions.InvalidURI(msg='test', uri='test://'), RequestError),
# Requires a response object. Should be covered by HTTP error tests.
# (lambda: websockets.exceptions.InvalidStatus(), TransportError),
(lambda: websockets.exceptions.InvalidHandshake(), TransportError),
# These are subclasses of InvalidHandshake
(lambda: websockets.exceptions.InvalidHeader(name='test'), TransportError),
(lambda: websockets.exceptions.NegotiationError(), TransportError),
# Catch-all
(lambda: websockets.exceptions.WebSocketException(), TransportError),
(lambda: TimeoutError(), TransportError),
# These may be raised by our create_connection implementation, which should also be caught
(lambda: OSError(), TransportError),
(lambda: ssl.SSLError(), SSLError),
(lambda: ssl.SSLCertVerificationError(), CertificateVerifyError),
(lambda: socks.ProxyError(), ProxyError),
])
def test_request_error_mapping(self, handler, monkeypatch, raised, expected):
import websockets.sync.client
import yt_dlp.networking._websockets
with handler() as rh:
def fake_connect(*args, **kwargs):
raise raised()
monkeypatch.setattr(yt_dlp.networking._websockets, 'create_connection', lambda *args, **kwargs: None)
monkeypatch.setattr(websockets.sync.client, 'connect', fake_connect)
with pytest.raises(expected) as exc_info:
rh.send(Request('ws://fake-url'))
assert exc_info.type is expected
@pytest.mark.parametrize('raised,expected,match', [
# https://websockets.readthedocs.io/en/stable/reference/sync/client.html#websockets.sync.client.ClientConnection.send
(lambda: websockets.exceptions.ConnectionClosed(None, None), TransportError, None),
(lambda: RuntimeError(), TransportError, None),
(lambda: TimeoutError(), TransportError, None),
(lambda: TypeError(), RequestError, None),
(lambda: socks.ProxyError(), ProxyError, None),
# Catch-all
(lambda: websockets.exceptions.WebSocketException(), TransportError, None),
])
def test_ws_send_error_mapping(self, handler, monkeypatch, raised, expected, match):
from yt_dlp.networking._websockets import WebsocketsResponseAdapter
ws = WebsocketsResponseAdapter(create_fake_ws_connection(raised), url='ws://fake-url')
with pytest.raises(expected, match=match) as exc_info:
ws.send('test')
assert exc_info.type is expected
@pytest.mark.parametrize('raised,expected,match', [
# https://websockets.readthedocs.io/en/stable/reference/sync/client.html#websockets.sync.client.ClientConnection.recv
(lambda: websockets.exceptions.ConnectionClosed(None, None), TransportError, None),
(lambda: RuntimeError(), TransportError, None),
(lambda: TimeoutError(), TransportError, None),
(lambda: socks.ProxyError(), ProxyError, None),
# Catch-all
(lambda: websockets.exceptions.WebSocketException(), TransportError, None),
])
def test_ws_recv_error_mapping(self, handler, monkeypatch, raised, expected, match):
from yt_dlp.networking._websockets import WebsocketsResponseAdapter
ws = WebsocketsResponseAdapter(create_fake_ws_connection(raised), url='ws://fake-url')
with pytest.raises(expected, match=match) as exc_info:
ws.recv()
assert exc_info.type is expected

@ -13,7 +13,7 @@ from yt_dlp.extractor import YoutubeIE
class TestYoutubeMisc(unittest.TestCase): class TestYoutubeMisc(unittest.TestCase):
def test_youtube_extract(self): def test_youtube_extract(self):
assertExtractId = lambda url, video_id: self.assertEqual(YoutubeIE.extract_id(url), video_id) assertExtractId = lambda url, id: self.assertEqual(YoutubeIE.extract_id(url), id)
assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc') assertExtractId('http://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc') assertExtractId('https://www.youtube.com/watch?&v=BaW_jenozKc', 'BaW_jenozKc')
assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc') assertExtractId('https://www.youtube.com/watch?feature=player_embedded&v=BaW_jenozKc', 'BaW_jenozKc')

@ -46,17 +46,17 @@ _SIG_TESTS = [
( (
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflBb0OQx.js', 'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflBb0OQx.js',
84, 84,
'123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQ0STUVWXYZ!"#$%&\'()*+,@./:;<=>', '123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQ0STUVWXYZ!"#$%&\'()*+,@./:;<=>'
), ),
( (
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vfl9FYC6l.js', 'https://s.ytimg.com/yts/jsbin/html5player-en_US-vfl9FYC6l.js',
83, 83,
'123456789abcdefghijklmnopqr0tuvwxyzABCDETGHIJKLMNOPQRS>UVWXYZ!"#$%&\'()*+,-./:;<=F', '123456789abcdefghijklmnopqr0tuvwxyzABCDETGHIJKLMNOPQRS>UVWXYZ!"#$%&\'()*+,-./:;<=F'
), ),
( (
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflCGk6yw/html5player.js', 'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflCGk6yw/html5player.js',
'4646B5181C6C3020DF1D9C7FCFEA.AD80ABF70C39BD369CCCAE780AFBB98FA6B6CB42766249D9488C288', '4646B5181C6C3020DF1D9C7FCFEA.AD80ABF70C39BD369CCCAE780AFBB98FA6B6CB42766249D9488C288',
'82C8849D94266724DC6B6AF89BBFA087EACCD963.B93C07FBA084ACAEFCF7C9D1FD0203C6C1815B6B', '82C8849D94266724DC6B6AF89BBFA087EACCD963.B93C07FBA084ACAEFCF7C9D1FD0203C6C1815B6B'
), ),
( (
'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflKjOTVq/html5player.js', 'https://s.ytimg.com/yts/jsbin/html5player-en_US-vflKjOTVq/html5player.js',
@ -159,14 +159,6 @@ _NSIG_TESTS = [
'https://www.youtube.com/s/player/8c7583ff/player_ias.vflset/en_US/base.js', 'https://www.youtube.com/s/player/8c7583ff/player_ias.vflset/en_US/base.js',
'1wWCVpRR96eAmMI87L', 'KSkWAVv1ZQxC3A', '1wWCVpRR96eAmMI87L', 'KSkWAVv1ZQxC3A',
), ),
(
'https://www.youtube.com/s/player/b7910ca8/player_ias.vflset/en_US/base.js',
'_hXMCwMt9qE310D', 'LoZMgkkofRMCZQ',
),
(
'https://www.youtube.com/s/player/590f65a6/player_ias.vflset/en_US/base.js',
'1tm7-g_A9zsI8_Lay_', 'xI4Vem4Put_rOg',
),
] ]
@ -211,7 +203,7 @@ class TestSignature(unittest.TestCase):
def t_factory(name, sig_func, url_pattern): def t_factory(name, sig_func, url_pattern):
def make_tfunc(url, sig_input, expected_sig): def make_tfunc(url, sig_input, expected_sig):
m = url_pattern.match(url) m = url_pattern.match(url)
assert m, f'{url!r} should follow URL format' assert m, '%r should follow URL format' % url
test_id = m.group('id') test_id = m.group('id')
def test_func(self): def test_func(self):

Some files were not shown because too many files have changed in this diff Show More

Loading…
Cancel
Save