mirror of
https://github.com/yt-dlp/yt-dlp.git
synced 2024-11-14 20:38:11 -05:00
b827ee921f
611 lines
22 KiB
Python
# coding: utf-8
from __future__ import unicode_literals

import functools
import itertools
import operator
import re

from .common import InfoExtractor
from ..compat import (
    compat_HTTPError,
    compat_str,
    compat_urllib_request,
)
from .openload import PhantomJSwrapper
from ..utils import (
    determine_ext,
    ExtractorError,
    int_or_none,
    NO_DEFAULT,
    orderedSet,
    remove_quotes,
    str_to_int,
    url_or_none,
)


class PornHubBaseIE(InfoExtractor):
    def _download_webpage_handle(self, *args, **kwargs):
        def dl(*args, **kwargs):
            return super(PornHubBaseIE, self)._download_webpage_handle(*args, **kwargs)

        webpage, urlh = dl(*args, **kwargs)

        # If the page served an anti-bot challenge instead of the real
        # content, run it through PhantomJS and retry the download
        if any(re.search(p, webpage) for p in (
                r'<body\b[^>]+\bonload=["\']go\(\)',
                r'document\.cookie\s*=\s*["\']RNKEY=',
                r'document\.location\.reload\(true\)')):
            url_or_request = args[0]
            url = (url_or_request.get_full_url()
                   if isinstance(url_or_request, compat_urllib_request.Request)
                   else url_or_request)
            phantom = PhantomJSwrapper(self, required_version='2.0')
            phantom.get(url, html=webpage)
            webpage, urlh = dl(*args, **kwargs)

        return webpage, urlh


class PornHubIE(PornHubBaseIE):
    IE_DESC = 'PornHub and Thumbzilla'
    _VALID_URL = r'''(?x)
                    https?://
                        (?:
                            (?:[^/]+\.)?(?P<host>pornhub(?:premium)?\.(?:com|net))/(?:(?:view_video\.php|video/show)\?viewkey=|embed/)|
                            (?:www\.)?thumbzilla\.com/video/
                        )
                        (?P<id>[\da-z]+)
                    '''
    _TESTS = [{
        'url': 'http://www.pornhub.com/view_video.php?viewkey=648719015',
        'md5': '1e19b41231a02eba417839222ac9d58e',
        'info_dict': {
            'id': '648719015',
            'ext': 'mp4',
            'title': 'Seductive Indian beauty strips down and fingers her pink pussy',
            'uploader': 'Babes',
            'upload_date': '20130628',
            'duration': 361,
            'view_count': int,
            'like_count': int,
            'dislike_count': int,
            'comment_count': int,
            'age_limit': 18,
            'tags': list,
            'categories': list,
        },
    }, {
        # non-ASCII title
        'url': 'http://www.pornhub.com/view_video.php?viewkey=1331683002',
        'info_dict': {
            'id': '1331683002',
            'ext': 'mp4',
            'title': '重庆婷婷女王足交',
            'uploader': 'Unknown',
            'upload_date': '20150213',
            'duration': 1753,
            'view_count': int,
            'like_count': int,
            'dislike_count': int,
            'comment_count': int,
            'age_limit': 18,
            'tags': list,
            'categories': list,
        },
        'params': {
            'skip_download': True,
        },
    }, {
        # subtitles
        'url': 'https://www.pornhub.com/view_video.php?viewkey=ph5af5fef7c2aa7',
        'info_dict': {
            'id': 'ph5af5fef7c2aa7',
            'ext': 'mp4',
            'title': 'BFFS - Cute Teen Girls Share Cock On the Floor',
            'uploader': 'BFFs',
            'duration': 622,
            'view_count': int,
            'like_count': int,
            'dislike_count': int,
            'comment_count': int,
            'age_limit': 18,
            'tags': list,
            'categories': list,
            'subtitles': {
                'en': [{
                    'ext': 'srt',
                }],
            },
        },
        'params': {
            'skip_download': True,
        },
    }, {
        'url': 'http://www.pornhub.com/view_video.php?viewkey=ph557bbb6676d2d',
        'only_matching': True,
    }, {
        # removed at the request of cam4.com
        'url': 'http://fr.pornhub.com/view_video.php?viewkey=ph55ca2f9760862',
        'only_matching': True,
    }, {
        # removed at the request of the copyright owner
        'url': 'http://www.pornhub.com/view_video.php?viewkey=788152859',
        'only_matching': True,
    }, {
        # removed by uploader
        'url': 'http://www.pornhub.com/view_video.php?viewkey=ph572716d15a111',
        'only_matching': True,
    }, {
        # private video
        'url': 'http://www.pornhub.com/view_video.php?viewkey=ph56fd731fce6b7',
        'only_matching': True,
    }, {
        'url': 'https://www.thumbzilla.com/video/ph56c6114abd99a/horny-girlfriend-sex',
        'only_matching': True,
    }, {
        'url': 'http://www.pornhub.com/video/show?viewkey=648719015',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.net/view_video.php?viewkey=203640933',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhubpremium.com/view_video.php?viewkey=ph5e4acdae54a82',
        'only_matching': True,
    }]

    @staticmethod
    def _extract_urls(webpage):
        return re.findall(
            r'<iframe[^>]+?src=["\'](?P<url>(?:https?:)?//(?:www\.)?pornhub\.(?:com|net)/embed/[\da-z]+)',
            webpage)

    def _extract_count(self, pattern, webpage, name):
        return str_to_int(self._search_regex(
            pattern, webpage, '%s count' % name, fatal=False))

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        host = mobj.group('host') or 'pornhub.com'
        video_id = mobj.group('id')

        if 'premium' in host:
            if not self._downloader.params.get('cookiefile'):
                raise ExtractorError(
                    'PornHub Premium requires authentication.'
                    ' You may want to use --cookies.',
                    expected=True)

        self._set_cookie(host, 'age_verified', '1')

        def dl_webpage(platform):
            self._set_cookie(host, 'platform', platform)
            return self._download_webpage(
                'https://www.%s/view_video.php?viewkey=%s' % (host, video_id),
                video_id, 'Downloading %s webpage' % platform)

        webpage = dl_webpage('pc')

        error_msg = self._html_search_regex(
            r'(?s)<div[^>]+class=(["\'])(?:(?!\1).)*\b(?:removed|userMessageSection)\b(?:(?!\1).)*\1[^>]*>(?P<error>.+?)</div>',
            webpage, 'error message', default=None, group='error')
        if error_msg:
            error_msg = re.sub(r'\s+', ' ', error_msg)
            raise ExtractorError(
                'PornHub said: %s' % error_msg,
                expected=True, video_id=video_id)

        # video_title from flashvars contains whitespace instead of non-ASCII (see
        # http://www.pornhub.com/view_video.php?viewkey=1331683002), not relying
        # on that anymore.
        title = self._html_search_meta(
            'twitter:title', webpage, default=None) or self._html_search_regex(
            (r'(?s)<h1[^>]+class=["\']title["\'][^>]*>(?P<title>.+?)</h1>',
             r'<div[^>]+data-video-title=(["\'])(?P<title>(?:(?!\1).)+)\1',
             r'shareTitle["\']\s*[=:]\s*(["\'])(?P<title>(?:(?!\1).)+)\1'),
            webpage, 'title', group='title')

        video_urls = []
        video_urls_set = set()
        subtitles = {}

        flashvars = self._parse_json(
            self._search_regex(
                r'var\s+flashvars_\d+\s*=\s*({.+?});', webpage, 'flashvars', default='{}'),
            video_id)
        if flashvars:
            subtitle_url = url_or_none(flashvars.get('closedCaptionsFile'))
            if subtitle_url:
                subtitles.setdefault('en', []).append({
                    'url': subtitle_url,
                    'ext': 'srt',
                })
            thumbnail = flashvars.get('image_url')
            duration = int_or_none(flashvars.get('video_duration'))
            media_definitions = flashvars.get('mediaDefinitions')
            if isinstance(media_definitions, list):
                for definition in media_definitions:
                    if not isinstance(definition, dict):
                        continue
                    video_url = definition.get('videoUrl')
                    if not video_url or not isinstance(video_url, compat_str):
                        continue
                    if video_url in video_urls_set:
                        continue
                    video_urls_set.add(video_url)
                    video_urls.append(
                        (video_url, int_or_none(definition.get('quality'))))
        else:
            thumbnail, duration = [None] * 2

        def extract_js_vars(webpage, pattern, default=NO_DEFAULT):
            assignments = self._search_regex(
                pattern, webpage, 'encoded url', default=default)
            if not assignments:
                return {}

            assignments = assignments.split(';')

            js_vars = {}

            def parse_js_value(inp):
                inp = re.sub(r'/\*(?:(?!\*/).)*?\*/', '', inp)
                if '+' in inp:
                    inps = inp.split('+')
                    return functools.reduce(
                        operator.concat, map(parse_js_value, inps))
                inp = inp.strip()
                if inp in js_vars:
                    return js_vars[inp]
                return remove_quotes(inp)

            for assn in assignments:
                assn = assn.strip()
                if not assn:
                    continue
                assn = re.sub(r'var\s+', '', assn)
                vname, value = assn.split('=', 1)
                js_vars[vname] = parse_js_value(value)
            return js_vars

        def add_video_url(video_url):
            v_url = url_or_none(video_url)
            if not v_url:
                return
            if v_url in video_urls_set:
                return
            video_urls.append((v_url, None))
            video_urls_set.add(v_url)

        if not video_urls:
            FORMAT_PREFIXES = ('media', 'quality')
            js_vars = extract_js_vars(
                webpage, r'(var\s+(?:%s)_.+)' % '|'.join(FORMAT_PREFIXES),
                default=None)
            if js_vars:
                for key, format_url in js_vars.items():
                    if any(key.startswith(p) for p in FORMAT_PREFIXES):
                        add_video_url(format_url)
            if not video_urls and re.search(
                    r'<[^>]+\bid=["\']lockedPlayer', webpage):
                raise ExtractorError(
                    'Video %s is locked' % video_id, expected=True)

        if not video_urls:
            js_vars = extract_js_vars(
                dl_webpage('tv'), r'(var.+?mediastring.+?)</script>')
            add_video_url(js_vars['mediastring'])

        for mobj in re.finditer(
                r'<a[^>]+\bclass=["\']downloadBtn\b[^>]+\bhref=(["\'])(?P<url>(?:(?!\1).)+)\1',
                webpage):
            video_url = mobj.group('url')
            if video_url not in video_urls_set:
                video_urls.append((video_url, None))
                video_urls_set.add(video_url)

        upload_date = None
        formats = []
        for video_url, height in video_urls:
            if not upload_date:
                upload_date = self._search_regex(
                    r'/(\d{6}/\d{2})/', video_url, 'upload date', default=None)
                if upload_date:
                    upload_date = upload_date.replace('/', '')
            ext = determine_ext(video_url)
            if ext == 'mpd':
                formats.extend(self._extract_mpd_formats(
                    video_url, video_id, mpd_id='dash', fatal=False))
                continue
            elif ext == 'm3u8':
                formats.extend(self._extract_m3u8_formats(
                    video_url, video_id, 'mp4', entry_protocol='m3u8_native',
                    m3u8_id='hls', fatal=False))
                continue
            tbr = None
            mobj = re.search(r'(?P<height>\d+)[pP]?_(?P<tbr>\d+)[kK]', video_url)
            if mobj:
                if not height:
                    height = int(mobj.group('height'))
                tbr = int(mobj.group('tbr'))
            formats.append({
                'url': video_url,
                'format_id': '%dp' % height if height else None,
                'height': height,
                'tbr': tbr,
            })
        self._sort_formats(formats)

        video_uploader = self._html_search_regex(
            r'(?s)From:&nbsp;.+?<(?:a\b[^>]+\bhref=["\']/(?:(?:user|channel)s|model|pornstar)/|span\b[^>]+\bclass=["\']username)[^>]+>(.+?)<',
            webpage, 'uploader', fatal=False)

        view_count = self._extract_count(
            r'<span class="count">([\d,\.]+)</span> views', webpage, 'view')
        like_count = self._extract_count(
            r'<span class="votesUp">([\d,\.]+)</span>', webpage, 'like')
        dislike_count = self._extract_count(
            r'<span class="votesDown">([\d,\.]+)</span>', webpage, 'dislike')
        comment_count = self._extract_count(
            r'All Comments\s*<span>\(([\d,.]+)\)', webpage, 'comment')

        def extract_list(meta_key):
            div = self._search_regex(
                r'(?s)<div[^>]+\bclass=["\'].*?\b%sWrapper[^>]*>(.+?)</div>'
                % meta_key, webpage, meta_key, default=None)
            if div:
                return re.findall(r'<a[^>]+\bhref=[^>]+>([^<]+)', div)

        return {
            'id': video_id,
            'uploader': video_uploader,
            'upload_date': upload_date,
            'title': title,
            'thumbnail': thumbnail,
            'duration': duration,
            'view_count': view_count,
            'like_count': like_count,
            'dislike_count': dislike_count,
            'comment_count': comment_count,
            'formats': formats,
            'age_limit': 18,
            'tags': extract_list('tags'),
            'categories': extract_list('categories'),
            'subtitles': subtitles,
        }


class PornHubPlaylistBaseIE(PornHubBaseIE):
    def _extract_entries(self, webpage, host):
        # Only process container div with main playlist content skipping
        # drop-down menu that uses similar pattern for videos (see
        # https://github.com/ytdl-org/youtube-dl/issues/11594).
        container = self._search_regex(
            r'(?s)(<div[^>]+class=["\']container.+)', webpage,
            'container', default=webpage)

        return [
            self.url_result(
                'http://www.%s/%s' % (host, video_url),
                PornHubIE.ie_key(), video_title=title)
            for video_url, title in orderedSet(re.findall(
                r'href="/?(view_video\.php\?.*\bviewkey=[\da-z]+[^"]*)"[^>]*\s+title="([^"]+)"',
                container))
        ]

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        host = mobj.group('host')
        playlist_id = mobj.group('id')

        webpage = self._download_webpage(url, playlist_id)

        entries = self._extract_entries(webpage, host)

        playlist = self._parse_json(
            self._search_regex(
                r'(?:playlistObject|PLAYLIST_VIEW)\s*=\s*({.+?});', webpage,
                'playlist', default='{}'),
            playlist_id, fatal=False)
        title = playlist.get('title') or self._search_regex(
            r'>Videos\s+in\s+(.+?)\s+[Pp]laylist<', webpage, 'title', fatal=False)

        return self.playlist_result(
            entries, playlist_id, title, playlist.get('description'))


class PornHubUserIE(PornHubPlaylistBaseIE):
    _VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?(?P<host>pornhub(?:premium)?\.(?:com|net))/(?:(?:user|channel)s|model|pornstar)/(?P<id>[^/?#&]+))(?:[?#&]|/(?!videos)|$)'
    _TESTS = [{
        'url': 'https://www.pornhub.com/model/zoe_ph',
        'playlist_mincount': 118,
    }, {
        'url': 'https://www.pornhub.com/pornstar/liz-vicious',
        'info_dict': {
            'id': 'liz-vicious',
        },
        'playlist_mincount': 118,
    }, {
        'url': 'https://www.pornhub.com/users/russianveet69',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/channels/povd',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/model/zoe_ph?abc=1',
        'only_matching': True,
    }]

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        user_id = mobj.group('id')
        return self.url_result(
            '%s/videos' % mobj.group('url'), ie=PornHubPagedVideoListIE.ie_key(),
            video_id=user_id)


class PornHubPagedPlaylistBaseIE(PornHubPlaylistBaseIE):
    @staticmethod
    def _has_more(webpage):
        return re.search(
            r'''(?x)
                <li[^>]+\bclass=["\']page_next|
                <link[^>]+\brel=["\']next|
                <button[^>]+\bid=["\']moreDataBtn
            ''', webpage) is not None

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        host = mobj.group('host')
        item_id = mobj.group('id')

        page = int_or_none(self._search_regex(
            r'\bpage=(\d+)', url, 'page', default=None))

        entries = []
        for page_num in (page, ) if page is not None else itertools.count(1):
            try:
                webpage = self._download_webpage(
                    url, item_id, 'Downloading page %d' % page_num,
                    query={'page': page_num})
            except ExtractorError as e:
                if isinstance(e.cause, compat_HTTPError) and e.cause.code == 404:
                    break
                raise
            page_entries = self._extract_entries(webpage, host)
            if not page_entries:
                break
            entries.extend(page_entries)
            if not self._has_more(webpage):
                break

        return self.playlist_result(orderedSet(entries), item_id)


class PornHubPagedVideoListIE(PornHubPagedPlaylistBaseIE):
    _VALID_URL = r'https?://(?:[^/]+\.)?(?P<host>pornhub(?:premium)?\.(?:com|net))/(?P<id>(?:[^/]+/)*[^/?#&]+)'
    _TESTS = [{
        'url': 'https://www.pornhub.com/model/zoe_ph/videos',
        'only_matching': True,
    }, {
        'url': 'http://www.pornhub.com/users/rushandlia/videos',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos',
        'info_dict': {
            'id': 'pornstar/jenny-blighe/videos',
        },
        'playlist_mincount': 149,
    }, {
        'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos?page=3',
        'info_dict': {
            'id': 'pornstar/jenny-blighe/videos',
        },
        'playlist_mincount': 40,
    }, {
        # default sorting as Top Rated Videos
        'url': 'https://www.pornhub.com/channels/povd/videos',
        'info_dict': {
            'id': 'channels/povd/videos',
        },
        'playlist_mincount': 293,
    }, {
        # Top Rated Videos
        'url': 'https://www.pornhub.com/channels/povd/videos?o=ra',
        'only_matching': True,
    }, {
        # Most Recent Videos
        'url': 'https://www.pornhub.com/channels/povd/videos?o=da',
        'only_matching': True,
    }, {
        # Most Viewed Videos
        'url': 'https://www.pornhub.com/channels/povd/videos?o=vi',
        'only_matching': True,
    }, {
        'url': 'http://www.pornhub.com/users/zoe_ph/videos/public',
        'only_matching': True,
    }, {
        # Most Viewed Videos
        'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=mv',
        'only_matching': True,
    }, {
        # Top Rated Videos
        'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=tr',
        'only_matching': True,
    }, {
        # Longest Videos
        'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=lg',
        'only_matching': True,
    }, {
        # Newest Videos
        'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos?o=cm',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos/paid',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/pornstar/liz-vicious/videos/fanonly',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/video',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/video?page=3',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/video/search?search=123',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/categories/teen',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/categories/teen?page=3',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/hd',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/hd?page=3',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/described-video',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/described-video?page=2',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/video/incategories/60fps-1/hd-porn',
        'only_matching': True,
    }, {
        'url': 'https://www.pornhub.com/playlist/44121572',
        'info_dict': {
            'id': 'playlist/44121572',
        },
        'playlist_mincount': 132,
    }, {
        'url': 'https://www.pornhub.com/playlist/4667351',
        'only_matching': True,
    }, {
        'url': 'https://de.pornhub.com/playlist/4667351',
        'only_matching': True,
    }]

    @classmethod
    def suitable(cls, url):
        return (False
                if PornHubIE.suitable(url) or PornHubUserIE.suitable(url) or PornHubUserVideosUploadIE.suitable(url)
                else super(PornHubPagedVideoListIE, cls).suitable(url))


class PornHubUserVideosUploadIE(PornHubPagedPlaylistBaseIE):
    _VALID_URL = r'(?P<url>https?://(?:[^/]+\.)?(?P<host>pornhub(?:premium)?\.(?:com|net))/(?:(?:user|channel)s|model|pornstar)/(?P<id>[^/]+)/videos/upload)'
    _TESTS = [{
        'url': 'https://www.pornhub.com/pornstar/jenny-blighe/videos/upload',
        'info_dict': {
            'id': 'jenny-blighe',
        },
        'playlist_mincount': 129,
    }, {
        'url': 'https://www.pornhub.com/model/zoe_ph/videos/upload',
        'only_matching': True,
    }]