yt-dlp/youtube_dl/extractor/svt.py
Aakash Gajjar b827ee921f
pull changes from remote master (#190)
* [scrippsnetworks] Add new extractor(closes #19857)(closes #22981)

* [teachable] Improve locked lessons detection (#23528)

* [teachable] Fail with error message if no video URL found

* [extractors] add missing import for ScrippsNetworksIE

* [brightcove] cache brightcove player policy keys

* [prosiebensat1] improve geo restriction handling(closes #23571)

* [soundcloud] automatically update client id on failing requests

* [spankbang] Fix extraction (closes #23307, closes #23423, closes #23444)

* [spankbang] Improve removed video detection (#23423)

* [brightcove] update policy key on failing requests

* [pornhub] Fix extraction and add support for m3u8 formats (closes #22749, closes #23082)

* [pornhub] Improve locked videos detection (closes #22449, closes #22780)

* [brightcove] invalidate policy key cache on failing requests

* [soundcloud] fix client id extraction for non-fatal requests

* [ChangeLog] Actualize
[ci skip]

* [devscripts/create-github-release] Switch to using PAT for authentication

Basic authentication will be deprecated soon

* release 2020.01.01

* [redtube] Detect private videos (#23518)

* [vice] improve extraction(closes #23631)

* [devscripts/create-github-release] Remove unused import

* [wistia] improve format extraction and extract subtitles(closes #22590)

* [nrktv:seriebase] Fix extraction (closes #23625) (#23537)

* [discovery] fix anonymous token extraction(closes #23650)

* [scrippsnetworks] add support for www.discovery.com videos

* [scrippsnetworks] correct test case URL

* [dctp] fix format extraction(closes #23656)

* [pandatv] Remove extractor (#23630)

* [naver] improve extraction

- improve geo-restriction handling
- extract automatic captions
- extract uploader metadata
- extract VLive HLS formats

* [naver] improve metadata extraction

* [cloudflarestream] improve extraction

- add support for bytehighway.net domain
- add support for signed URLs
- extract thumbnail

* [cloudflarestream] improve embed URL extraction

* [lego] fix extraction and extract subtitle(closes #23687)

* [safari] Fix kaltura session extraction (closes #23679) (#23670)

* [orf:fm4] Fix extraction (#23599)

* [orf:radio] Clean description and improve extraction

* [twitter] add support for promo_video_website cards(closes #23711)

* [vodplatform] add support for embed.kwikmotion.com domain

* [ndr:base:embed] Improve thumbnails extraction (closes #23731)

* [canvas] Add support for new API endpoint and update tests (closes #17680, closes #18629)

* [travis] Add flake8 job (#23720)

* [yourporn] Fix extraction (closes #21645, closes #22255, closes #23459)

* [ChangeLog] Actualize
[ci skip]

* release 2020.01.15

* [soundcloud] Restore previews extraction (closes #23739)

* [orf:tvthek] Improve geo restricted videos detection (closes #23741)

* [zype] improve extraction

- extract subtitles(closes #21258)
- support URLs with alternative keys/tokens(#21258)
- extract more metadata

* [americastestkitchen] fix extraction

* [nbc] add support for nbc multi network URLs(closes #23049)

* [ard] improve extraction(closes #23761)

- simplify extraction
- extract age limit and series
- bypass geo-restriction

* [ivi:compilation] Fix entries extraction (closes #23770)

* [24video] Add support for 24video.vip (closes #23753)

* [businessinsider] Fix jwplatform id extraction (closes #22929) (#22954)

* [ard] add a missing condition

* [azmedien] fix extraction(closes #23783)

* [voicerepublic] fix extraction

* [stretchinternet] fix extraction(closes #4319)

* [youtube] Fix sigfunc name extraction (closes #23819)

* [ChangeLog] Actualize
[ci skip]

* release 2020.01.24

* [soundcloud] improve private playlist/set tracks extraction

https://github.com/ytdl-org/youtube-dl/issues/3707#issuecomment-577873539

* [svt] fix article extraction(closes #22897)(closes #22919)

* [svt] fix series extraction(closes #22297)

* [viewlift] improve extraction

- fix extraction(closes #23851)
- add support for authentication
- add support for more domains

* [vimeo] fix album extraction(closes #23864)

* [tva] Relax _VALID_URL (closes #23903)

* [tv5mondeplus] Fix extraction (closes #23907, closes #23911)

* [twitch:stream] Lowercase channel id for stream request (closes #23917)

* [sportdeutschland] Update to new sportdeutschland API

They switched to SSL, but under a different host AND path...
Remove the old test cases because these videos have become unavailable.

* [popcorntimes] Add extractor (closes #23949)

* [thisoldhouse] fix extraction(closes #23951)

* [toggle] Add support for mewatch.sg (closes #23895) (#23930)

* [compat] Introduce compat_realpath (refs #23991)

* [update] Fix updating via symlinks (closes #23991)

* [nytimes] improve format sorting(closes #24010)

* [abc:iview] Support 720p (#22907) (#22921)

* [nova:embed] Fix extraction (closes #23672)

* [nova:embed] Improve (closes #23690)

* [nova] Improve extraction (refs #23690)

* [jpopsuki] Remove extractor (closes #23858)

* [YoutubeDL] Fix playlist entry indexing with --playlist-items (closes #10591, closes #10622)

* [test_YoutubeDL] Fix get_ids

* [test_YoutubeDL] Add tests for #10591 (closes #23873)

* [24video] Add support for porn.24video.net (closes #23779, closes #23784)

* [npr] Add support for streams (closes #24042)

* [ChangeLog] Actualize
[ci skip]

* release 2020.02.16

* [tv2dk:bornholm:play] Fix extraction (#24076)

* [imdb] Fix extraction (closes #23443)

* [wistia] Add support for multiple generic embeds (closes #8347, closes #11385)

* [teachable] Add support for multiple videos per lecture (closes #24101)

* [pornhd] Fix extraction (closes #24128)

* [options] Remove duplicate short option -v for --version (#24162)

* [extractor/common] Convert ISM manifest to unicode before processing on python 2 (#24152)

* [YoutubeDL] Force redirect URL to unicode on python 2

* Remove no longer needed compat_str around geturl

* [youjizz] Fix extraction (closes #24181)

* [test_subtitles] Remove obsolete test

* [zdf:channel] Fix tests

* [zapiks] Fix test

* [xtube] Fix metadata extraction (closes #21073, closes #22455)

* [xtube:user] Fix test

* [telecinco] Fix extraction (refs #24195)

* [telecinco] Add support for article opening videos

* [franceculture] Fix extraction (closes #24204)

* [xhamster] Fix extraction (closes #24205)

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.01

* [vimeo] Fix subtitles URLs (#24209)

* [servus] Add support for new URL schema (closes #23475, closes #23583, closes #24142)

* [youtube:playlist] Fix tests (closes #23872) (#23885)

* [peertube] Improve extraction

* [peertube] Fix issues and improve extraction (closes #23657)

* [pornhub] Improve title extraction (closes #24184)

* [vimeo] fix showcase password protected video extraction(closes #24224)

* [youtube] Fix age-gated videos support without login (closes #24248)

* [youtube] Fix tests

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.06

* [nhk] update API version(closes #24270)

* [youtube] Improve extraction in 429 error conditions (closes #24283)

* [youtube] Improve age-gated videos extraction in 429 error conditions (refs #24283)

* [youtube] Remove outdated code

Additional get_video_info requests don't seem to provide any extra itags any longer

* [README.md] Clarify 429 error

* [pornhub] Add support for pornhubpremium.com (#24288)

* [utils] Add support for cookies with spaces used instead of tabs

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.08

* Revert "[utils] Add support for cookies with spaces used instead of tabs"

According to [1] TABs must be used as separators between fields.
Files produced by some tools with spaces as separators are considered
malformed.

1. https://curl.haxx.se/docs/http-cookies.html

This reverts commit cff99c91d1.
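
For reference, each entry in that cookie file format is a single line of seven TAB-separated fields. A minimal illustration in Python (the domain, name and value below are made-up examples, not anything taken from youtube-dl's code):

```
# Illustrative only: one entry in the Netscape cookie file format,
# which separates its seven fields with TABs, not spaces.
fields = [
    '.example.com',  # domain
    'TRUE',          # include subdomains
    '/',             # path
    'FALSE',         # secure
    '2147483647',    # expires (Unix timestamp)
    'session_id',    # cookie name
    'abc123',        # cookie value
]
print('\t'.join(fields))
```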

* [utils] Add reference to cookie file format

* Revert "[vimeo] fix showcase password protected video extraction(closes #24224)"

This reverts commit 12ee431676.

* [nhk] Relax _VALID_URL (#24329)

* [nhk] Remove obsolete rtmp formats (closes #24329)

* [nhk] Update m3u8 URL and use native hls (#24329)

* [ndr] Fix extraction (closes #24326)

* [xtube] Fix formats extraction (closes #24348)

* [xtube] Fix typo

* [hellporno] Fix extraction (closes #24399)

* [cbc:watch] Add support for authentication

* [cbc:watch] Fix authenticated device token caching (closes #19160)

* [soundcloud] fix download url extraction(closes #24394)

* [limelight] remove disabled API requests(closes #24255)

* [bilibili] Add support for new URL schema with BV ids (closes #24439, closes #24442)

* [bilibili] Add support for player.bilibili.com (closes #24402)

* [teachable] Extract chapter metadata (closes #24421)

* [generic] Look for teachable embeds before wistia

* [teachable] Update upskillcourses domain

New version does not use teachable platform any longer

* [teachable] Update gns3 domain

* [teachable] Update test

* [ChangeLog] Actualize
[ci skip]

* [ChangeLog] Actualize
[ci skip]

* release 2020.03.24

* [spankwire] Fix extraction (closes #18924, closes #20648)

* [spankwire] Add support for generic embeds (refs #24633)

* [youporn] Add support for generic embeds

* [mofosex] Add support for generic embeds (closes #24633)

* [tele5] Fix extraction (closes #24553)

* [extractor/common] Skip malformed ISM manifest XMLs while extracting ISM formats (#24667)

* [tv4] Fix ISM formats extraction (closes #24667)

* [twitch:clips] Extend _VALID_URL (closes #24290) (#24642)

* [motherless] Fix extraction (closes #24699)

* [nova:embed] Fix extraction (closes #24700)

* [youtube] Skip broken multifeed videos (closes #24711)

* [soundcloud] Extract AAC format

* [soundcloud] Improve AAC format extraction (closes #19173, closes #24708)

* [thisoldhouse] Fix video id extraction (closes #24548)

Added support for URLs with or without "www." and with either the
".chorus.build" or ".com" domain (a hedged regex sketch follows the
examples below).

It now validates correctly on older URLs
```
<iframe src="https://thisoldhouse.chorus.build/videos/zype/5e33baec27d2e50001d5f52f
```
and newer ones
```
<iframe src="https://www.thisoldhouse.com/videos/zype/5e2b70e95216cc0001615120
```
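
A regex along the following lines would accept both forms. This is only a sketch: the name and exact pattern are illustrative, not the actual code in the extractor.

```
import re

# Hypothetical pattern: optional "www." and either the ".chorus.build"
# or ".com" host, capturing the Zype video id.
EMBED_URL_RE = r'https?://(?:www\.)?thisoldhouse\.(?:chorus\.build|com)/videos/zype/(?P<id>[0-9a-f]{24})'

for src in (
    'https://thisoldhouse.chorus.build/videos/zype/5e33baec27d2e50001d5f52f',
    'https://www.thisoldhouse.com/videos/zype/5e2b70e95216cc0001615120',
):
    mobj = re.match(EMBED_URL_RE, src)
    print(mobj.group('id') if mobj else 'no match')
```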

* [thisoldhouse] Improve video id extraction (closes #24549)

* [youtube] Fix DRM videos detection (refs #24736)

* [options] Clarify doc on --exec command (closes #19087) (#24883)

* [prosiebensat1] Improve extraction and remove 7tv.de support (#24948)

* [prosiebensat1] Extract series metadata

* [tenplay] Relax _VALID_URL (closes #25001)

* [tvplay] fix Viafree extraction(closes #15189)(closes #24473)(closes #24789)

* [yahoo] fix GYAO Player extraction and relax title URL regex(closes #24178)(closes #24778)

* [youtube] Use redirected video id if any (closes #25063)

* [youtube] Improve player id extraction and add tests

* [extractor/common] Extract multiple JSON-LD entries

* [crunchyroll] Fix and improve extraction (closes #25096, closes #25060)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.03

* [puhutv] Remove no longer available HTTP formats (closes #25124)

* [utils] Improve cookie files support

+ Add support for UTF-8 in cookie files
* Skip malformed cookie file entries instead of crashing (invalid entry length, invalid "expires at" value)
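
A minimal sketch of that tolerant loading behaviour (the helper name and dict layout are illustrative; the real change lives in youtube-dl's cookiejar handling in utils.py):

```
import io

def load_cookie_entries(path):
    # Read the file as UTF-8 so non-ASCII cookie values do not break loading.
    entries = []
    with io.open(path, encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            fields = line.split('\t')
            # A well-formed Netscape entry has exactly 7 fields; skip
            # malformed lines (wrong length, bad expiry) instead of crashing.
            if len(fields) != 7 or not fields[4].isdigit():
                continue
            domain, _, cookie_path, _, expires, name, value = fields
            entries.append({
                'domain': domain,
                'path': cookie_path,
                'expires': int(expires),
                'name': name,
                'value': value,
            })
    return entries
```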

* [dailymotion] Fix typo

* [compat] Introduce compat_cookiejar_Cookie

* [extractor/common] Use compat_cookiejar_Cookie for _set_cookie (closes #23256, closes #24776)

This ensures cookie name and value are always bytestrings on Python 2.

* [orf] Add support for more radio stations (closes #24938) (#24968)

* [uol] fix extraction(closes #22007)

* [downloader/http] Finish downloading once received data length matches expected

Always do this when possible, i.e. whenever Content-Length or the expected length
is known, not only in tests. This saves an unnecessary final loop that would try
to read 0 bytes.

* [downloader/http] Request last data block of exact remaining size

When possible, request the last data block at exactly the remaining size rather than the current block size (see the sketch below covering both downloader changes).
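
Taken together, the two downloader changes amount to logic like this sketch (function names are illustrative, not the actual downloader/http.py code):

```
def next_block_size(block_size, bytes_received, expected_length):
    # Request at most what is actually left, so the final read is the
    # exact remaining size rather than a full block.
    if expected_length is None:
        return block_size
    return min(block_size, expected_length - bytes_received)

def download_finished(bytes_received, expected_length):
    # Stop as soon as the received length matches the expected length,
    # instead of looping once more to read 0 bytes.
    return expected_length is not None and bytes_received >= expected_length
```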

* [iprima] Improve extraction (closes #25138)

* [youtube] Improve signature cipher extraction (closes #25188)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.08

* [spike] fix Bellator mgid extraction(closes #25195)

* [bbccouk] PEP8

* [mailru] Fix extraction (closes #24530) (#25239)

* [README.md] flake8 HTTPS URL (#25230)

* [youtube] Add support for yewtu.be (#25226)

* [soundcloud] reduce API playlist page limit(closes #25274)

* [vimeo] improve format extraction and sorting(closes #25285)

* [redtube] Improve title extraction (#25208)

* [indavideo] Switch to HTTPS for API request (#25191)

* [utils] Fix file permissions in write_json_file (closes #12471) (#25122)

* [redtube] Improve formats extraction and extract m3u8 formats (closes #25311, closes #25321)

* [ard] Improve _VALID_URL (closes #25134) (#25198)

* [giantbomb] Extend _VALID_URL (#25222)

* [postprocessor/ffmpeg] Embed series metadata with --add-metadata

* [youtube] Add support for more invidious instances (#25417)

* [ard:beta] Extend _VALID_URL (closes #25405)

* [ChangeLog] Actualize
[ci skip]

* release 2020.05.29

* [jwplatform] Improve embeds extraction (closes #25467)

* [periscope] Fix untitled broadcasts (#25482)

* [twitter:broadcast] Add untitled periscope broadcast test

* [malltv] Add support for sk.mall.tv (#25445)

* [brightcove] Fix subtitles extraction (closes #25540)

* [brightcove] Sort imports

* [twitch] Pass v5 accept header and fix thumbnails extraction (closes #25531)

* [twitch:stream] Fix extraction (closes #25528)

* [twitch:stream] Expect 400 and 410 HTTP errors from API

* [tele5] Prefer jwplatform over nexx (closes #25533)

* [jwplatform] Add support for bypassing geo restriction

* [tele5] Bypass geo restriction

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.06

* [kaltura] Add support for multiple embeds on a webpage (closes #25523)

* [youtube] Extract chapters from JSON (closes #24819)

* [facebook] Support single-video ID links

I stumbled upon this at https://www.facebook.com/bwfbadminton/posts/10157127020046316 . No idea how prevalent it is yet.

* [youtube] Fix playlist and feed extraction (closes #25675)

* [youtube] Fix thumbnails extraction and remove uploader id extraction warning (closes #25676)

* [youtube] Fix upload date extraction

* [youtube] Improve view count extraction

* [youtube] Fix uploader id and uploader URL extraction

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.16

* [youtube] Fix categories and improve tags extraction

* [youtube] Force old layout (closes #25682, closes #25683, closes #25680, closes #25686)

* [ChangeLog] Actualize
[ci skip]

* release 2020.06.16.1

* [brightcove] Improve embed detection (closes #25674)

* [bellmedia] add support for cp24.com clip URLs(closes #25764)

* [youtube:playlists] Extend _VALID_URL (closes #25810)

* [youtube] Prevent excess HTTP 301 (#25786)

* [wistia] Restrict embed regex (closes #25969)

* [youtube] Improve description extraction (closes #25937) (#25980)

* [youtube] Fix sigfunc name extraction (closes #26134, closes #26135, closes #26136, closes #26137)

* [ChangeLog] Actualize
[ci skip]

* release 2020.07.28

* [xhamster] Extend _VALID_URL (closes #25789) (#25804)

* [xhamster] Fix extraction (closes #26157) (#26254)

* [xhamster] Extend _VALID_URL (closes #25927)

Co-authored-by: Remita Amine <remitamine@gmail.com>
Co-authored-by: Sergey M․ <dstftw@gmail.com>
Co-authored-by: nmeum <soeren+github@soeren-tempel.net>
Co-authored-by: Roxedus <me@roxedus.dev>
Co-authored-by: Singwai Chan <c.singwai@gmail.com>
Co-authored-by: cdarlint <cdarlint@users.noreply.github.com>
Co-authored-by: Johannes N <31795504+jonolt@users.noreply.github.com>
Co-authored-by: jnozsc <jnozsc@gmail.com>
Co-authored-by: Moritz Patelscheck <moritz.patelscheck@campus.tu-berlin.de>
Co-authored-by: PB <3854688+uno20001@users.noreply.github.com>
Co-authored-by: Philipp Hagemeister <phihag@phihag.de>
Co-authored-by: Xaver Hellauer <software@hellauer.bayern>
Co-authored-by: d2au <d2au.dev@gmail.com>
Co-authored-by: Jan 'Yenda' Trmal <jtrmal@gmail.com>
Co-authored-by: jxu <7989982+jxu@users.noreply.github.com>
Co-authored-by: Martin Ström <name@my-domain.se>
Co-authored-by: The Hatsune Daishi <nao20010128@gmail.com>
Co-authored-by: tsia <github@tsia.de>
Co-authored-by: 3risian <59593325+3risian@users.noreply.github.com>
Co-authored-by: Tristan Waddington <tristan.waddington@gmail.com>
Co-authored-by: Devon Meunier <devon.meunier@gmail.com>
Co-authored-by: Felix Stupp <felix.stupp@outlook.com>
Co-authored-by: tom <tomster954@gmail.com>
Co-authored-by: AndrewMBL <62922222+AndrewMBL@users.noreply.github.com>
Co-authored-by: willbeaufoy <will@willbeaufoy.net>
Co-authored-by: Philipp Stehle <anderschwiedu@googlemail.com>
Co-authored-by: hh0rva1h <61889859+hh0rva1h@users.noreply.github.com>
Co-authored-by: comsomisha <shmelev1996@mail.ru>
Co-authored-by: TotalCaesar659 <14265316+TotalCaesar659@users.noreply.github.com>
Co-authored-by: Juan Francisco Cantero Hurtado <iam@juanfra.info>
Co-authored-by: Dave Loyall <dave@the-good-guys.net>
Co-authored-by: tlsssl <63866177+tlsssl@users.noreply.github.com>
Co-authored-by: Rob <ankenyr@gmail.com>
Co-authored-by: Michael Klein <github@a98shuttle.de>
Co-authored-by: JordanWeatherby <47519158+JordanWeatherby@users.noreply.github.com>
Co-authored-by: striker.sh <19488257+strikersh@users.noreply.github.com>
Co-authored-by: Matej Dujava <mdujava@gmail.com>
Co-authored-by: Glenn Slayden <5589855+glenn-slayden@users.noreply.github.com>
Co-authored-by: MRWITEK <mrvvitek@gmail.com>
Co-authored-by: JChris246 <43832407+JChris246@users.noreply.github.com>
Co-authored-by: TheRealDude2 <the.real.dude@gmx.de>
2020-08-25 20:23:34 +05:30


# coding: utf-8
from __future__ import unicode_literals

import re

from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
    determine_ext,
    dict_get,
    int_or_none,
    str_or_none,
    strip_or_none,
    try_get,
)


class SVTBaseIE(InfoExtractor):
    _GEO_COUNTRIES = ['SE']

    def _extract_video(self, video_info, video_id):
        is_live = dict_get(video_info, ('live', 'simulcast'), default=False)
        m3u8_protocol = 'm3u8' if is_live else 'm3u8_native'
        formats = []
        for vr in video_info['videoReferences']:
            player_type = vr.get('playerType') or vr.get('format')
            vurl = vr['url']
            ext = determine_ext(vurl)
            if ext == 'm3u8':
                formats.extend(self._extract_m3u8_formats(
                    vurl, video_id,
                    ext='mp4', entry_protocol=m3u8_protocol,
                    m3u8_id=player_type, fatal=False))
            elif ext == 'f4m':
                formats.extend(self._extract_f4m_formats(
                    vurl + '?hdcore=3.3.0', video_id,
                    f4m_id=player_type, fatal=False))
            elif ext == 'mpd':
                if player_type == 'dashhbbtv':
                    formats.extend(self._extract_mpd_formats(
                        vurl, video_id, mpd_id=player_type, fatal=False))
            else:
                formats.append({
                    'format_id': player_type,
                    'url': vurl,
                })
        if not formats and video_info.get('rights', {}).get('geoBlockedSweden'):
            self.raise_geo_restricted(
                'This video is only available in Sweden',
                countries=self._GEO_COUNTRIES)
        self._sort_formats(formats)

        subtitles = {}
        subtitle_references = dict_get(video_info, ('subtitles', 'subtitleReferences'))
        if isinstance(subtitle_references, list):
            for sr in subtitle_references:
                subtitle_url = sr.get('url')
                subtitle_lang = sr.get('language', 'sv')
                if subtitle_url:
                    if determine_ext(subtitle_url) == 'm3u8':
                        # TODO(yan12125): handle WebVTT in m3u8 manifests
                        continue
                    subtitles.setdefault(subtitle_lang, []).append({'url': subtitle_url})

        title = video_info.get('title')

        series = video_info.get('programTitle')
        season_number = int_or_none(video_info.get('season'))
        episode = video_info.get('episodeTitle')
        episode_number = int_or_none(video_info.get('episodeNumber'))

        duration = int_or_none(dict_get(video_info, ('materialLength', 'contentDuration')))
        age_limit = None
        adult = dict_get(
            video_info, ('inappropriateForChildren', 'blockedForChildren'),
            skip_false_values=False)
        if adult is not None:
            age_limit = 18 if adult else 0

        return {
            'id': video_id,
            'title': title,
            'formats': formats,
            'subtitles': subtitles,
            'duration': duration,
            'age_limit': age_limit,
            'series': series,
            'season_number': season_number,
            'episode': episode,
            'episode_number': episode_number,
            'is_live': is_live,
        }


class SVTIE(SVTBaseIE):
    _VALID_URL = r'https?://(?:www\.)?svt\.se/wd\?(?:.*?&)?widgetId=(?P<widget_id>\d+)&.*?\barticleId=(?P<id>\d+)'
    _TEST = {
        'url': 'http://www.svt.se/wd?widgetId=23991&sectionId=541&articleId=2900353&type=embed&contextSectionId=123&autostart=false',
        'md5': '33e9a5d8f646523ce0868ecfb0eed77d',
        'info_dict': {
            'id': '2900353',
            'ext': 'mp4',
            'title': 'Stjärnorna skojar till det - under SVT-intervjun',
            'duration': 27,
            'age_limit': 0,
        },
    }

    @staticmethod
    def _extract_url(webpage):
        mobj = re.search(
            r'(?:<iframe src|href)="(?P<url>%s[^"]*)"' % SVTIE._VALID_URL, webpage)
        if mobj:
            return mobj.group('url')

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        widget_id = mobj.group('widget_id')
        article_id = mobj.group('id')

        info = self._download_json(
            'http://www.svt.se/wd?widgetId=%s&articleId=%s&format=json&type=embed&output=json' % (widget_id, article_id),
            article_id)

        info_dict = self._extract_video(info['video'], article_id)
        info_dict['title'] = info['context']['title']
        return info_dict


class SVTPlayBaseIE(SVTBaseIE):
    _SVTPLAY_RE = r'root\s*\[\s*(["\'])_*svtplay\1\s*\]\s*=\s*(?P<json>{.+?})\s*;\s*\n'


class SVTPlayIE(SVTPlayBaseIE):
    IE_DESC = 'SVT Play and Öppet arkiv'
    _VALID_URL = r'''(?x)
                    (?:
                        svt:(?P<svt_id>[^/?#&]+)|
                        https?://(?:www\.)?(?:svtplay|oppetarkiv)\.se/(?:video|klipp|kanaler)/(?P<id>[^/?#&]+)
                    )
                    '''
    _TESTS = [{
        'url': 'http://www.svtplay.se/video/5996901/flygplan-till-haile-selassie/flygplan-till-haile-selassie-2',
        'md5': '2b6704fe4a28801e1a098bbf3c5ac611',
        'info_dict': {
            'id': '5996901',
            'ext': 'mp4',
            'title': 'Flygplan till Haile Selassie',
            'duration': 3527,
            'thumbnail': r're:^https?://.*[\.-]jpg$',
            'age_limit': 0,
            'subtitles': {
                'sv': [{
                    'ext': 'wsrt',
                }]
            },
        },
    }, {
        # geo restricted to Sweden
        'url': 'http://www.oppetarkiv.se/video/5219710/trollflojten',
        'only_matching': True,
    }, {
        'url': 'http://www.svtplay.se/klipp/9023742/stopptid-om-bjorn-borg',
        'only_matching': True,
    }, {
        'url': 'https://www.svtplay.se/kanaler/svt1',
        'only_matching': True,
    }, {
        'url': 'svt:1376446-003A',
        'only_matching': True,
    }, {
        'url': 'svt:14278044',
        'only_matching': True,
    }]

    def _adjust_title(self, info):
        if info['is_live']:
            info['title'] = self._live_title(info['title'])

    def _extract_by_video_id(self, video_id, webpage=None):
        data = self._download_json(
            'https://api.svt.se/videoplayer-api/video/%s' % video_id,
            video_id, headers=self.geo_verification_headers())
        info_dict = self._extract_video(data, video_id)
        if not info_dict.get('title'):
            title = dict_get(info_dict, ('episode', 'series'))
            if not title and webpage:
                title = re.sub(
                    r'\s*\|\s*.+?$', '', self._og_search_title(webpage))
            if not title:
                title = video_id
            info_dict['title'] = title
        self._adjust_title(info_dict)
        return info_dict

    def _real_extract(self, url):
        mobj = re.match(self._VALID_URL, url)
        video_id, svt_id = mobj.group('id', 'svt_id')

        if svt_id:
            return self._extract_by_video_id(svt_id)

        webpage = self._download_webpage(url, video_id)

        data = self._parse_json(
            self._search_regex(
                self._SVTPLAY_RE, webpage, 'embedded data', default='{}',
                group='json'),
            video_id, fatal=False)

        thumbnail = self._og_search_thumbnail(webpage)

        if data:
            video_info = try_get(
                data, lambda x: x['context']['dispatcher']['stores']['VideoTitlePageStore']['data']['video'],
                dict)
            if video_info:
                info_dict = self._extract_video(video_info, video_id)
                info_dict.update({
                    'title': data['context']['dispatcher']['stores']['MetaStore']['title'],
                    'thumbnail': thumbnail,
                })
                self._adjust_title(info_dict)
                return info_dict

        svt_id = self._search_regex(
            r'<video[^>]+data-video-id=["\']([\da-zA-Z-]+)',
            webpage, 'video id')

        return self._extract_by_video_id(svt_id, webpage)


class SVTSeriesIE(SVTPlayBaseIE):
    _VALID_URL = r'https?://(?:www\.)?svtplay\.se/(?P<id>[^/?&#]+)(?:.+?\btab=(?P<season_slug>[^&#]+))?'
    _TESTS = [{
        'url': 'https://www.svtplay.se/rederiet',
        'info_dict': {
            'id': '14445680',
            'title': 'Rederiet',
            'description': 'md5:d9fdfff17f5d8f73468176ecd2836039',
        },
        'playlist_mincount': 318,
    }, {
        'url': 'https://www.svtplay.se/rederiet?tab=season-2-14445680',
        'info_dict': {
            'id': 'season-2-14445680',
            'title': 'Rederiet - Säsong 2',
            'description': 'md5:d9fdfff17f5d8f73468176ecd2836039',
        },
        'playlist_mincount': 12,
    }]

    @classmethod
    def suitable(cls, url):
        return False if SVTIE.suitable(url) or SVTPlayIE.suitable(url) else super(SVTSeriesIE, cls).suitable(url)

    def _real_extract(self, url):
        series_slug, season_id = re.match(self._VALID_URL, url).groups()

        series = self._download_json(
            'https://api.svt.se/contento/graphql', series_slug,
            'Downloading series page', query={
                'query': '''{
  listablesBySlug(slugs: ["%s"]) {
    associatedContent(include: [productionPeriod, season]) {
      items {
        item {
          ... on Episode {
            videoSvtId
          }
        }
      }
      id
      name
    }
    id
    longDescription
    name
    shortDescription
  }
}''' % series_slug,
            })['data']['listablesBySlug'][0]

        season_name = None

        entries = []
        for season in series['associatedContent']:
            if not isinstance(season, dict):
                continue
            if season_id:
                if season.get('id') != season_id:
                    continue
                season_name = season.get('name')
            items = season.get('items')
            if not isinstance(items, list):
                continue
            for item in items:
                video = item.get('item') or {}
                content_id = video.get('videoSvtId')
                if not content_id or not isinstance(content_id, compat_str):
                    continue
                entries.append(self.url_result(
                    'svt:' + content_id, SVTPlayIE.ie_key(), content_id))

        title = series.get('name')
        season_name = season_name or season_id
        if title and season_name:
            title = '%s - %s' % (title, season_name)
        elif season_id:
            title = season_id

        return self.playlist_result(
            entries, season_id or series.get('id'), title,
            dict_get(series, ('longDescription', 'shortDescription')))


class SVTPageIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?svt\.se/(?P<path>(?:[^/]+/)*(?P<id>[^/?&#]+))'
    _TESTS = [{
        'url': 'https://www.svt.se/sport/ishockey/bakom-masken-lehners-kamp-mot-mental-ohalsa',
        'info_dict': {
            'id': '25298267',
            'title': 'Bakom masken Lehners kamp mot mental ohälsa',
        },
        'playlist_count': 4,
    }, {
        'url': 'https://www.svt.se/nyheter/utrikes/svenska-andrea-ar-en-mil-fran-branderna-i-kalifornien',
        'info_dict': {
            'id': '24243746',
            'title': 'Svenska Andrea redo att fly sitt hem i Kalifornien',
        },
        'playlist_count': 2,
    }, {
        # only programTitle
        'url': 'http://www.svt.se/sport/ishockey/jagr-tacklar-giroux-under-intervjun',
        'info_dict': {
            'id': '8439V2K',
            'ext': 'mp4',
            'title': 'Stjärnorna skojar till det - under SVT-intervjun',
            'duration': 27,
            'age_limit': 0,
        },
    }, {
        'url': 'https://www.svt.se/nyheter/lokalt/vast/svt-testar-tar-nagon-upp-skrapet-1',
        'only_matching': True,
    }, {
        'url': 'https://www.svt.se/vader/manadskronikor/maj2018',
        'only_matching': True,
    }]

    @classmethod
    def suitable(cls, url):
        return False if SVTIE.suitable(url) else super(SVTPageIE, cls).suitable(url)

    def _real_extract(self, url):
        path, display_id = re.match(self._VALID_URL, url).groups()

        article = self._download_json(
            'https://api.svt.se/nss-api/page/' + path, display_id,
            query={'q': 'articles'})['articles']['content'][0]

        entries = []

        def _process_content(content):
            if content.get('_type') in ('VIDEOCLIP', 'VIDEOEPISODE'):
                video_id = compat_str(content['image']['svtId'])
                entries.append(self.url_result(
                    'svt:' + video_id, SVTPlayIE.ie_key(), video_id))

        for media in article.get('media', []):
            _process_content(media)

        for obj in article.get('structuredBody', []):
            _process_content(obj.get('content') or {})

        return self.playlist_result(
            entries, str_or_none(article.get('id')),
            strip_or_none(article.get('title')))