>>1045204 (OP)
Reminded me of the hell that was downloading a channel with 500+ videos. But yeah, as >>1045220 said, you just need to pass the -i flag, which will ignore errors and continue downloading the other videos.
Using it from within Python gives you more control, but if you don't want to go that deep, you could use something like this:
CHANNEL='UCD6VugMZKRhSyzWEWA9W2fg'
youtube-dl \
--no-progress --no-warnings \
-i --ignore-config --prefer-ffmpeg \
-R 'infinite' --fragment-retries 'infinite' \
--abort-on-unavailable-fragment --geo-bypass --no-check-certificate \
--match-filter "channel_id = '${CHANNEL}'" --playlist-reverse \
-w -o '%(upload_date)s.%(id)s/%(id)s.%(ext)s' -f 'bestvideo+bestaudio/best' \
--download-archive 'ARCHIVE' --merge-output-format 'mkv' \
--no-continue --write-info-json --write-thumbnail --all-subs \
-- "https://www.youtube.com/channel/${CHANNEL}" "https://www.youtube.com/channel/${CHANNEL}/playlists" \
2>&1 | tee "youtube-dl.log"
This will first download all the videos on the channel and then go through its playlists and download every video that belongs to this channel (useful when the channel has unlisted videos in some playlist), ignoring all errors. Downloaded videos are recorded in the ARCHIVE file, so you can skip already downloaded videos and easily update the archive as the channel uploads new videos, or retry the download if some videos failed due to errors. All videos will be in the best possible quality, contained within Matroska (.mkv). Besides the videos themselves, this will also download the metadata in the form of a JSON file (.info.json), the thumbnail at maximum resolution, and all available closed captions or subtitles.
The result would look something like this:
.
├── 20190127.0FW23bamIZI
│   ├── 0FW23bamIZI.info.json
│   ├── 0FW23bamIZI.jpg
│   └── 0FW23bamIZI.mkv
├── 20190226.wXo24imR_54
│   ├── wXo24imR_54.info.json
│   ├── wXo24imR_54.jpg
│   └── wXo24imR_54.mkv
├── 20190317.URJ_qSXruW0
│   ├── URJ_qSXruW0.info.json
│   ├── URJ_qSXruW0.jpg
│   └── URJ_qSXruW0.mkv
└── ARCHIVE
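And for reference, ARCHIVE itself is just a plain-text list of extractor/ID pairs, one per line, so for the tree above it would contain:
youtube 0FW23bamIZI
youtube wXo24imR_54
youtube URJ_qSXruW0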
The only problem with this approach is that when scanning the playlists for videos that belong to this channel, youtube-dl first has to download each video's page and check its metadata, which is slow and a waste of traffic. On top of that, there's no internal mechanism to stop youtube-dl from re-analyzing already rejected videos. What you can do is collect all the rejected videos from youtube-dl.log and keep them in the ARCHIVE file, so they'll be ignored entirely when you update the archive.
They all have the same message:
[youtube] 8UD50tPFCYo: Downloading webpage
[youtube] 8UD50tPFCYo: Downloading video info webpage
[download] Mario Tennis does not pass filter channel_id = 'UCPcIwIn5WO6_o_vXF8SXx3w', skipping ..
You can also do the same thing with the videos that weren't downloaded due to copyright errors or some other shit. Just find them in the log and write them into ARCHIVE before reattempting to archive the channel.
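If you don't feel like picking the IDs out by hand, something like this should work. It's a rough sketch based on the log format shown above: it remembers the last video ID youtube-dl announced and emits an archive entry whenever a rejection follows it. The ERROR pattern for the copyright-blocked ones is an assumption on my part, so check what your log actually says:
awk '
  /^\[youtube\] /        { id = $2; sub(/:$/, "", id) }  # remember the last announced video ID
  /does not pass filter/ { print "youtube", id }         # filter-rejected video
  /^ERROR:/              { print "youtube", id }         # copyright/availability errors; assumes they follow the ID line
' youtube-dl.log | sort -u >> ARCHIVE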
Personally, I just kept an EXCLUDE file with all the videos I don't need, and each time I needed to update a channel, before actually starting youtube-dl, I would switch to the channel's directory and rebuild ARCHIVE from what's actually on disk plus the exclude list:
jq -r '"youtube \(.id)"' */*.info.json | cat - EXCLUDE > ARCHIVE
This is how I used to do it before switching to Python.