This repository has been archived by the owner on Apr 25, 2022. It is now read-only.
Hey, I tried to use your script, but whenever I try to download a playlist I get an error for every track.
That's the output:
```
Getting... + url
Some error happened somewhere???
list index out of range
Ack
```
I tried different playlists, but it's the same every time.
I guess there's maybe a problem at line 138, `search_page = soup.find_all('ol', 'item-section')[0]`, because after this the script jumps directly to the last exception, which prints the output above.
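For reference, here's a self-contained sketch of how an empty `find_all` result triggers exactly that error (the HTML below is made up to stand in for what YouTube now returns):

```python
from bs4 import BeautifulSoup

# Made-up HTML standing in for what YouTube now sends: the old
# 'ol.item-section' markup is simply absent from the initial page.
html = "<html><body><div id='skeleton'></div></body></html>"
soup = BeautifulSoup(html, "html.parser")

# find_all returns an empty list when nothing matches, so indexing [0]
# raises IndexError("list index out of range") -- the error in the output above.
matches = soup.find_all('ol', 'item-section')
if not matches:
    raise RuntimeError("no <ol class='item-section'> found; YouTube's markup may have changed")
search_page = matches[0]
```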
Greetings
I'm trying to get past this as well. It doesn't seem like `search_page` gets used at all, so I just removed it. It's still failing at the next line, though; I'm assuming YT changed their HTML structure and the `find_all` selections need to be updated.
Looks like YouTube has changed the way it renders its pages: skeletons are now used as placeholders while the content loads, which means the videos are no longer in the initial HTML. There are a couple of options to solve this:
1. Load the page, let the JavaScript run, and then parse the videos out (slow).
2. Use the YouTube Data API. This requires an extra token; see the sketch below.
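A rough sketch of option 2, assuming the google-api-python-client package and an API key from the Google Cloud console (the key and playlist ID below are placeholders, not part of this repo):

```python
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"  # placeholder: create one in the Google Cloud console
youtube = build("youtube", "v3", developerKey=API_KEY)

# Fetch up to 50 items from a playlist (hypothetical playlist ID).
request = youtube.playlistItems().list(
    part="snippet",
    playlistId="PLxxxxxxxxxxxxxxxx",
    maxResults=50,
)
response = request.execute()

for item in response["items"]:
    snippet = item["snippet"]
    video_id = snippet["resourceId"]["videoId"]
    print(snippet["title"], "https://www.youtube.com/watch?v=" + video_id)
```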
Also, you're right: `search_page = soup.find_all('ol', 'item-section')[0]` isn't used at all.
In the meantime, check out youtube-dl; it does pretty much what this tool does/did.
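For a playlist, something like `youtube-dl -x --audio-format mp3 <playlist URL>` should grab the audio for every track (`-x` extracts audio; both flags are among youtube-dl's documented options).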