Sometimes when I use the full channel download command with youtube-dl, I get an 'unable to extract video data' error for a video. Is there a way to continue the full channel download after this happens, instead of downloading one by one?
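Not a full answer, but youtube-dl has flags for exactly this. A sketch, assuming a recent youtube-dl build (the channel URL and archive filename here are placeholders):

```shell
# -i / --ignore-errors: log the "unable to extract video data" failure and
#   move on to the next video instead of aborting the whole channel run.
# --download-archive: record the IDs of finished downloads, so a re-run
#   only fetches the videos that failed or are new.
youtube-dl -i --download-archive archive.txt \
    -o "%(title)s-%(id)s.%(ext)s" \
    "https://www.youtube.com/c/EXAMPLE_CHANNEL/videos"
```

Re-running the same command after the first pass picks up only the failures, which avoids redoing everything one by one.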
Off topic: it's ridiculous how much data modern storage devices can hold nowadays. My brother has an external drive for his PS4 that can hold the entirety of 4chan twice and still have space for a few games.
IDK how useful this will be for anyone, but you can download YouTube videos using Invidious. All you need to do is go to the Invidious website and search for the video there, or install this extension, which will redirect you to the Invidious site. Once you're on the video, you can right-click and save it to your computer. I've only tested this on Firefox, so I'm not sure whether it works in other browsers. It's also a good option if you want to watch YouTube without being tracked.
Here's what I've tried (I'm on Windows). Within youtube-dl, I've tried updating/refreshing the cookie file with --cookies, logging in manually with -u and -p (with multiple accounts), and confirmed I'm running the latest version. I ran it with and without my VPN, and set the VPN to a couple of different countries to see if that made a difference (it didn't). I also tried JDownloader2 and the youtube-dlc fork, plus a few browser-based methods; none could find the download link.
Based on the research I've done, it looks like the issue is caused by the age-gating on the video. This post on GitHub describes the issue as well as a fix/workaround. Dropping the text of that in the spoiler below.
1. I simply ran the curl command as stated above with actual parameters (replace REDACTED with your values).
2. Copied the response (the HTML object).
3. Put that into a file (e.g. youtube_html.txt).
4. Searched for the age-restriction line and removed it - otherwise the age-restriction check fires and falls back to an alternative approach/workaround, which we don't want, since we already have all the data in the HTML object.
<meta property="og:restrictions:age" content="18+"> <- remove that line in your txt file.
5. Overwrite video_webpage (do not remove the existing line; add a new line with the same variable name).
youtube-dlc/youtube_dlc/extractor/youtube.py
Line 1800 in 2045de7
video_webpage, urlh = self._download_webpage_handle(url, video_id)
Change it to this by defining the variable again:
video_webpage, urlh = self._download_webpage_handle(url, video_id)
video_webpage = open("FULL PATH TO YOUR SAVED FILE", "r", encoding="utf-8").read()
python3 -m youtube_dlc 7takIh1nK0s -F -v
Alternative to Step 1: press Ctrl+U (show source) and copy that. It should do the same thing, but I haven't tested it.
Keep in mind that with this patch in place you cannot download any other YouTube video; this is just a POC (proof of concept), so someone may be able to, or want to, fix that properly.
Another solution would be to add a feature that reads from a file when one is declared, instead of fetching the actual web page. That way you could feed in data from another tool. It should also get around geo-restricted videos if someone provides the HTML object. But that is just theory and would need to be tested.
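The gist of steps 1-5 above, as a Python sketch (this is not the actual youtube-dlc code; the HTML below is a fabricated stand-in for a saved watch-page response): strip the age-restriction meta tag from the saved page so an age-gate check that looks for it never fires, then feed the file's contents in as `video_webpage`.

```python
# The tag the workaround's step 4 says to remove from the saved page.
AGE_TAG = '<meta property="og:restrictions:age" content="18+">'

def strip_age_gate(html: str) -> str:
    """Remove the og:restrictions:age meta tag so an extractor that
    looks for it treats the page as unrestricted."""
    return html.replace(AGE_TAG, "")

# Stand-in for the response saved from curl (or Ctrl+U) in steps 1-3.
saved_page = (
    "<html><head>"
    '<meta property="og:title" content="some video">'
    + AGE_TAG +
    "</head><body>...player config...</body></html>"
)

cleaned = strip_age_gate(saved_page)
print(AGE_TAG in cleaned)  # prints False - the gate line is gone
```

The rest of the page (titles, player config) is left untouched, which is why step 4 warns against deleting anything beyond that one line.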
I believe this video is a special case: it requires you to be logged in to view it, and in addition it's age-restricted.
This is where I'm stuck. (I actually got past this since, but I'm leaving it up so anyone who looks at this can check my work, lol.)
---- "curl 'https://www.youtube.com/watch?v=ouvjY5RrzXk'-H 'cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID=REDACTED; LOGIN_INFO=REDACTED;' \ --compressed
Invoke-WebRequest : Cannot bind parameter 'Headers'. Cannot convert the "cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID=REDACTED; LOGIN_INFO=REDACTED;" value of type "System.String" to type "System.Collections.IDictionary".
At line:1 char:57
+ ... RrzXk' \ -H 'cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID= ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : InvalidArgument: (:) [Invoke-WebRequest], ParameterBindingException
+ FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.PowerShell.Commands.InvokeWebRequestCommand "
(Note: I don't actually have "REDACTED" in the cookie section of that command; I copied the info from my cookies.txt file after doing a fresh export from an account I know to be 18+.)
If I enter the command as: "curl.exe 'https://www.youtube.com/watch?v=ouvjY5RrzXk' \ -H 'cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID=REDACTED; LOGIN_INFO=REDACTED;' \ --compressed" I get: "curl: option --compressed: the installed libcurl version doesn't support this"
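Side note on the two errors above, for anyone following along: in Windows PowerShell, `curl` is an alias for `Invoke-WebRequest`, which is why the cookie header fails to bind as a string, and the Unix-style `\` line continuations don't work there either. `curl.exe` invokes real curl; the `--compressed` complaint just means that build's libcurl lacks compression support, and the flag can be dropped. A sketch of the working shape (cookie values are placeholders, and `-o` saves the response to a file instead of flooding the terminal):

```shell
# Real curl, written on one line for PowerShell (no backslash continuations).
# Saving to youtube_html.txt produces the file the workaround's step 3 asks for.
curl.exe "https://www.youtube.com/watch?v=ouvjY5RrzXk" -H "cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID=REDACTED; LOGIN_INFO=REDACTED;" -o youtube_html.txt
```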
If I run it without the --compressed option, I get a goddamn novel's worth of information (162 pages in a Word doc). Does anyone know whether I need to update something regarding the libcurl version, run the command differently, or just paste that whole thing into a Word doc, search for what I need to change, and figure out where the youtube.html text file I'm supposed to edit resides? Guessing that I need to save a copy of the original text so I can revert it after successfully downloading this, right? Edit: I misunderstood what I was reading; now I think I just need to paste it into a document and switch some stuff around.
Okay, I figured out the syntax of the command and got it to execute properly and give me what I need. Also, I'm a dumbass: I could have just opened the page the video is on and hit Ctrl+U to show the source. LMAO. For some reason I insist on doing things the hard way.
I'm on step 5 of the fix now, trying to figure out how to replicate this step. I literally have no idea what I'm doing.
Where do I go to overwrite that line / how do I get to it? I also think I may have an issue with the way I installed Python, so I'll look into that. In the meantime, should anyone take pity on my retarded ass and feel like pointing me in the right direction, I'd be ever grateful. Update: I still have no idea what I'm doing, but after reading it again and again and again, I figured out what I needed to do on step 5. Most recent edit:
Okay, so I had to download the youtube-dlc repository and manually edit the youtube.py code, which I did. This is now where I'm stuck: python3 -m youtube_dlc ouvjY5RrzXk -F -v
C:\Users\redacted\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\python.exe: No module named youtube_dlc
Anyone know if I need to move a file around or something?
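Regarding the `No module named youtube_dlc` error: `python -m youtube_dlc` only works if Python can import the package. Two possible fixes, assuming you're working from a clone of the youtube-dlc repo (an untested sketch, not a verified recipe):

```shell
# Option 1: run from the repo root, where the youtube_dlc/ package folder
# lives, so the current directory is on Python's module search path.
cd youtube-dlc
python -m youtube_dlc ouvjY5RrzXk -F -v

# Option 2: install your edited working copy so it runs from anywhere;
# -e (editable) keeps it pointing at the tree you patched.
python -m pip install -e .
```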
I humbly accept the impending autistic ratings, for this is truly embarrassing. Both the fact that I haven't solved this and that I've spent this long attempting to archive a video of a death fat talking about other death fats. When I do figure it out, I'll update this post with what I did in case another farmer runs into the same problem.
I know I can screen record it from my phone, I've already done so. I just want to understand/fix this so the next time I run into an error like this, I'll know what to do. <3
EDIT:
Nine times out of ten for me, adding -u and -p fixes the ones that were unable to extract. Whenever I see that unable-to-extract error, I cancel the entire run, delete everything it's downloaded so far, and run the command again with my username and password attached. Now I just always add those credentials whenever I'm doing playlists. I know that doesn't really answer your question, but I'm throwing it out there in case it helps.
You can use this extension here: Easy Youtube Video Downloader Express. You'll need to be logged in on YouTube, of course. (It's not available for Chrome, though.)
Thanks for that, I'll definitely keep it in my back pocket for future reference. I prefer not to fuck with extensions, but as a last resort it's nice to have another method. Confirmed that it does work on this video. <3 Thank you so much, @Blitzkrieger! <3
I'm close to figuring it out using youtube-dl, whenever I actually manage that I'll go ahead and post an ELI5 version in the above post.
It doesn't work for me with a fresh cookies file, though; I tried that initially. This post on GitHub describes the exact issue, so I'm wondering whether downgrading my youtube-dl version would work, since it clearly works for you. Sorry to blow this thread up with my idiocy, guys. Thank you all for trying to help me; I really appreciate you taking the time to look at it. <3
In which case I can either figure out how to downgrade my version or keep plugging away at the fix. At least now I have some options! Thank youuuuu!!!
Edit: Didn't need to actually downgrade; I just grabbed the version @Gustav Schuchardt used [2020.07.28] from here.
Code:
PS C:\Users\redacted> .\youtube-dl-2020.07.28.exe --cookies C:\Users\redacted\cookies.txt ouvjY5RrzXk -f 18
[youtube] ouvjY5RrzXk: Downloading webpage
[youtube] ouvjY5RrzXk: Downloading MPD manifest
[download] Destination: on piggy situation-ouvjY5RrzXk.mp4
[download] 100% of 10.95MiB in 00:06
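For anyone who'd rather pin via pip than grab a standalone .exe, the same date-versioned release can presumably be installed straight from PyPI (assuming that version is still published there):

```shell
# youtube-dl versions are date-based; this pins the 2020.07.28 build used above.
python -m pip install youtube-dl==2020.07.28
```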
It's lowkey annoying that I wasted so long trying to figure out the workaround referenced in the GitHub post, but at least this confirms everything else I did was correct and it was just the version I was running. I'll still try to figure out how to make it work on the newer version, as I'd like to understand it better. If anyone runs into this post later down the line with a similar issue, feel free to PM me and I'll do my best to help.
Massive thanks to you all for your patience and wisdom. <3
How can I un-redact names from screenshots that were redacted using a shitty markup tool in a mobile OS?
I've seen people do some image-editing fuckery to reveal the stuff that was rubbed out of screenshots like this:
But I don't know what steps to take or what settings to manipulate in tools such as GIMP, in order to recover anything that's recoverable. In that example, the right-hand name is pretty readable without any manipulation at all, but I'm not sure if the left-hand name can be saved. Any advice on shit like this?
You can use any image-editing program that lets you adjust levels, contrast, or color values. In that example, "Kathryn" is clearly visible, while for "Stephanie" only "st-p-nie" is visible.
Here are two examples:
Just adjusting levels:
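For anyone wondering what a levels adjustment actually does to those pixels: it remaps brightness by picking new black and white points and stretching everything between them to the full range. A minimal, dependency-free sketch on raw 0-255 values (the cutoffs are made-up numbers you'd tune per image):

```python
def adjust_levels(pixels, black=100, white=140):
    """Stretch the [black, white] brightness band to the full 0-255 range.
    Values at or below `black` go to 0, at or above `white` go to 255 -
    which is what makes faint text under a marker scribble pop out."""
    out = []
    for v in pixels:
        if v <= black:
            out.append(0)
        elif v >= white:
            out.append(255)
        else:
            out.append(round((v - black) * 255 / (white - black)))
    return out

# Barely-visible text under a semi-transparent marker: the values cluster
# in a narrow brightness band, so differences are too small to see.
row = [110, 112, 128, 131, 119]
print(adjust_levels(row))  # the tiny differences become large, visible ones
```

GIMP's Colors > Levels dialog does the same thing per channel; dragging the input black/white sliders inward is equivalent to raising `black` and lowering `white` here.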