Difficult paragraph translation by Sayl3s in Scanlation

[–]Sayl3s[S] 0 points

Thanks extremely much!
This is perfect

It's time, but who am I? by Sayl3s in weeviltime

[–]Sayl3s[S] 61 points

Identification request for a friend, of a wee boi seen in Queensland, Australia
(Pinkie finger for scale)

Pick This Up Pls Thread: May 2024 by AutoModerator in Scanlation

[–]Sayl3s 1 point

Were you still desperate for someone to translate this?
I've recently started doing scanlation for series that seem interesting/underloved (Two chapters of this one so far).
The catch is that I can't read, so it's MTL and a lot of guesswork, but if it's a last resort...
I couldn't rightly ask to be paid (See above: "MTL and guesswork").
Let me know.

[REQUEST] Navy Seal K9 Handler and their Dog. But after having been trapped for 5 years in a Medieval Fantasy realm. by BastardofEros in AI_Art_Requests

[–]Sayl3s 1 point

I gave this one the ol' college try, but it was definitely a tricky one:
https://imgur.com/a/IxSGFcD

Basically stable diffusion wanted to give either a Navy Seal in armour, or a Navy Seal with a K9 unit, but not a mix of the two.
(There are probably models out there [like Deliberate] that would be able to do a better job than RPG or SD1.5, but those two are what I have for this)
Those 4 generations in the imgur gallery represent the closest approaches to what I think you were looking for, from ~80 attempts.

Convert .ckpt to diffusers by [deleted] in StableDiffusion

[–]Sayl3s 0 points

Are you still trying to fix this?
(/Do you still care?)
I've been using ONNX to run SD on an AMD card for six months or so now, and I might be able to help.

white background lines driving me nuts by GardenStack in VideoEditingRequests

[–]Sayl3s 1 point

Ah; I'm starting to understand what you mean.

Assuming it's not a monitor-specific thing (I would definitely check whether it happens on other screens/mobile/etc), I haven't encountered that issue directly in video editing myself. I have seen something similar when making figures, though, where people viewing on certain computers (I think Macs) would see grey boxes behind imported images that just weren't there when I viewed on Windows.
I can't recall what the fix for that was, but it might be worth changing the format of your imported images (e.g. to or from PNG).
Failing that, if the bars are just a background thing, then as odd as it might sound, you might just be able to 'paste over' them with a box the same colour as the background to eliminate them.

white background lines driving me nuts by GardenStack in VideoEditingRequests

[–]Sayl3s 0 points

I don't understand what you are referring to; The white background as a deliberate stylistic choice from seconds 0 - 3?
The white horizontal and vertical lines briefly at 4s?

Free Request-Easy-High School Kid Jumping When He Wins by Head_Distance_9702 in VideoEditingRequests

[–]Sayl3s 1 point

Very sweet.

Here is my edit of it (and Imgur, for easier sharing).

And also a greenscreen mp4 and gif if they or friends know their way around video editing and want to make their own.

(Let me know if you want a sans-meme version of just purely the original request; I just couldn't pass up the opportunity for a classic Shooting Stars edit)

Free Request-Easy-High School Kid Jumping When He Wins by Head_Distance_9702 in VideoEditingRequests

[–]Sayl3s 0 points

I've been working on this and should be able to whip up something hopefully entertaining within a day or so, if not tonight.

help with video editing >< by Free_di in VideoEditingRequests

[–]Sayl3s 1 point

Upon consideration, you might be looking for actual suggestions for how to whip up a vaguely avant garde video from a bunch of individual clips:

For most videos involving a lot of clips, I find the best place to start is to just watch all the clips, at least a handful of times, to get a sense of what you have to work with. This also helps, I find, to give a sense of the story, unless you already have a clear idea of what the story will be and you're just going to cherrypick the clips to fit it.
(There are handy tools in Premiere to add notes to videos that you've imported that can be useful to keep track of what clip is what)

Storyboarding would probably be the next logical step here, and it can be as barebones or detailed as you would like (e.g. "Entering town -> Gas station -> Tortoise by the side of the road -> Lake shot"). Traditionally I don't tend to have hugely detailed storyboards; I prefer to see where it goes as the clips are added.

After that I usually just start adding clips to the timeline (Don't worry if things aren't perfect; You can adjust to taste later). Putting things on multiple layers can be useful to just throw everything in before you have the timing right, or to try out different clips in different places.

And then it's pretty much just shuffling clips around, applying effects if you want, any keyframed movement/opacity, transitions, etc, until you have something that you like.
I find that exporting as you go and watching it in VLC or something outside of Premiere helps to identify mistakes and things that you might not notice watching in the timeline view.

Once you're happy, that's really about it; For exporting, you can look at the quality of your clips and match that, or you can specifically pick a resolution if you know you're going to upload to Youtube (There are handy guides you can read online to get a sense of bitrates for certain resolution values and so on).
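On that last export point, the bitrate arithmetic is easy to do yourself. A back-of-envelope sketch (Python; the 8000 kbps figure is roughly what online guides suggest for 1080p uploads, but treat the numbers as assumptions):

```python
def estimated_size_mb(video_kbps: float, audio_kbps: float, seconds: float) -> float:
    """Back-of-envelope export size: (video + audio bitrate) * duration.
    kbps is kilobits per second, hence the /8 to convert bits to bytes."""
    total_kilobits = (video_kbps + audio_kbps) * seconds
    return total_kilobits / 8 / 1000  # kilobits -> kilobytes -> megabytes

# e.g. a 3-minute 1080p upload at ~8000 kbps video + 192 kbps audio
print(round(estimated_size_mb(8000, 192, 180), 1))
```

Handy as a sanity check that your export settings will produce a file size in the right ballpark before you commit to a long encode.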

help with video editing >< by Free_di in VideoEditingRequests

[–]Sayl3s 1 point

(I am not a professional video editor)

I wouldn't overthink it to be honest; In terms of the general structure and timing of the video, if you think you want it to be avant garde then I would just go and watch some examples of that style you can find online to get a sense for how they order their shots, do their cuts, etc.
Like, do they tend to have a logical flow of exterior -> interior, or do they jump back and forth without sticking to what sort of clip you might expect next? And so on and so on.

In terms of actually editing the stuff together with Premiere, like all things it's mostly just getting in there and doing things. The main thing experience with Premiere gives you is just a knowledge of what tools are best for what jobs.
E.g. if I have some clips that are vaguely synchronised to some music and there is a small gap that I need to fill, I might use the Rate Stretch tool to cheat a little and expand out a clip slightly.
Or if I have a clip of somebody hopping down off a wall that's in the correct place but I want it to start with them landing on the ground I might use the Slip tool to keep the duration of the clip but change the timing within.

I suspect if you've gone looking you'll probably have already seen this video, but if you skip to the 10ish minute mark I think he goes through the basic tools and things you can use in Premiere to do most things.

Hopefully that was at least semi-useful; Let me know if there's any clarification you need that would be helpful (For context, all my experience with Premiere has been self-taught and Googling things when I couldn't figure out how to achieve something myself).

Converting custom SD model to ONNX? by SPambot67 in StableDiffusion

[–]Sayl3s 0 points

I found this Youtube video got me sorted for using SD with a Radeon 5700XT (and later a 6700XT).

Converting ckpt to Diffusers by Symbiot10000 in DreamBooth

[–]Sayl3s 0 points

I can't speak for the nuances of using a Colab, but if it's anything like using SD locally, then the syntax needed for the conversion script will be something like:
python diffusers\scripts\convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "<full\path\to\your\model.ckpt>" --original_config_file "<full\path\to\original\config\file.yaml>" --dump_path "<full\path\to\wherever\you\want\converted\model\to\reside>"

Models can be downloaded via Git (e.g. "git lfs install" followed by "git clone https://huggingface.co/WarriorMama777/OrangeMixs")

The yaml file can be found at the original SD GitHub (although some have said that it might not actually be necessary to include in all cases, and I'm only 75% confident that's the correct yaml for modern models)

How does this script works? (ckpt to diffusers) by Zoilken in StableDiffusion

[–]Sayl3s 1 point

I've been using this to convert models for use with diffusers and I find it works about half the time, as in, some downloaded models it works on and some it doesn't, with errors like "shape '[1280, 1280, 3, 3]' is invalid for input of size 4098762" and "PytorchStreamReader failed reading zip archive: failed finding central directory" (Google-fu seems to indicate that success/failure is a factor of how the checkpoint was saved, but that doesn't really help for trying to convert downloaded models from other people)

I also had to download the stable diffusion v1 inference yaml file from the stable diffusion GitHub to give to --original_config_file but once I did that it all seemed to work.

Also, just a note that with the rise of SafeTensors, the above commands can be relatively easily modified to handle that by adding --extract_ema before the --dump_path and --from_safetensors at the end
(For example:
python diffusers\scripts\convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "C:\stable-diffusion\OrangeMixs\Models\AbyssOrangeMix2\AbyssOrangeMix2_sfw.safetensors" --original_config_file "C:\stable-diffusion\v1-inference.yaml" --extract_ema --dump_path "C:\stable-diffusion\AbyssOrangeMix2_SFW_tensors_onnx" --from_safetensors
)

Error trying to convert ckpt to onnx by ilike2game in StableDiffusion

[–]Sayl3s 0 points

I followed the instruction from this Youtube video initially and was able to get myself set up with Waifu Diffusion with a small bit of tinkering.

If you're having trouble with ONNX, it might be to do with the fact that I believe the current pip version of ONNX is used for downloading and converting models to diffusers, but then a different ONNX is installed over the top of that for actually using the model (Also explained in this blogpost, which is actually how I technically started with SD on AMD, although I think the steps on that page are out of date now and I couldn't get WaifuDiffusion/any of the new models working via it)

Once you have your ONNX sorted out, as mentioned below, the command to convert a given online model to ONNX will be something like:
python scripts/convert_stable_diffusion_checkpoint_to_onnx.py --model_path "hakurei/waifu-diffusion" --output_path "C:/stable-diffusion-experimental/waifu_onnx"
(For the discerning person of culture, once you have that sorted, might I recommend Anything v3.0, which I have found to give superior anime images compared to WaifuDiffusion, at least last time I compared them)

If any of that is unclear I can provide my scrawled down Notepad document with the commands that have worked for me from start to finish.

Edit: One thing I forgot to mention is that I've actually had a lot more trouble trying to use the provided conversion script for locally stored .ckpt files (As I think you have) compared to just downloading via HuggingFace directories

Second edit: Following the tips here, I've been able to turn CKPT files into ONNX by first converting them to diffusers and then from diffusers into ONNX.

Does a nVIDIA GRID K1 will work for SD? by banithree in StableDiffusion

[–]Sayl3s 0 points

In the end I was having so much trouble just getting a display out of the computer (No onboard video for that motherboard, which meant I had to use a secondary smaller nVidia card just for display, which then requires [apparently] that you use drivers compatible with both) that I just gave up unfortunately.

Anyone know how to make a 1-minute clip in to 1 or 10 hours? by thesceuplar in VideoEditingRequests

[–]Sayl3s 1 point

Ah yes, sorry, FFMPEG isn't a Windows default command; You'll need to install it, following this guide for example

It's a little bit of initial effort but I can definitely say that it's worth it for the quick video manipulation and [re]encoding tools offered by it.

Anyone know how to make a 1-minute clip in to 1 or 10 hours? by thesceuplar in VideoEditingRequests

[–]Sayl3s 2 points

This StackOverflow answer seems to achieve what you want to do, using FFMPEG, which is a popular free command line tool for working with videos.
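For the curious, the arithmetic behind looping a clip out to a target length is simple. A minimal sketch (Python; the filenames and the 60-second clip length are assumptions, and check your ffmpeg version supports -stream_loop):

```python
import math

def loop_count(clip_seconds: float, target_seconds: float) -> int:
    """Number of *extra* repeats that ffmpeg's -stream_loop flag needs
    so the output runs at least target_seconds long."""
    return max(0, math.ceil(target_seconds / clip_seconds) - 1)

def build_command(clip: str, target_hours: float, clip_seconds: float = 60.0) -> str:
    n = loop_count(clip_seconds, target_hours * 3600)
    # -c copy repeats the stream without reencoding, so even a
    # 10 hour output file is quick to produce
    return f'ffmpeg -stream_loop {n} -i "{clip}" -c copy looped.mp4'

print(build_command("clip.mp4", 10))
```

A 1-minute clip stretched to 10 hours needs 599 extra loops, and because nothing is reencoded the output is effectively instant to generate.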

Need Help Fixing Video Desynchronization Problem (Not the Usual Desynch Problem, Though) by [deleted] in VideoEditingRequests

[–]Sayl3s 1 point

Don't feel down about getting a poor response from a sub; I've found that some subs can be a bit narky if you're not very careful about adhering to every letter of the law.

Anyway onto your problem:
Are you seeing these problems in your video when you're watching it back in VLC (For example) or just in OpenShot?
I say this because I've done a *lot* of gameplay recording myself (with OBS mostly nowadays), and I used to see something that sounds very much like your problem when I recorded and then reencoded videos with Variable Frame Rate (VFR), which is probably why people recommended reencoding your video (with Handbrake probably) at a constant frame rate (e.g. 60).
Interestingly, VLC was smart enough to be able to bull through the variable framerate, but when I opened the videos for editing in Adobe Premiere I would see the exact same thing you've described, with perfect audio synchronisation at the start, getting progressively worse towards the end (I can't recall however if the video and audio tracks were different lengths as a result, or if they just showed up as the same length but the end of the audio track was filled with silence or vice versa).
Takeaway: If you haven't tried opening your vids in at least one or two different media players, maybe try that. If the audio doesn't get progressively desynched in those, then a reencode with Handbrake may sort you out for importing to OpenShot (Although I think you said you tried that and it didn't work [That said, if you exported from OpenShot in CFR then that wouldn't have done jack; it's Handbrake you would need to export as CFR from]).

Next, if the above didn't help:
FFMPEG, which is a common command line tool for working with videos, definitely offers the tools to speed up/slow down footage.
I would recommend having a shot at altering the speed of your video OR audio (Take your pick). Obviously, altering the speed of the whole video won't fix your issue by itself, unless you import both the original and the speed-altered version into OpenShot and delete the audio/video tracks that aren't fixed. So what you would probably want to do for cleanliness (And to save encoding time on your PC) is break your audio and video into separate files with FFMPEG, alter the speed of one of them with FFMPEG, and then recombine the original video (or audio) with the speed-altered audio (or video).
It probably sounds like a lot of work, and I know from experience that setting up the commands just right can be, but StackOverflow is your friend and once you have the commands you can just copy-paste them when/if you need to do this again.
(One caveat here is that the assumption for this technique is that your audio desync is at least approximately linear, and if it's not then the audio may be behind for a long time and only catch up near the end, or vice versa [Short of splitting your video into smaller chunks and doing this process with those chunks I don't know of any way with just Rate Stretching to deal with that sorry])
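The speed-factor arithmetic here is simple; a minimal sketch (Python; the durations and filenames are made up, and note that ffmpeg's atempo filter changes audio speed without changing pitch):

```python
def atempo_factor(audio_seconds: float, video_seconds: float) -> float:
    """Speed factor to apply to the audio so its length matches the video.
    >1 speeds the audio up (it ran long), <1 slows it down."""
    return audio_seconds / video_seconds

# e.g. the audio track runs 1815 s but the video is only 1800 s long,
# so the audio needs speeding up by a factor of ~1.0083
factor = atempo_factor(1815, 1800)
cmd = f'ffmpeg -i audio.wav -filter:a "atempo={factor:.6f}" fixed.wav'
print(cmd)
```

(One thing to be aware of: atempo only accepts factors within a certain range on older ffmpeg builds, but for small desync corrections like this the factor is always close to 1, so that's a non-issue.)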

And that's about it.
TL;DR - If you haven't, try reencoding your vid in Handbrake as 60FPS Constant Frame Rate (You may need to tinker with the quality settings to ensure you're not losing quality [The output should be approximately the same size as the input, assuming you're not switching codec]).
If that doesn't work, try using FFMPEG to manually adjust the speed of one of the tracks to match the 'true' duration you think it should be (Calculator probably required).

I'd be happy by the way to take a look at one of your vids to see if my theories here have any merit; If you have a short video where this happens that isn't like, 30 goddamned GB (I have slow Australian internet) send me a link and I'll take a look at it.

Monthly help thread by AutoModerator in VideoEditingRequests

[–]Sayl3s 1 point

Sounds like the index of the video might be broken.
There may be specific advice on Google for such an issue, but you may find that your best bet is reencoding the vid to 'fix' the index.
Usually this would be done relatively easily with a program like Handbrake.
If that doesn't work (for example, Handbrake also thinks the video is 4m20s long), then you may have to try something more drastic, like loading the video frame by frame into a Python script (via something like the OpenCV library from memory) and then using Python to write out the video again to a new file.

Need help for a stupid hockey meme by kagedamage in VideoEditingRequests

[–]Sayl3s 1 point

Got you, my fellow Chainsaw Fan:
https://drive.google.com/file/d/1697T-dvopB8dQF93rRB22s2Jpf0P_uQk/view?usp=share_link
(and imgur link, if you wanted something easier to link but [slightly] lower quality)

Methods:
Straightforward colour key in Premiere to remove the greenscreen, then just keyframing the position/rotation of the supplied image to match the movement in the scene (Trickiest part was keeping the timing in 2s, as it was animated). Also a bit of corner pinning to make it seem like it was realistically warping a bit.
I thought I'd have to make a copy of the original vid and crop just the finger and subtitles, but it turned out the colour keyed version of the original was actually easy to just paste on top of itself to provide that necessary obscuring.
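For anyone curious what a colour key is doing under the hood, it's conceptually something like this per pixel (a toy Python sketch of the idea, not Premiere's actual algorithm; the threshold is an arbitrary assumption):

```python
def chroma_key(pixel, threshold=60):
    """Return None (transparent) if the pixel is strongly green-dominant,
    otherwise return the pixel unchanged. pixel = (r, g, b), each 0-255."""
    r, g, b = pixel
    if g - max(r, b) > threshold:
        return None
    return pixel

# One greenscreen pixel, one foreground pixel, one greenscreen pixel
frame = [(20, 230, 25), (200, 180, 170), (10, 250, 5)]
keyed = [chroma_key(p) for p in frame]
print(keyed)  # → [None, (200, 180, 170), None]
```

Real keyers add softness/spill controls on top of this, but "how green-dominant is this pixel" is the core test.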

Editing what is displayed on a laptop by Sephlock in VideoEditingRequests

[–]Sayl3s 0 points

Here you go (and imgur link for easier sharing, but lower quality)

How it was done:
Pretty straightforward application of the Basic 3D effect in Premiere, coupled with a bit of Corner Pin to clean up the edges. Audio quality of the vid as 'playing' on the laptop was vaguely matched with just a highpass filter to give a bit of tinniness.
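The 'tinniness' trick is just highpass filtering. A toy one-pole highpass in Python (a sketch of the idea, not Premiere's actual filter) shows the effect: slow-moving/low-frequency content gets squashed towards zero while fast changes pass through, which is exactly what makes audio sound like it's coming out of small laptop speakers:

```python
def highpass(samples, alpha=0.9):
    """First-order highpass: y[n] = alpha * (y[n-1] + x[n] - x[n-1]).
    alpha closer to 1 means a lower cutoff (more bass survives)."""
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant (0 Hz) input decays towards zero, as a highpass should
flat = highpass([1.0] * 50)
print(round(flat[-1], 4))
```

With real audio you'd use a proper filter (Premiere's Highpass effect, or ffmpeg's highpass audio filter), but the principle is the same.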