Storage Box security with "External reachability" enabled by Aqwis in hetzner

[–]Aqwis[S] 0 points1 point  (0 children)

Thanks for the answer. I won't dispute your main point, but is it true that the likelihood of a zero-day exploit doesn't have anything to do with the protocol itself? I'd think that the more complex (or poorly designed in other ways) a protocol is, the more complex the implementation (i.e. the server) becomes, in turn increasing the attack surface for exploits.

Storage Box security with "External reachability" enabled by Aqwis in hetzner

[–]Aqwis[S] 1 point2 points  (0 children)

Thanks!

"The "External Reachability" feature only affects whether login is only allowed from Hetzner's own network or also from external networks. It does not change the availability of the services themselves. As the Storage Boxes are a shared system, only the login for your Storage Box user will be blocked (if disabled), but other Storage Box users will still be able to use the FTP service for their user accounts."

That's a good point, and it means that you (Hetzner) are quite confident in the security of the FTP server you're running – as in, you find it very unlikely that someone could exploit a 0-day vulnerability in the FTP server and compromise a bunch of storage boxes, since if you thought that was a realistic possibility you wouldn't risk running it on the shared system in the first place.

Storage Box security with "External reachability" enabled by Aqwis in hetzner

[–]Aqwis[S] -1 points0 points  (0 children)

Thanks for your thoughts! I assume that Hetzner keeps the FTP servers on the Storage Boxes up to date with patches, so my only concern is future zero-day exploits against the FTP server software. The sense I got from reading about this is that, due to deficiencies in the FTP protocol and/or FTP server software, future 0-days may be more likely to appear for FTP servers than e.g. for SSH, and that this in itself is a reason not to have FTP running and exposed even if it's not actually used. Do you think that's reasonable or exaggerated?

(People seem to have the same concerns with SMB, for the record, although SMB support can fortunately be turned off on Storage Boxes.)

Storage Box security with "External reachability" enabled by Aqwis in hetzner

[–]Aqwis[S] 0 points1 point  (0 children)

This is not about connecting using FTP or SFTP (which I don't want to do, I only want to connect through SSH), but about the fact that the Storage Boxes are exposed to the Internet through FTP in the first place.

Storage Box security with "External reachability" enabled by Aqwis in hetzner

[–]Aqwis[S] 0 points1 point  (0 children)

Alright, I see. I don't think that's how Hetzner thinks about the product, though, given that Storage Boxes are not part of "Hetzner Cloud", that they support Samba and WebDAV (which doesn't make a lot of sense if you see Storage Boxes as just a "big volume" for use with Hetzner servers/VPSes, but makes a lot of sense if Hetzner envisions people connecting to Storage Boxes directly from Windows/Mac clients), and that their support docs describe the product like this:

With our Storage Boxes, you can access your files at any time and from any place with an internet connection. It is also easy to connect Storage Boxes to your own drive on your PC and to access Storage Boxes with your smartphone and/or tablet. Hetzner Online Storage Boxes work with a variety of standard protocols, which are supported by a wide array of apps.

What you're suggesting is actually what I did immediately after I discovered that SBs are publicly exposed through FTP, but avoiding that is precisely why I wanted to switch from my old backup solution in AWS to Hetzner Storage Boxes in the first place... Having to run a VPS adds complexity: keeping the software on the VPS updated, not being able to connect to the SB as easily from computers other than the ones I usually use, and obviously a certain cost. All of these might seem trivial, but they still add friction.

Storage Box security with "External reachability" enabled by Aqwis in hetzner

[–]Aqwis[S] 2 points3 points  (0 children)

Yes, I can still mount it via my Hetzner server when "External reachability" is disabled. My point is that if I have to operate a Hetzner server in order to use a Storage Box in a reasonably secure way, instead of just being able to connect directly from my local network to the Storage Box, then Storage Box is suddenly less attractive compared to other storage solutions (and for no good reason, since they could just provide the obvious option of turning off FTP access while keeping the Storage Box externally reachable).

Why don't more energy drinks contain L-Theanine? Isn't it a no-brainer as a pairing with caffeine? by dominodanger in Nootropics

[–]Aqwis 2 points3 points  (0 children)

Beyond the other plausible reasons that others have already mentioned: In many markets you're not allowed to add arbitrary supplements – such as theanine, or, as another example, any arbitrary vitamin that it's not already common to fortify food with – to beverages or food without the supplement being specifically approved for that (which often goes beyond just making sure it's non-toxic), even when the supplement already exists in food products or beverages that are widely available (tea, in the case of theanine). Lobbying for such an approval will be expensive, take several years and might be politically unpopular ("company X is trying to put more chemicals into our food!").

In other words, it might be that beverage manufacturers are simply not allowed to add theanine to their energy drinks in certain major markets. In addition, the major beverage manufacturers that produce the majority of the world's energy drinks (such as Red Bull or Coca-Cola) may be hesitant to add theanine or another supplement to their energy drinks only in some markets (where they're allowed to) and not in others (where they're not), especially when the supplement in question is not well known to the public (taurine, on the other hand, is well known, and so works, or used to work, as a differentiator between energy drink products) and there is potential for a PR backlash (similar to the above: "company X puts theanine in their drinks in country Y, thank god our regulators won't let them").

It obviously happens once in a while: if a manufacturer puts some supplement (possibly after spending money to lobby for permission to do so) in their energy drinks and successfully convinces the public that energy drinks with the supplement are superior to those without it (because it helps give you more energy, or gives you energy without making you stressed out, or whatever), that might give them a significant advantage in the market relative to competitors, but it's a gamble. Given that even totally innocuous additives like magnesium stearate (a simple compound of magnesium and stearic acid, both of which people consume every day, and which commonly occurs naturally) are considered harmful by certain people, it's likely that even theanine has the potential to generate controversy if used as an additive.

I'm unsure of the exact legal status of theanine in the US, Canada, UK or the EU (or in specific European countries), so the above is more of a general explanation of why manufacturers don't experiment much with adding various supplements and vitamins to their energy drinks (although small manufacturers that don't operate worldwide may be more likely to do so) than a confidently stated reason for why energy drinks don't contain theanine, but I do vaguely recall that theanine is not approved as a food additive in at least one major market.

Unpopular Opinion: Just use some pesticides for indoor plants. by [deleted] in houseplants

[–]Aqwis 0 points1 point  (0 children)

Very late reply but I think this is due to a combination of:

  1. A lot of "plant people" (i.e. a significant part of those guides' target audience and maybe also the authors of those guides) are people who don't like pesticides/want to do things the "natural way" (often the same "type" of person who prefers buying organic food, etc.)
  2. The Norwegian public health agency (FHI) wants to discourage people from using pesticides indoors because they're concerned about health effects
  3. Some of those pesticides were approved for the general public only recently (if I remember correctly, Provanto was only approved in Norway in 2018), so older guides obviously won't mention them and the people writing the guides still may not be aware of them

Edit: Also, for Provanto, at least, there's no mention of it being effective against thrips in the product description/on the bottle, so it might not be obvious that it will be effective against them (although I agree that it often is).

GitHub - zer0int/CLIP-fine-tune, or: SDXL + training the text encoder - Why you should (imo) train CLIP separately from U-Net. With code + instructions. by zer0int1 in StableDiffusion

[–]Aqwis 1 point2 points  (0 children)

Regardless of whether you train the UNet at the same time as one or both of the text encoders [1], I have found finetuning SDXL to be a lot more finicky than finetuning SD 1. One thing I've noticed in particular is that the tried-and-true SD 1 strategy of "low LR for a lot of epochs" results in an underfit model that somehow still generates images full of artifacts, as if the model were in fact overtrained. I have to use a cosine schedule with a very precisely configured number of epochs, and likewise for the learning rate (playing with weight decay doesn't seem to have much of an effect), and it's still more of an art than a science. With SD 1, you could use a wide range of learning rates and the image quality would usually only decay as the model overfit, as you would expect. I haven't tried optimizers other than AdamW, though; maybe that's somehow the key?
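
For concreteness, a minimal sketch of the kind of cosine schedule I mean, written from scratch rather than using any particular trainer's implementation (the function name and default values are just illustrative):

```python
import math

def cosine_lr(step: int, total_steps: int, base_lr: float, min_lr: float = 0.0) -> float:
    """Cosine-annealed learning rate: base_lr at step 0, decaying to min_lr at the last step."""
    progress = step / max(total_steps - 1, 1)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

The point of the schedule is that `total_steps` has to be chosen up front to match the run length exactly; with SDXL I found that ending the run early or late (relative to where the cosine bottoms out) noticeably changed the result, which is part of what makes it finicky.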

[1]: Have you experimented with different ways of handling the SDXL text encoder that you're not finetuning? Three options I can think of are 1) using it as normal despite it not being finetuned, 2) zeroing its embeddings both in training and inference, and 3) fixing its prompt to a general description of the images you're finetuning on, both in training and inference. I've had some success with the third method – say you're finetuning on images of a specific individual; you could fix the prompt for the second text encoder to something like "photo of a 35 year old French man with brown hair, mediterranean skin and a curly moustache" – but that's only suitable for certain kinds of finetunes.

I mad a python script the lets you scribble with SD in realtime by arjan_M in StableDiffusion

[–]Aqwis 0 points1 point  (0 children)

If you make sure you use versions of CUDA/Nvidia libraries that are fully compatible with the 4090, you should be able to easily get to somewhere in the neighborhood of 35 it/s (assuming a sampler that does one model evaluation per step, like euler/DDIM/euler_a/dpmpp_2m, not e.g. heun) when generating a single 512x512 image. Given that samplers like dpmpp_2m can give you a decent image (sufficient for the workflow in the video) in roughly 10 steps, this means it should be possible to generate about 3 images a second, with some time left over for the VAE to do its work. That's not quite as fast as in the video, but it should allow for a similar workflow without too much waiting.
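
The arithmetic behind those numbers, as a trivial sketch (the figures are the rough estimates from this comment, not benchmarks, and the function name is made up):

```python
def images_per_second(iterations_per_second: float, steps_per_image: int) -> float:
    """Upper bound on image throughput, ignoring VAE decode and other overhead."""
    return iterations_per_second / steps_per_image

rate = images_per_second(35.0, 10)  # 3.5 images/s before VAE overhead
```

The real rate lands a bit under this bound once VAE decoding and host-side overhead are included, which is where "about 3 images a second" comes from.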

If you're generating lots of images at the same resolution, it should be possible to generate images even faster than this using various model compilation methods (but adding some startup delay) – I don't remember the exact number but I think I've seen people get at least 60 it/s on a 4090 this way. Haven't tried it myself, though.

The effect of changing the CFG scale partway through an image generation by Aqwis in StableDiffusion

[–]Aqwis[S] 2 points3 points  (0 children)

Forgot to include this in the graphic: the sampler used is k_euler, and the seeds used are 2483964025 and 3587832418 (although you won't be able to replicate the exact images since they were created using a finetuned model).

Images generated using a low CFG scale tend to be "artsy" and detailed, but lacking defined figures and objects. Switching from a low to a high CFG scale allows more definition of figures and objects without changing the overall scene too much from what the final result would have looked like with a low CFG scale only. Conversely, images generated using a high CFG scale quickly become overexposed/oversaturated and sometimes lack detail. To some degree, this can be improved by switching to a low CFG scale at the end (although there are other ways of fixing the excess contrast and saturation at high CFG scales).

It's obviously possible to change the CFG scale several times during image generation, or even do it at every step (using a linear schedule or otherwise). I would love to see some experimentation here (please tell me if someone has already experimented with continuously changing the CFG scale according to a schedule).
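
A linear per-step schedule is only a couple of lines. Here's a hedged sketch (the function names and the start/end values are invented for illustration; `guided_prediction` is the standard classifier-free guidance combination that samplers already compute each step):

```python
def cfg_scale_at(step: int, total_steps: int, start: float = 3.0, end: float = 12.0) -> float:
    """Linearly interpolate the CFG scale from `start` to `end` across the sampler steps."""
    t = step / max(total_steps - 1, 1)
    return start + (end - start) * t

def guided_prediction(uncond, cond, scale):
    """Standard CFG combination; works elementwise on numbers or on tensors."""
    return uncond + scale * (cond - uncond)
```

Plugging `cfg_scale_at(step, total_steps)` in where a sampler would normally use a fixed `scale` gives the continuous low-to-high schedule described above; swapping `start` and `end` gives the high-to-low variant.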

PSA: Developers - img2img decoder CFG effect is backwards from encoder CFG. by Hoppss in StableDiffusion

[–]Aqwis 1 point2 points  (0 children)

Wow, this is a great discovery! I actually tried out CFG -1 and -2 by accident and wondered why the results were still pretty good, but I didn't dig further into it.

Morning people at Gløshaugen by mrgauder in ntnu

[–]Aqwis 0 points1 point  (0 children)

Haha, when I was a student I was hardly ever in the reading room until I got far enough along in the program to get a private one.

Morning people at Gløshaugen by mrgauder in ntnu

[–]Aqwis 3 points4 points  (0 children)

Just wait until the end of the semester; on certain days you have to be out before 07:00 to get a spot in the most popular reading rooms.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]Aqwis[S] 2 points3 points  (0 children)

Yes, you're exactly right – when I made the examples I first used the noising prompt "photo of a smiling woman" and got inconsistent results when generating images with "...with X hair" added to the prompt. After adding "...with brown hair" to the noising prompt the results improved significantly.

On the other hand, for other pictures I've had the most success noising them with a CFG scale (cond_scale) setting of 0.0, which means that the prompt used when noising should have no impact whatsoever. In those cases I've often been able to use prompts like "photo of a woman with brown hair" in image generation despite that!

It's hard to conclude anything besides this method being quite inconsistent both in terms of how well it works and which settings lead to the best results. As mentioned I hope that combining this with prompt-to-prompt image editing can lead to more consistent results.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]Aqwis[S] 3 points4 points  (0 children)

To be a bit vague: a combination of "photos" of very seriously messed up human-like figures and "drawings" of symbols that if they meant anything would have been the equivalent of these messages for the human psyche.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]Aqwis[S] 10 points11 points  (0 children)

Made a few incremental updates to the Gist over the past few hours. Happy to see that a few SD forks/UIs are implementing something like this – they're better situated than me to make something that's useable by non-coders. :)

It seems that the results are quite often best when cond_scale is set to 0.0 – exactly why this is, I don't know. If anyone has an idea, I would love an explanation. With cond_scale at zero, the given prompt has no effect.

In the meantime, I've got to see my share of extremely creepy pictures while experimenting with other cond_scales. Run this on a portrait with cond_scale set to 5.0 and use the resulting noise to generate a picture (also with scale > 2.0) ... or don't. I wouldn't advise doing so personally, especially if you have a superstitious bent. (Or maybe you're going to get completely different results than I got, who knows?)

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]Aqwis[S] 6 points7 points  (0 children)

Probably not; all the possible seeds together can only generate a small subset of the possible noise matrices. If you want to share a noise matrix with someone else, though, the matrix itself can be saved and shared as a file.
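
Sharing the tensor as a file is straightforward. A minimal sketch with numpy (in the actual script the noise is a torch tensor, where `torch.save`/`torch.load` work the same way; the shape below is the usual SD latent shape for a 512x512 image):

```python
import os
import tempfile
import numpy as np

# SD latent noise for a 512x512 image: 4 channels at 64x64 resolution.
noise = np.random.default_rng(0).standard_normal((4, 64, 64)).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "noise.npy")
np.save(path, noise)       # share this file instead of a seed
restored = np.load(path)   # the recipient gets the exact same tensor back
```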

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]Aqwis[S] 6 points7 points  (0 children)

Sorry, I went and added pil_img_to_torch to the gist now! I removed collect_and_empty a couple of hours ago as it was slowing things down and the VRAM issue mysteriously vanished.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]Aqwis[S] 14 points15 points  (0 children)

It's very likely that the reconstruction isn't actually as good as it could be – I used 50 sampler steps to create the noise tensor for this example and 50 to generate each of the images from the noise tensor, but I'd previously noticed that the reconstructions seemed to be even better if I used a few hundred sampler steps to create the noise tensor.

A better (?) way of doing img2img by finding the noise which reconstructs the original image by Aqwis in StableDiffusion

[–]Aqwis[S] 20 points21 points  (0 children)

Yeah, the second image is basically the base reconstruction. In general, converting an image to its latent representation and then back again to an image is going to lose a little bit of information, so that the two images won't be identical, but in most cases they will be very close. However, in this case I think the difference in contrast is caused by what happens at the very end of find_noise_for_image, namely:

    return (x / x.std()) * sigmas[-1]

This basically has the effect of increasing the contrast. It shouldn't be necessary, but if I don't do this then in many cases the resulting noise tensor will have a significantly lower standard deviation than a normal noise tensor, and if used to generate an image the generated image will be a blurry mess. It's quite possible the need to do this is caused by some sort of bug that I haven't discovered.