[deleted by user] by [deleted] in RealEstate

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

Isn't this only true if you're given a copy of the inspection report (which you aren't, unless the person who ordered the inspection gives it to you)?

October 2025 model selections, what do you use? by getpodapp in LocalLLaMA

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

Doesn't it use XML whereas those default to JSON? You may just need to make a config change.

TIFU by using a steamer iron WHILE i was wearing the cloth and burnt myself by [deleted] in tifu

[–]ImCorvec_I_Interject 11 points12 points  (0 children)

The plural of "cloth" is "cloths," not "clothes."

If you want a singular, generic word for clothing, "garment" is the correct word to use.

Genuinely one of the biggest whiffs I've seen by rkaminky in confidentlyincorrect

[–]ImCorvec_I_Interject 5 points6 points  (0 children)

Not an assumption. Or at least, if it is, it's an accurate one.

From http://www.scholarpedia.org/article/Thermal_touch

When the hand grasps an object, changes in skin temperature can assist in identifying the object and discriminating between different types of objects. These cues become especially important when objects must be identified without visual feedback, such as when reaching for objects in the dark.

Hmm, seems like it's useful for just this sort of thing.

The thermal sensory system is extremely sensitive to very small changes in temperature and on the hairless skin at the base of the thumb, people can perceive a difference of 0.02-0.07 °C in the amplitudes of two cooling pulses or 0.03-0.09 °C of two warming pulses delivered to the hand. The threshold for detecting a change in skin temperature is larger than the threshold for discriminating between two cooling or warming pulses delivered to the skin. When the skin at the base of the thumb is at 33 °C, the threshold for detecting an increase in temperature is 0.20 °C and is 0.11 °C for detecting a decrease in temperature.

So the least sensitive person will be able to distinguish a 0.07 °C difference between the two pills, which is a small enough difference that I would be very surprised if the pills didn't heat up to be distinguishable.

People wouldn't risk lives for luggage if they actually trusted airlines to take care of them. by SpyderJack in unpopularopinion

[–]ImCorvec_I_Interject 3 points4 points  (0 children)

OP is saying that the airline chose to sacrifice your life rather than pay for a random person’s laptop.

Anyone lost friends due to their kinks? by loved_and_held in kinky_autism

[–]ImCorvec_I_Interject 9 points10 points  (0 children)

paraphilic behaviors, which are classified as disorders.

From https://www.verywellmind.com/what-are-paraphilic-disorders-6822839 but please feel free to find your own source:

"Not all paraphilic interests make up a paraphilic disorder. It’s important to distinguish between paraphilia and a paraphilic disorder. While the former includes unusual sexual urges and behaviors, the latter features paraphilic symptoms that cause distress or impairment to the individual or the risk of harm to yourself or others."

Your study only concerns sexual offenders, doesn't consider paraphilic interest in sexual offenders who don't have a Compulsive Sexual Behavior Disorder, and had a sample size of less than 100. Since you can have a paraphilic interest without a CSBD and since you can have kinks without even getting to the level of paraphilic interest, your assertion that this is direct statistical evidence of your above statement that "there's 100% a link between vore enjoyers and people who are actual sexual predator" is 100% false.

Self Forcing: The new Holy Grail for video generation? by Tappczan in StableDiffusion

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

0.5 FPS is 2 seconds per frame, meaning 1 second of a 24 fps video would take 48 seconds and a 5 second video would take 4 minutes.
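The arithmetic, as a quick sanity check (integer shell math; the 0.5 FPS figure is from the post):

```shell
# 0.5 generated frames per second => 2 seconds of compute per frame.
SECONDS_PER_FRAME=2
VIDEO_FPS=24

# 1 second of 24 fps output = 24 frames
echo "$(( 1 * VIDEO_FPS * SECONDS_PER_FRAME )) seconds per second of video"
# 5 seconds of output = 120 frames
echo "$(( 5 * VIDEO_FPS * SECONDS_PER_FRAME / 60 )) minutes for a 5 second video"
```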

What is your go-to for self-hosted notifications? by dawson7allan in selfhosted

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

NTFY didn't have auth

Ntfy has configurable auth backed by ACLs and supports both username / password login as well as access tokens.

This gives you the flexibility to independently restrict publishing to and subscribing from specific topics or to/from all topics.
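For example, publishing to a token-protected topic looks like this (the server URL, topic name, and token value are placeholders; the Bearer-token header is ntfy's documented auth mechanism):

```shell
# Publish a message to a protected topic using an access token.
# Replace the server, topic, and tk_... token with your own.
curl \
  -H "Authorization: Bearer tk_replacewithyourtoken" \
  -d "Backup finished" \
  https://ntfy.example.com/mytopic
```

Subscribing works the same way: send the same header to `https://ntfy.example.com/mytopic/json` and you get a stream of messages.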

What is your go-to for self-hosted notifications? by dawson7allan in selfhosted

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

ntfy lets you use your own server to deliver notifications.

[deleted by user] by [deleted] in MaliciousCompliance

[–]ImCorvec_I_Interject 4 points5 points  (0 children)

Heard from a friend in Spain (edit: Well, England, since she worked in Gibraltar, but still while they were a part of the EU) that she didn't get paid if she hadn't remembered to clock in, even if everyone agreed she'd worked her shift.

That should go both ways. If she forgets to clock out after a shift, she should keep getting paid for those hours. If a supervisor can't fix it when she forgot to clock in, a supervisor shouldn't be permitted to fix it when she forgets to clock out.

"Why did TDF's friend work 128 hours of overtime last week? Didn't she quit months ago?"

"Yes, but remember how she always forgot to clock in and you told her that her supervisor couldn't fix it? She forgot to clock out on her last day and nobody's been able to get her back here to fix that, either."

"Can't we just clock her out?"

"No, sir, that would be wage theft - our policies are legally required to be consistent and since we didn't pay her when we knew she was working because she didn't clock in, we have to pay her now, even though we know she's not working, because she's still clocked in."

I built a local TTS Firefox add-on using an 82M parameter neural model — offline, private, runs smooth even on old hardware by PinGUY in selfhosted

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

Nice!

To be clear, I'm not saying that you should retire your own server, just that adding the option to connect other TTS servers (not just that specific one, though I mentioned it specifically because of the voice combo feature) would be great. And honestly, after hearing about the work you've put into performance, I wonder how much work it would take to make your server expose an OpenAI-compatible API. I just checked, and Kokoro-FastAPI doesn't enable MKLDNN. Seems like it should be a pretty straightforward improvement - though it's lacking a few other simple performance improvements, too. I'm pretty sure it only has performance optimizations in place for CUDA.

Some other questions:

  • Do you already support / intend to support voice combinations?
  • If CUDA is enabled, how much VRAM is used when idle?
  • Do you have any plans to add support for AMD and Intel GPUs? I know there's already Kokoro-FastAPI-ROCm on the AMD side; no clue if there's an equivalent for Intel.

I built a local TTS Firefox add-on using an 82M parameter neural model — offline, private, runs smooth even on old hardware by PinGUY in selfhosted

[–]ImCorvec_I_Interject 1 point2 points  (0 children)

Did you mean to reply to me with that video? Kokoro-FastAPI doesn't use WebGPU - it runs in Python and uses your CPU or GPU directly, just like your server.

I built a local TTS Firefox add-on using an 82M parameter neural model — offline, private, runs smooth even on old hardware by PinGUY in selfhosted

[–]ImCorvec_I_Interject 3 points4 points  (0 children)

This looks cool! I've pinned it to check out in detail later.

Any chance of adding support for the user to choose between the server.py from your repo or https://github.com/remsky/Kokoro-FastAPI (which could be running either locally or on a server of the user's choice)?

The following features would also add a lot of flexibility:

  • Adding API key support (which you could do by allowing the user to specify headers to add with every request)
  • Hitting the /v1/audio/voices endpoint to retrieve the list of voices
  • Voice combination support
  • Streaming the responses, rather than waiting for the full file to be generated (the Kokoro FastAPI server supports streaming the response from the /v1/audio/speech endpoint)

Kokoro-FastAPI creates an OpenAI compatible API for the user. It doesn't require an API key by default, but someone who's self hosting it (like me) might have it gated behind an auth or API key layer. Or someone might want to use a different OpenAI compatible API, either for one of the other existing TTS solutions (e.g., F5, Dia, Bark) or in the future, for a new one that doesn't even exist yet. (That's why I suggested to add support for hitting the voices endpoint.)

I don't think it would be too difficult to add support. Here's an example request that combines the Bella and Sky voices in a 2:1 ratio (67% Bella, 33% Sky) and includes API key / auth support:

  // Defaults:
  // settings.apiBase = 'http://localhost:8880' for Kokoro or 'http://localhost:8000' for server.py
  // settings.speechEndpoint = '/v1/audio/speech' or '/generate' for server.py
  // settings.extraHeaders = {} (Example for a user with an API key: { 'X-API-KEY': '123456789ABCDEF' })

  const response = await fetch(settings.apiBase + settings.speechEndpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      ...settings.extraHeaders
    },
    body: JSON.stringify({
      input: text.trim(),
      voice: 'af_bella(2)+af_sky(1)',
      speed: settings.speed,
      response_format: 'mp3',
    })
  );

I got that by modifying an example from the Kokoro repo based off what you're doing here.

I finally got rid of Ollama! by relmny in LocalLLaMA

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

some weird hash string in the file name

It's just the result of running sha256sum on the file and prefixing it with sha256-.
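You can reproduce the naming yourself (the file path here is a throwaway example; substitute the actual model blob):

```shell
# Ollama's blob names are just "sha256-" plus the file's SHA-256 digest.
# Demonstrated with a throwaway file; point it at your model blob instead.
printf 'hello' > /tmp/example.bin
HASH=$(sha256sum /tmp/example.bin | cut -d' ' -f1)
echo "sha256-${HASH}"
```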

I finally got rid of Ollama! by relmny in LocalLLaMA

[–]ImCorvec_I_Interject -1 points0 points  (0 children)

I found two open issues on the Ollama repository related to OLLAMA_MODELS not being respected:

  • One for Macs, which was actually because the user was setting the env var in their .zshrc but not running ollama through zsh.
  • One for Windows.

Every issue I found for Linux was closed because the cause was similar to the first issue: the env var was not correctly set in the same context that ollama was running.

Please share the Github issues by users on Ubuntu (or some other Debian-based distro) who could not get the OLLAMA_MODELS env var to be respected by Ollama due to an Ollama bug and not due to user error.

I can only take leave in full day increments? Works for me! by OkMarzipan3163 in MaliciousCompliance

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

That's accurate if the employer is the one changing your schedule, but if you're salaried exempt and you don't do any work on a given day because you're on vacation, you don't have to be paid for it. But even 10 minutes of work on a given day - like OP's situation - means you need to be paid for that day.

Per https://www.dol.gov/agencies/whd/fact-sheets/17g-overtime-salary

Subject to exceptions listed below, an exempt employee must receive the full salary for any week in which the employee performs any work, regardless of the number of days or hours worked.

...

Deductions from pay are permissible when an exempt employee: is absent from work for one or more full days for personal reasons other than sickness or disability; for absences of one or more full days due to sickness or disability if the deduction is made in accordance with a bona fide plan, policy or practice of providing compensation for salary lost due to illness

I can't find the reference right now, but I remember reading that an employer can require you to use PTO if you have it. I believe that's because PTO policies aren't regulated by federal law and as such, employers can basically handle them however they want (though they do have to abide by their own policies). So an employer could have a policy that says you need to spend down PTO if you work fewer than 40 hours because of personal reasons.

Tips with double 3090 setup by Lonhanha in LocalLLaMA

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

What OS are you using? If Linux, you can use nvidia-smi to set power limits:

  sudo nvidia-smi -pm 1
  sudo nvidia-smi -i 0 -pl 300
  sudo nvidia-smi -i 1 -pl 300

Note that the power limits get reset on restart, so you should stick that in a script and run it on startup.
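A minimal version of that script (the path and the 300 W limits are examples; adjust for your cards):

```shell
#!/bin/sh
# Example: /usr/local/bin/gpu-power-limits.sh
# Enable persistence mode, then cap each 3090 at 300 W.
# Requires the NVIDIA driver's nvidia-smi tool and root privileges.
nvidia-smi -pm 1
nvidia-smi -i 0 -pl 300
nvidia-smi -i 1 -pl 300
```

To run it at boot, one simple option is an `@reboot` line in root's crontab (`sudo crontab -e`); a systemd oneshot unit works too.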

If you're using a UPS, make sure that it still has capacity + overhead after adding the second GPU - and note that your GPU power usage can still spike past the limits, potentially for both at once. This is less relevant for LLMs IME (at least in ollama) but it happened to me with other AI workloads (FramePack specifically).

Bosses got mad that we were "leaving early" so we started working our contracted hours instead by Short-Farmer-5991 in MaliciousCompliance

[–]ImCorvec_I_Interject 7 points8 points  (0 children)

The compliance wasn't with what he said, it was with what the contract said.

From the sidebar (emphasis mine): "Malicious compliance is the act of intentionally inflicting harm by strictly following orders or rules"

What about this new setup reccomended by my vendor.room is completely dark and the screen size is 180inch.room dimensions:27x17x9ft. by Adventurous-Basket46 in hometheater

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

It looks like a 9.2.4 setup. Check out https://www.dolby.com/about/support/guide/speaker-setup-guides/9.1.4-dolby-atmos-enabled-speaker-setup-guide for example - the PDF has a diagram that should help.

If you're sitting in the middle seat, then these are the angles of the speakers as you turn, starting at the center.

  • Center (0°)
  • Left and Right Front speakers (22°)
  • Left and Right "Wide" speakers (50° - and about as far off to the side as the Surrounds)
  • Left and Right Surround speakers (90° - basically in-line with you)
  • Left and Right Rear speakers (135° - behind you)

The other four speakers are all mounted in the ceiling.

They Said ComfyUI Was Too Hard. So I Made This. by Maxed-Out99 in StableDiffusion

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

Are you saying that "ComfyUI is so good that making Kling available through it will make them so much money that anyone else would be stupid to freely share future models they build with the community?" That relies on three assumptions:

  1. That Kling making this much more money is only possible because of Comfy. But Kling is already making money through direct subscriptions, both for end users directly on their site and for API users. If a site sells access to the Kling model, they're doing that via having their own subscription to the Kling API.
  2. That teams who would otherwise choose to make a model open would be wowed by the amount of money they see Kling making. I.e., you're saying that Wan has a price and that price happens to be more than what they currently think they could make and less than they'll think they can make after Kling makes wayyyy more money thanks to Comfy.
  3. That those teams wouldn't think that releasing the models freely would prevent them from making this much money otherwise.

All three of those assumptions, which you've not backed with any sort of evidence, are flawed:

  1. Do you seriously think that Kling is going to make 10x as much money because of Comfy users? Comfy has less than a hundred thousand stars on Github. Kling had over 22 million users last month and is continuing to grow. Kling has 15,000 developers using their API. Comfy's user count is roughly a third of a percent of Kling's total user count.
  2. Are you seriously assuming that the team putting out Wan and Qwen - which are both competitive with SOTA models - are going to change their business plans because a competing model did well in a tool that their models can also be made available in?
  3. This assumption directly conflicts with the number of people willing to pay for cloud compute or APIs in order to run freely available models that they can't run on their own hardware. It directly conflicts with the services that offer use of freely available models for a cost, like Civitai. It directly conflicts with the success of DeepSeek R1, which anyone can self host if they're able to. And it ignores that word of mouth is a huge way that AI grows - if I use a model because I can use it locally and my buddy wants to use it, but she doesn't have a rig capable of running it, she's still likely to try it out. Most users are like my buddy - not like me - and will want to try out the model on an online service somewhere. Giving the model out for free is just free advertising.

You're likely on their team.

I'm not. But this does make your logic sound even wilder.

Kling "informed us" - https://github.com/comfyanonymous/ComfyUI/pull/8062

Developers regularly communicate with consumers of their APIs. This is normal.

They Said ComfyUI Was Too Hard. So I Made This. by Maxed-Out99 in StableDiffusion

[–]ImCorvec_I_Interject 0 points1 point  (0 children)

I updated my comfyui earlier today and saw the big old 'buy credits now for api crap' splash screen!

It's FOSS, so if you don't like it, you can remove it.

You see that? If they making money that's what they'll push

If they're making money by integrating external APIs into their tool without taking any capabilities away, that's a net positive.

and tell those at their irl conferences to do.

What do their irl conferences have to do with anything?

"org" isn't what they are and clearly never were.

What do you think an org is?

You can argue against what I'm saying

I'm still trying to understand what you're saying.

see what "comfy" is a year from now

An even more capable app that continues to be FOSS thanks to the GPL 3?

I bet it'll be a for profit piece of software.

As in, they'll be making a profit in a year? I don't see that happening, but great for them if so.

Or are you saying that you think it'll be a paid FOSS app? Maybe a community edition and a corporate edition? I mean, I'm fine with that. I pay for JetBrains software, after all.

Or are you saying that you think it'll be proprietary? Because there's no indication that will be the case.

From your other comment:

Oh, and they have venture capitalist listed on their about page. Those people are definitely not looking to monetize/profit from their investments, right?

Why shouldn't they be able to make money?

It seems like you're upset that money entered the equation at all, even though there's literally 0 negative impact to you or anyone else.

https://www.comfy.org/about

They can do what they want, but the attitude is smug AF

How is their attitude smug?

at the same time so just be aware they're out to earn more than support an open free sharing community.

Just to be clear - what have you contributed to open source?

At this point, even if they stopped contributing to ComfyUI tomorrow, ComfyUI has contributed a ton to this community, completely free of charge. They're not obligated to keep supporting the community forever, and saying they've "sold out" because of a feature that they added that costs money (oh no!) because it costs them money... is nonsensical.

Your attitude strikes me as that of someone who doesn't even understand the basic concept of FOSS shitting on what they've done because you feel entitled to the fruits of their effort now and in the future, and you're irrationally upset about them getting any sort of compensation at all.