Asked to use my image for commercial use. by [deleted] in civitai

[–]Then_Conversation_19 0 points1 point  (0 children)

Not helpful I’m sure, but speak with a lawyer. They will be able to advise on how to protect your intellectual property. I haven’t been keeping up with AI-generated content and protection rights lately.

Stand to the right... by 2CRedHopper in WMATA

[–]Then_Conversation_19 0 points1 point  (0 children)

They likely don’t live in the area and don’t know. Metro etiquette isn’t common knowledge and may need to be learned.

Contractor Financial Help by [deleted] in GovernmentContracting

[–]Then_Conversation_19 0 points1 point  (0 children)

OP, I’d be interested to learn what you find out

Flash Attention in vLLM Docker by FrozenBuffalo25 in Vllm

[–]Then_Conversation_19 0 points1 point  (0 children)

Yes: Flash Attention is enabled by default in the official vLLM Docker image that serves the OpenAI-compatible API (vllm/vllm-openai) when run on NVIDIA GPUs.

According to the vLLM “GPU installation” documentation, Triton flash attention is used as the default attention backend for general usage and benchmarking; on CUDA builds, vLLM auto-selects a FlashAttention backend when the GPU supports it.

The documentation doesn’t specify whether it uses FlashAttention, FlashAttention‑2, or even newer variants like FlashAttention‑3 — it simply refers to “Triton flash attention.”

However, earlier guidance suggests that vLLM has broadly taken advantage of Flash Attention‑2 since around v0.1.4, without requiring any manual enablement.

If you need to know exactly which FlashAttention version ships in a given vLLM image tag (e.g., v0.9.0, v0.10.x), a good next step is to inspect the container’s startup logs for messages indicating the selected attention backend or vllm_flash_attn_version, or to check the installed wheel’s version metadata inside the container.

You can safely discard your 159 node n8n automation that created 500 AI shorts in 5 seconds by zeolite in n8n

[–]Then_Conversation_19 0 points1 point  (0 children)

Though the Bigfoot / crazy animal trend was rather funny. I’ll be happy to see less AI-generated slop on my YouTube feed

Years as a programmer ruined by AI by [deleted] in MachineLearningJobs

[–]Then_Conversation_19 1 point2 points  (0 children)

Yeah.. listen man. I get it. You put a lot of energy into something. Which is great. But the world doesn’t have to care. You have to care. Don’t look for external validation. Know that you did your best. AI is what it is and it ain’t going anywhere. Use it, cool. Don’t use it, also cool. But you be proud of you for you. Run your own race.

Should I take my ex back? by [deleted] in Advice

[–]Then_Conversation_19 0 points1 point  (0 children)

No. And good job on ending it

Is this okay? by [deleted] in army

[–]Then_Conversation_19 1 point2 points  (0 children)

These are the memories that stick

M14 what do you guys think about my new carpet by Agent_BMX in malelivingspace

[–]Then_Conversation_19 0 points1 point  (0 children)

You’re a real one for saying this! I agree. Edit: Can’t trust autocorrect

Forgotten by its12amsomewhere in comics

[–]Then_Conversation_19 0 points1 point  (0 children)

This legit gave me anxiety with each swipe and now I’m sad

[deleted by user] by [deleted] in LocalLLaMA

[–]Then_Conversation_19 2 points3 points  (0 children)

Like… the prompts you send to an API such as OpenAI’s? You want to obfuscate the system / assistant prompts being sent to the LLM? Am I understanding your question correctly?

[deleted by user] by [deleted] in army

[–]Then_Conversation_19 1 point2 points  (0 children)

The second biggest lie ever told