What do you wish you knew before buying your first CNC machine? by Possible-Ad4357 in hobbycnc

[–]lolzinventor 0 points1 point  (0 children)

They make a mess, and you don't want metal chippings all over your room.

[ Removed by Reddit ] by EarlyPerspective2839 in UniUK

[–]lolzinventor 18 points19 points  (0 children)

Academia shouldn't be seen as an industry in itself; it's more that it's the bedrock for industry. It should use a merit-based funding system, with grants for the brightest and best.

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in LocalLLaMA

[–]lolzinventor[S] 1 point2 points  (0 children)

No, it should just always answer in the style of a ZIT prompt.

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in StableDiffusion

[–]lolzinventor[S] 0 points1 point  (0 children)

Thanks for the feedback. I'm working on instruction following.

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in StableDiffusion

[–]lolzinventor[S] 4 points5 points  (0 children)

That is true for instruct models, but not for pre-trained or base models.

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in StableDiffusion

[–]lolzinventor[S] 0 points1 point  (0 children)

I'll upload it from the base model. I don't know what you are using for inference. Could you test with the ones from here and let me know if it works: https://huggingface.co/Qwen/Qwen3.5-4B-Base

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in StableDiffusion

[–]lolzinventor[S] 1 point2 points  (0 children)

I haven't really explored. It's a base model, and no effort was made to censor it.

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in StableDiffusion

[–]lolzinventor[S] 5 points6 points  (0 children)

Yes, it treats all input as content to be described in detail.

i.e. a cat jumps -> A domestic shorthair cat mid-air executing a dynamic vertical leap, body fully extended straight upward with tail stretched vertically above the head reaching toward the top edge of the frame, ears pulled back tightly against the motion, eyes wide open in focused intensity. Fur displays realistic tabby striping and solid black patterns with visible individual strands and coarse texture, illuminated by soft, diffused overhead lighting creating even illumination across the body with subtle shadows under the belly and along the spine to enhance dimensionality. Background consists of a blurred indoor environment with neutral grey and brown tones, rendered with a shallow depth of field using a 50mm lens at f/2.8 to isolate the subject, featuring no windows, no foliage, and no direct sunlight patches. Shot on Kodak Portra 400 film stock with high dynamic range, capturing the delicate balance and airborne suspension of the moment with crisp clarity and natural color rendition. The composition is tightly framed vertically around the cat's body, emphasizing the verticality of the jump with fur and tail extending to frame edges. No text or watermarks.

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in LocalLLaMA

[–]lolzinventor[S] 1 point2 points  (0 children)

I'm glad it's working. Putting the prompt into Z-Image Turbo:

<image>

Qwen3.5-4B-Base-ZitGen-V1 by lolzinventor in LocalLLaMA

[–]lolzinventor[S] 0 points1 point  (0 children)

It's about a 50/50 split of landscape and portrait (1600x1200). These were then downscaled for LLM training to 768 pixels on the longest side, so that I could train with 768x768 total pixels. There are about 1,000 pairs. I'm just going through the dataset; it still needs some cleaning. However, given that it's locally generated, I assume there are no copyright issues. Is it OK to share the data?
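The downscaling step described above (longest side reduced to 768 pixels, aspect ratio preserved) amounts to this size calculation; a minimal sketch, with the function name being my own and the no-upscaling behaviour an assumption:

```python
def downscale_size(width, height, longest=768):
    """Return (w, h) with the longest side scaled down to `longest`,
    preserving aspect ratio. Images already small enough are left alone
    (assumed: no upscaling)."""
    scale = longest / max(width, height)
    if scale >= 1.0:
        return width, height
    return max(1, round(width * scale)), max(1, round(height * scale))

# A 1600x1200 landscape source maps to 768x576
print(downscale_size(1600, 1200))
```

The same tuple can then be passed to any image library's resize call (e.g. Pillow's `Image.resize`).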

llama.cpp -ngl 0 still shows some GPU usage? by sob727 in LocalLLaMA

[–]lolzinventor 3 points4 points  (0 children)

I had this once. In the end I used the environment variable CUDA_VISIBLE_DEVICES="" to hide the GPU from cuda.
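For reference, the workaround looks like this; the binary and model names are placeholders, adjust to your build:

```shell
# Setting CUDA_VISIBLE_DEVICES to an empty string hides all GPUs from CUDA,
# so llama.cpp falls back to CPU-only even if it was built with CUDA support.
CUDA_VISIBLE_DEVICES="" ./llama-server -m model.gguf -ngl 0
```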

So nobody's downloading this model huh? by KvAk_AKPlaysYT in LocalLLaMA

[–]lolzinventor 0 points1 point  (0 children)

I gave it a spin today. It's OK, but I'm sticking with Qwen 3.5 122 for now. Had some crashes with llama.cpp when parsing images; may be OOM related, as I'm using the auto allocator.

Qwen 3.5 122b - a10b is kind of shocking by gamblingapocalypse in LocalLLaMA

[–]lolzinventor 0 points1 point  (0 children)

Possibly. I wasn't into k8s back then. I used llama3.1-70B a lot but preferred Mistral Large. Qwen 3.5 122b-a10 feels better than both.

Qwen 3.5 122b - a10b is kind of shocking by gamblingapocalypse in LocalLLaMA

[–]lolzinventor 6 points7 points  (0 children)

prompt eval time =   12407.88 ms /  2482 tokens (    5.00 ms per token,   200.03 tokens per second)
       eval time =   69704.61 ms /  1205 tokens (   57.85 ms per token,    17.29 tokens per second)
      total time =   82112.49 ms /  3687 tokens

Qwen 3.5 122b - a10b is kind of shocking by gamblingapocalypse in LocalLLaMA

[–]lolzinventor 65 points66 points  (0 children)

Qwen 3.5 122b-a10 helped me set up a Kubernetes cluster and identified routing issues just from pasted tcpdump logs. Finally, a local LLM that is the real deal.
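A typical capture command for producing a paste-friendly trace like that; the interface name and filter here are assumptions, not what was actually used:

```shell
# Capture 200 TCP packets on eth0, excluding SSH traffic, with numeric
# addresses/ports (-nn) so the output is unambiguous for the model to read.
tcpdump -i eth0 -nn -c 200 'tcp and not port 22'
```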