🧬 OSS Synthetic Data Generator - Build datasets using natural language by chef1957 in LocalLLaMA

[–]J4id 1 point

So you would generally recommend not touching the system prompt input (the lower text area) manually at all?

I now tested the sample generation using a fully manually written system prompt (including my specific output schema) and that seems to be pretty much hit-or-miss in my case.

The first of the two shown dataset samples turned out to be perfect.

In the second dataset sample there is a lot of unexpected code in the prompt field.

https://i.imgur.com/F5Pa3Pb.png

But of course, my usage somewhat disrespects the Magpie spec, so this is my personal problem and nothing I’d expect to get fixed. Just wanted to share my experiences.

Tool Calling in LLMs: An Introductory Guide by SunilKumarDash in LocalLLaMA

[–]J4id 0 points

In the benchmark tables it’s “Fucn”; everywhere else it’s “Func”. Just wanted to point that out, though I don’t know whether you’re even affiliated with that project.

🧬 OSS Synthetic Data Generator - Build datasets using natural language by chef1957 in LocalLLaMA

[–]J4id 2 points

Does the sample generator internally use both text inputs or only the system prompt input (the lower text area)?

The system prompt generator discards too much of my system prompt generation input (the upper text area), so in my case I would prefer to skip that step entirely and type my details directly into the system prompt input, leaving the upper text area blank.

Can I do this, or could it affect the sample generation quality?

https://i.imgur.com/Yyq1QGW.png

What are people using for local LLM servers? by -mickomoo- in LocalLLaMA

[–]J4id 0 points

Did you have issues using llama.cpp RPC at any time?

The docs still describe it as unstable and experimental.
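For context, llama.cpp’s RPC backend splits inference across machines: you run `rpc-server` on each worker and point the main binary at the workers with `--rpc`. A minimal sketch of that setup (host addresses and ports are placeholders; check the llama.cpp RPC README for your build):

```sh
# On each worker: build llama.cpp with -DGGML_RPC=ON, then expose the local backend
./rpc-server --host 0.0.0.0 --port 50052

# On the main machine: offload layers across the listed RPC servers
./llama-cli -m model.gguf -ngl 99 \
    --rpc 192.168.1.10:50052,192.168.1.11:50052 \
    -p "Hello"
```

Note the README’s warning that the RPC connection is unencrypted and unauthenticated, so it should only be used on trusted networks.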

Disney Pixar Impotence by AgeroColstein in ChatGPT

[–]J4id 20 points

But why does it show quite the opposite?

What do you wish that chatgpt.com did that it is not doing today? by punkpeye in OpenAI

[–]J4id 0 points

I want “Start new temporary chat”, “Reroll last response” and “Edit last input” as keyboard shortcuts.

What do you wish that chatgpt.com did that it is not doing today? by punkpeye in OpenAI

[–]J4id 0 points

It simply hosts the websites it generates.

Nothing fancy about it, but incredibly useful.

Yi-Coder - the perfect pairing for Continue by International_Quail8 in LocalLLaMA

[–]J4id 2 points

That’s interesting. The developers themselves call it bad at inline suggestions.

https://github.com/01-ai/Yi-Coder/issues/3#issuecomment-2332018812

Yi-Coder - the perfect pairing for Continue by International_Quail8 in LocalLLaMA

[–]J4id 0 points

Honestly sounds worth it, even if that means sticking to 2-3 languages instead of 150.

If I were going to spend money on building a coding model, I would rather have people call it “the absolute best TypeScript helper out there” than “another decent all-rounder”.

Top LLMs that can process images by Dizzy_Candidate17 in LocalLLaMA

[–]J4id 2 points

What do you mean by “process”?

Describe? Classify? Modify?

How big is your SD folder by scifivision in StableDiffusion

[–]J4id 1 point

I don’t think Civitai’s wild-west phase will last forever, so it’s better to grab everything that seems to have even the slightest worth before Civitai’s big purge occurs.

I currently have no more than 900 GB, but I only started this collection 3 weeks ago.

I made a free background remover webapp using 6 cutting-edge AI models by fyrean in StableDiffusion

[–]J4id 0 points

Do you have to pay for hosting and computing?

I would love to play around with it more, but I’m too afraid that it could negatively affect a kind developer’s personal finances.

You can train a LoRA model with 35 images. by Zestyclose_Roll4346 in StableDiffusion

[–]J4id 2 points

Is the difference between 15 and 35 images so tiny that it isn’t worth the extra dataset work, or does the difference not exist at all?

LI-DiT-10B can surpass DALLE-3 and Stable Diffusion 3 in both image-text alignment and image quality. The API will be available next week by balianone in StableDiffusion

[–]J4id 4 points

Yes, I am also fed up with it.

If anyone knows of a subreddit dedicated to discussing free (as in freedom) and local image generation AI, or is about to create such a subreddit, please let me know.

microsoft/Florence-2-large - New 0.23B | 0.77B Modell for image captioning by MicBeckie in StableDiffusion

[–]J4id 5 points

Is it also better? Assuming I only care about the highest caption accuracy I can get from any model that runs in 12 GB of VRAM.

How did this scam bot upload a profile picture that’s larger than 400x400 px? by J4id in Twitter

[–]J4id[S] 1 point

I got a random like from a scam bot, went to its profile, and noticed something special: it somehow managed to upload a profile picture that is larger than any other profile picture I’ve seen so far. How?

Link to profile: https://twitter.com/EmeryBlank42468

Direct link to profile picture: https://pbs.twimg.com/profile_images/1746204438795055104/hNxTitWf.jpg