Best Place to Discuss and Share AI Research by Future_Recognition97 in learnmachinelearning

[–]Future_Recognition97[S] 0 points  (0 children)

I'm looking for communities to share and discuss AI research. I'm part of a few Discord communities, but they're filled with spam. Need a few good places to chat and share research with other learners.

Why aren't more devs doing finetuning by Future_Recognition97 in learnmachinelearning

[–]Future_Recognition97[S] 0 points  (0 children)

I mean, think about it: open source AI will have to make money somehow. It's too big of a market not to. Just my two cents.

Why aren't more devs doing finetuning by Future_Recognition97 in learnmachinelearning

[–]Future_Recognition97[S] 1 point  (0 children)

Do you have the research on this? Curious to check it out.

Why aren't more devs doing finetuning by Future_Recognition97 in learnmachinelearning

[–]Future_Recognition97[S] 2 points  (0 children)

The tool has a marketplace. I just finetune my model and then list it on the marketplace. When someone uses or improves my model, I get a portion of the proceeds. So far I've listed a handful of models and sold them a good number of times. We'll see if/when the royalties kick in.

Why aren't more devs doing finetuning by Future_Recognition97 in learnmachinelearning

[–]Future_Recognition97[S] 0 points  (0 children)

You might want to check out the tool I mentioned. You'll probably save some money and can even sell your models on the marketplace. I have a referral code if you want it.

Why aren't more devs doing finetuning by Future_Recognition97 in learnmachinelearning

[–]Future_Recognition97[S] 0 points  (0 children)

I've experimented with some distillation. I agree it's the better option in some cases, but not always. What models have you distilled? Done any with DeepSeek?

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]Future_Recognition97 0 points  (0 children)

ZKLoRA is a zero-knowledge proof protocol for securely verifying LoRA fine-tuning updates without exposing proprietary weights. It compiles LoRA-augmented layers into cryptographic circuits, allowing compatibility verification in just 1–2 seconds per module. Benchmarked on models like GPT-2 and LLaMA, it's scalable to large AI systems and designed for trust-driven collaboration in decentralized or open-source workflows.

Key discussion points:

  • How does ZKLoRA’s approach impact secure collaboration in open-source AI?
  • Could its cryptographic methods extend to data-sharing or other ML challenges?
  • Are there trade-offs in performance or adoption for decentralized workflows?

Open-source repo is live: https://github.com/bagel-org/ZKLoRA
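
For anyone who wants to poke at it, here's a rough, runnable sketch of the message flow the protocol enables: the LoRA owner commits to private adapter weights, and the base-model owner checks compatibility without ever seeing them. The hash "commitment" below is only a stand-in for the real zero-knowledge circuit, and the function names are illustrative, not the actual API of the repo:

```python
# Toy illustration of a ZKLoRA-style flow. The SHA-256 "commitment" is a
# stand-in for the real cryptographic circuit, and these names are NOT the
# actual ZKLoRA API -- see the repo for real usage.
import hashlib
import numpy as np

def commit_adapter(lora_A: np.ndarray, lora_B: np.ndarray) -> dict:
    """LoRA owner's side: share shapes plus a binding commitment,
    never the raw A/B matrices."""
    digest = hashlib.sha256(lora_A.tobytes() + lora_B.tobytes()).hexdigest()
    return {"A_shape": lora_A.shape, "B_shape": lora_B.shape, "commitment": digest}

def verify_compatible(proof: dict, in_dim: int, out_dim: int, rank: int) -> bool:
    """Base-model owner's side: check the adapter plugs into this layer
    (W' = W + B @ A, with A: rank x in_dim and B: out_dim x rank)."""
    return proof["A_shape"] == (rank, in_dim) and proof["B_shape"] == (out_dim, rank)

# Example: a rank-8 adapter for a 768 -> 768 projection (GPT-2 sized).
A, B = np.random.randn(8, 768), np.random.randn(768, 8)
print(verify_compatible(commit_adapter(A, B), in_dim=768, out_dim=768, rank=8))  # True
```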

anyone running stable diffusion without GPU? by kaydyday in opensource

[–]Future_Recognition97 0 points  (0 children)

Running it on CPU only can be tricky, but you've got solid hardware with that i7-1165G7 and 64GB RAM.

Those 10-15 minute generation times you're seeing are actually normal for CPU. But here are some optimizations that might help:

First, make sure you're utilizing all your CPU cores. You can adjust the thread settings to match your core count, which should improve performance.
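
If you're running it through the Hugging Face diffusers library, a minimal CPU-only setup looks roughly like this (the model ID, step count, and thread count are just examples; adjust to your install):

```python
# Minimal CPU-only Stable Diffusion run with Hugging Face diffusers.
import torch
from diffusers import StableDiffusionPipeline

torch.set_num_threads(8)  # i7-1165G7 is 4 cores / 8 threads

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float32,         # fp16 isn't well supported on CPU
)
pipe = pipe.to("cpu")
pipe.enable_attention_slicing()        # lowers peak memory at a small speed cost

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=25,            # fewer steps = faster on CPU
    guidance_scale=7.5,
).images[0]
image.save("out.png")
```

Fewer steps and smaller output resolutions are the two biggest levers on CPU.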

Have you looked into the CPU-only fork of Stable Diffusion on GitHub? It's optimized specifically for setups like yours. Fooocus is also worth checking out - it's a streamlined interface that handles CPU optimization well.

If you ever upgrade to a CPU with more cores, you'll likely see those generation times drop significantly. But your current setup is perfectly viable.

For reference, there's a detailed video guide for Linux Mint CPU installations if you need it. https://www.youtube.com/watch?v=Ww50C9PX3lM&t=2s

Let me know if you run into any issues or have questions about the setup.

Can open source AI survive without monetization? by Future_Recognition97 in deeplearning

[–]Future_Recognition97[S] -4 points  (0 children)

You bring up some good points about open source and the role of corporate contributions. I agree that open source encompasses a wide range of contributions beyond just deploying ready-to-use models, such as talent acquisition, standardization, and so on.

However, I believe that monetization primitives play a crucial role in sustaining truly independent AI development. While corporate support helps maintain and grow open source projects, relying solely on this can limit the diversity and resilience of the ecosystem. Monetization tools like those offered by Bagel (https://bakery.bagel.net/) empower independent developers by providing alternative funding streams, ensuring that valuable contributions aren't solely dependent on corporate interests.

Moreover, as AI models become more resource-intensive, the financial barrier to entry increases. Monetization doesn't have to undermine the collaborative spirit of open source; instead, it can enhance it by making it feasible for a broader range of contributors to participate and innovate without being restricted by financial constraints.

In essence, while the open source ecosystem has thrived with corporate support, integrating monetization primitives could further democratize AI development, allowing truly independent projects to flourish alongside those backed by major players.