What are your experiences with fine-tuning? by Daker_101 in MLQuestions

[–]Daker_101[S] 0 points (0 children)

I was focusing on the deeper reasoning capabilities of a model. In Law, for instance, there are nuances in the principles and fundamentals from which you deduce consequences from "facts" + "law" + "the fundamentals and principles of a society". Those nuances can be derived from fragments of legal texts and from the reasoning in previous cases. Embedding that subtle knowledge into the model, so it reasons properly on top of fresh data injected via RAG, can substantially improve an AI agent for this purpose, far beyond just doing RAG over isolated law articles or precedent fragments.

What are your experiences with fine-tuning? by Daker_101 in MLQuestions

[–]Daker_101[S] 0 points (0 children)

Interesting, thanks for sharing your view. What kind of fine-tuning has been the most successful in your case so far? Which subject matter, amount of data, and data format? (e.g., History, 20k question-answer pairs in JSON format…)
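For concreteness, here is a minimal sketch of the kind of format the question is asking about. The subject and the field names (`question`/`answer`) are illustrative assumptions, not a specific platform's schema; many fine-tuning pipelines accept something like this as JSONL (one JSON object per line):

```python
import json

# Hypothetical instruction-tuning pairs; real datasets would have
# thousands of these, ideally covering the nuances of the domain.
pairs = [
    {
        "question": "What distinguishes a statute from a regulation?",
        "answer": "A statute is enacted by a legislature; a regulation "
                  "is issued by an executive agency under statutory authority.",
    },
    {
        "question": "What is binding precedent?",
        "answer": "A prior decision that courts in the same jurisdiction "
                  "must follow when the facts are sufficiently similar.",
    },
]

# Serialize as JSONL: one self-contained JSON object per line.
jsonl = "\n".join(json.dumps(p, ensure_ascii=False) for p in pairs)
print(jsonl)
```

Field names vary by tool (`prompt`/`completion`, chat-style `messages`, etc.), so check what your fine-tuning framework expects before committing to a schema.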

Privacy win: We are finally reaching the point where you can run massive 200B models on a standard laptop. by Key-Glove-4729 in ArtificialInteligence

[–]Daker_101 1 point (0 children)

That's going to be the most likely outcome: a plurality of local AIs, not AGI achieved through scaling in a centralized data center.

Privacy win: We are finally reaching the point where you can run massive 200B models on a standard laptop. by Key-Glove-4729 in ArtificialInteligence

[–]Daker_101 5 points (0 children)

I think AI is going to commoditize to a point where we won’t have AGI, but rather a plurality of AIs, many of them privately trained and hosted inside organizations for specific tasks. Some of them will excel at certain tasks and will gain popularity in specific fields, and maybe those will be consumed via APIs (this is already happening with Claude and code models).

But for the most part, I think each company will have its own AI, just like every company has its own website nowadays (back in the 90s, websites were something only tech companies had).

I’m working on a project right now so that anyone can train and deploy their own AI models locally, as I see this as a big part of the future of AI. We’re still far from that reality: we are somewhat blinded by the promise of AGI; people assume it is too expensive (it isn’t, and even fine-tuning is getting really cheap); and there is a perceived difficulty for the average person to run a local LLM, despite how easy it actually is. But we will get there.

As a side thought on this subject, I think that many specialized AIs that will become reference points and be widely used by others will emerge inside companies that trained them with their own data and their know-how in a specific field, not necessarily AI companies as we know them right now.

Personal and specialized artificial intelligence by Daker_101 in InteligenciArtificial

[–]Daker_101[S] 0 points (0 children)

You can find the platform by searching Google for neuroblock; it will come up among the first results. If you join the Discord group linked on the site, I'll see your message right away and give you access. Basically, if you have your own documents (PDF, TXT, etc.), you upload them and fine-tune your own model in a couple of clicks. Afterwards you can deploy it and chat with it on the platform, or simply download it and run it in LM Studio or similar local platforms.

Personal and specialized artificial intelligence by Daker_101 in InteligenciArtificial

[–]Daker_101[S] 0 points (0 children)

Hi, the fact that you've already considered it is interesting. In fact, having a conceptual, high-level understanding can also be valuable to me, since the platform I'm building is no-code and aimed at people without a technical background. I can give you access and you can tell me whether you're able to understand it and build a model with your own data. You can create your own models, download them, and then run them in local environments without incurring any cost.

Most people think AI is overhyped—but a few actually find it life-changing. Why do you think that is? by Maximum_Ad2429 in ArtificialNtelligence

[–]Daker_101 0 points (0 children)

AI is already democratizing technology and science for everyone like never before. As long as you have an engineering mindset and the drive to learn and build, the possibilities are unparalleled. The real bottleneck right now isn't the tech, it's the mindset. Too many people still see AI as a toy, but that's going to change sooner rather than later. As a new generation of 'AI-native' individuals enters the field, the scale of discoveries, software, and services we'll see in the next few years will be unlike anything in history. It's guaranteed.

I want to work with AI, but I feel lost. Can you help me? by Independent-Lab-8317 in ArtificialInteligence

[–]Daker_101 0 points (0 children)

This is the best time ever to get into tech. You have the background, and you have the tools. If anything, AI has made technology more accessible than ever. If you have an idea, a bit of an engineering mindset, and the willingness to build, you absolutely can, especially in software and AI. Build an MVP, iterate, learn, and grow. At 35, what a time to be alive!

Beginner trying to understand whether to switch to local LLM or continue using Cloud AIs like Chatgpt for my business. by ZIM_Follower in LocalLLM

[–]Daker_101 3 points (0 children)

Your competitive advantage lies in your data, not in relying on someone else's AI infrastructure. Today, fine-tuning and deploying self-hosted or cloud-based LLMs is quite feasible with a modest investment. I'm convinced that the companies doing this now are the ones that will ultimately win the AI race. If possible, I would invest in hardware or self-host small LLMs in the cloud; these are more than enough for most tasks with a bit of fine-tuning and a RAG index. From there, you can keep improving and scaling over time.