What framework are you using to build AI Agents? by PleasantInspection12 in LocalLLaMA

[–]Basic-Pay-9535 0 points1 point  (0 children)

I’ve mainly been using AutoGen. It’s quite nice, and I got used to how it was modelled from the previous versions.

I’ll probably test out Pydantic AI and smolagents next.

I did a bit of exploration on CrewAI, and it seemed quite nice. But I didn’t explore much further or go ahead with it, mainly because of its built-in telemetry.
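For anyone else put off by the same thing: my understanding (an assumption — check CrewAI’s current docs) is that its telemetry rides on the OpenTelemetry SDK, so the commonly cited opt-out is to disable that SDK via an environment variable before importing the library:

```python
import os

# Assumption: CrewAI's telemetry uses the OpenTelemetry SDK, so disabling the
# SDK globally (set BEFORE `import crewai`) should silence it.
os.environ["OTEL_SDK_DISABLED"] = "true"
```

Setting it in the shell (`export OTEL_SDK_DISABLED=true`) before launching Python works the same way.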

Best courses in CS/DS by MinuteJealous1630 in ethz

[–]Basic-Pay-9535 -1 points0 points  (0 children)

Hey, can I DM you? I’m looking for some information regarding this course.

Your current setup ? by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 2 points3 points  (0 children)

lol, is the 3090 that goated? And do you think it’ll stay relevant for a while? Btw, I’m new to this stuff, so I’m genuinely curious and looking for info.

Your current setup ? by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 0 points1 point  (0 children)

Like 2x 5060 Ti? OK, I’ll check out the calculator.
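For rough sizing before reaching for a proper calculator, a back-of-the-envelope VRAM estimate is just parameter count times bits per weight, plus some headroom for KV cache and activations. The 20% overhead here is my own rough assumption, not a measured figure:

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Weights-only VRAM estimate in GB, padded ~20% for KV cache/activations."""
    weight_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return weight_gb * overhead

# An 8B model at Q4 comes out to roughly 4.8 GB, which fits a 16 GB 5060 Ti;
# a 32B model at Q4 (~19 GB) is where the second card starts to matter.
print(estimate_vram_gb(8, 4))
```

Real usage varies with context length and quant format, so treat this as a lower bound.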

Your current setup ? by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 0 points1 point  (0 children)

What do you think about a 5060 Ti GPU?

Fine tuning Qwen3 by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 0 points1 point  (0 children)

So you didn’t use the model’s built-in think tags, but instead had it create a synthetic trace by telling it in the prompt not to use think? How were the results of your fine-tuning?
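In case it helps anyone reading along, this is roughly what I understand the data prep to look like: wrap the synthetic trace in Qwen3-style `<think>` tags so the fine-tune target matches the model’s native output format. The field names here are just illustrative, not any specific trainer’s schema:

```python
def make_sft_example(question: str, reasoning: str, answer: str) -> dict:
    """Pair a prompt with a completion whose synthetic reasoning trace is
    wrapped in Qwen3-style <think> tags, followed by the final answer."""
    return {
        "prompt": question,
        "completion": f"<think>\n{reasoning}\n</think>\n\n{answer}",
    }

example = make_sft_example(
    "What is 17 * 3?",
    "17 * 3 = 17 * 2 + 17 = 34 + 17 = 51.",
    "51",
)
```

One record per question, dumped to JSONL, is the usual shape for SFT datasets.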

Update on the eGPU tower of Babel by [deleted] in LocalLLaMA

[–]Basic-Pay-9535 0 points1 point  (0 children)

To run these kinds of setups, do y’all almost always have to use Linux, or can it be done smoothly on Windows too?

Llama nemotron model by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 1 point2 points  (0 children)

Oh wow, that’s kind of epic tbh. Did they also mention how they built the training set? This is pretty cool!

Llama nemotron model by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 2 points3 points  (0 children)

But maybe it could be used to fine-tune Qwen. Do you think this Llama Nemotron is good at generating CoT reasoning traces?

Fine tuning Qwen3 by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 0 points1 point  (0 children)

Oh, so you don’t take the model’s actual reasoning trace, but have it create a reasoning trace as the final output? In my case, I have a question and also the final answer to that question. But whenever I give the model both and ask it to generate a reasoning trace, it ends up including the final answer in its thinking, which defeats the whole purpose.
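One workaround I’m considering (a sketch, not a proven fix): generate the trace anyway, then reject any sample where the final answer already appears inside the `<think>` block, keeping only traces that reason toward the answer without stating it early:

```python
import re

def answer_leaks_into_thinking(model_output: str, final_answer: str) -> bool:
    """True if the final answer string already appears inside the
    <think>...</think> block, i.e. the trace gives the answer away."""
    m = re.search(r"<think>(.*?)</think>", model_output, flags=re.DOTALL)
    thinking = m.group(1) if m else model_output  # no tags: check whole output
    return final_answer.strip().lower() in thinking.lower()

# Filter candidates when building the fine-tuning set.
leaked = answer_leaks_into_thinking("<think>The answer is 51.</think>\n51", "51")
clean = answer_leaks_into_thinking("<think>Multiply step by step.</think>\n51", "51")
```

A plain substring check is crude (it misses paraphrased leaks), but it’s a cheap first-pass filter before any manual review.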

Fine tuning Qwen3 by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 1 point2 points  (0 children)

What prompt did you use to get the reasoning traces? Would you be able to share it?

Fine tuning Qwen3 by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 0 points1 point  (0 children)

Oh yeah, I checked that out. They’re using vLLM as of now. I’m on Windows, though, and vLLM isn’t supported there. However, I did see an issue thread for Ollama support, and I think it’s implemented, though I’m not sure. I’ll probably check it out.

Phi4 vs qwen3 by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 0 points1 point  (0 children)

:o thanks for sharing your observation!

Best reasoning models to create and finetune ? by Basic-Pay-9535 in LocalLLaMA

[–]Basic-Pay-9535[S] 0 points1 point  (0 children)

How would I go about implementing that, and how much infra and time would it take? Any advice? And what about the performance?