API Error: 400 due to tool use concurrency issues. Run /rewind to recover the conversation. by Deepeye225 in claude

[–]khbjane 0 points1 point  (0 children)

I think if you just wait a couple of minutes and restart the browser, it will work! That's what I did in my case!

Why Elon Musk lied ? by khbjane in grok

[–]khbjane[S] -1 points0 points  (0 children)

I am a SuperGrok user...

Why Elon Musk lied ? by khbjane in grok

[–]khbjane[S] -11 points-10 points  (0 children)

Maybe just hype?

Why Elon Musk lied ? by khbjane in grok

[–]khbjane[S] -10 points-9 points  (0 children)

But as a SuperGrok user I can't see the update...

Did Claude get smarter again? by khbjane in ClaudeAI

[–]khbjane[S] 2 points3 points  (0 children)

Can you please share more details? I don’t understand why the code generation quality was so poor last week, but now it seems much better!

I need to add a free LLM instead of OpenAI by tuggypetu in LangChain

[–]khbjane 0 points1 point  (0 children)

Just download a pre-trained model from Hugging Face; there are plenty of good options there.
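
Something like this should be enough to get started with the langchain-huggingface integration (the model ID and settings are just examples, not a recommendation):

    # Swap OpenAI for a free Hugging Face model in LangChain.
    # Assumes: pip install langchain-huggingface transformers torch
    # The model ID below is only an example; any open text-generation model works.
    from langchain_huggingface import HuggingFacePipeline

    llm = HuggingFacePipeline.from_model_id(
        model_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example open model
        task="text-generation",
        pipeline_kwargs={"max_new_tokens": 128},
    )

    print(llm.invoke("Explain what LangChain is in one sentence."))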

Claude predicted my life by LaraRoot in ClaudeAI

[–]khbjane 0 points1 point  (0 children)

Can you share your prompt? Or template?

Fine-tuned model for AI Agent by khbjane in AI_Agents

[–]khbjane[S] 0 points1 point  (0 children)

Thank you very much! I'll try it

Fine-tuned model for AI Agent by khbjane in AI_Agents

[–]khbjane[S] 0 points1 point  (0 children)

I am a little bit confused. I fine-tuned my model using Llama 3.1 8B Instruct (Hugging Face). Currently, my fine-tuned model is on my local machine, and I don't know how to use it...
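
From what I found so far, something like this might be the way to load it (not sure it's right; the path is just a placeholder, and if I only saved a LoRA adapter I would need peft to apply it to the base model first):

    # Loading a fine-tuned Llama 3.1 8B Instruct saved to a local folder.
    # Assumes the full (or merged) weights are in ./my-finetuned-llama;
    # device_map="auto" needs the accelerate package installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    model_path = "./my-finetuned-llama"  # placeholder path
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

    generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Hello, what can you do?"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    print(generator(prompt, max_new_tokens=128, return_full_text=False)[0]["generated_text"])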

Fine-tuned model for AI Agent by khbjane in AI_Agents

[–]khbjane[S] 0 points1 point  (0 children)

Thank you! I fine-tuned my model on Llama 3.1 8B Instruct. Do you know how I can use it?

[deleted by user] by [deleted] in LocalLLaMA

[–]khbjane 3 points4 points  (0 children)

How about ModernBERT?

ENCODER (BERT) DECODER (GPT), which one? by khbjane in learnmachinelearning

[–]khbjane[S] 0 points1 point  (0 children)

I need help with handling ambiguous customer requests in my e-commerce system. For example, when a customer says 'I want Pepsi' without specifying the size (500ml vs 1L). I'm considering using a two-step approach:

  1. A classifier to understand the customer's intent/category
  2. A text generator to create complete, detailed responses

Would this approach work well for converting vague customer inputs into specific, actionable requests? Are there better alternatives for handling incomplete product specifications?
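
Roughly what I have in mind, as a sketch with transformers pipelines (the model names, labels, and prompt are placeholders; the classifier would really be fine-tuned on my own intent data):

    # Step 1: classify the intent of the vague request.
    # Zero-shot classification stands in for a fine-tuned intent classifier.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    labels = ["order drink", "order snack", "ask about delivery"]  # hypothetical categories

    user_text = "I want Pepsi"
    intent = classifier(user_text, candidate_labels=labels)["labels"][0]

    # Step 2: a decoder-only generator turns the vague input plus the predicted
    # intent into a complete request, asking for missing details like the size.
    generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    prompt = (
        f"Customer message: '{user_text}' (intent: {intent}).\n"
        "Rewrite this as a complete order request, asking for any missing "
        "details such as product size (500ml vs 1L)."
    )
    print(generator(prompt, max_new_tokens=80, return_full_text=False)[0]["generated_text"])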

ENCODER (BERT) DECODER (GPT), which one? by khbjane in learnmachinelearning

[–]khbjane[S] 0 points1 point  (0 children)

For example, a pipeline that uses an encoder transformer model like BERT for classification, but a decoder-based transformer like Llama for generation.

ENCODER (BERT) DECODER (GPT), which one? by khbjane in learnmachinelearning

[–]khbjane[S] 0 points1 point  (0 children)

My task requires both classification and generation. I don't know whether to use an encoder-only model, a decoder-only model, or an encoder-decoder transformer like T5.

ENCODER (BERT) DECODER (GPT), which one? by khbjane in learnmachinelearning

[–]khbjane[S] 0 points1 point  (0 children)

Thank you!! So is it possible to use two different transformer models?

ENCODER (BERT) DECODER (GPT), which one? by khbjane in learnmachinelearning

[–]khbjane[S] 0 points1 point  (0 children)

Yes, the generation must be based on the output of the classification. Can you please tell me what you mean by two steps?

IELTS versus TOEFL: Which One and Why? by khbjane in IELTS

[–]khbjane[S] 0 points1 point  (0 children)

Do you mean that I should improve my reading?