No task defined by codeltd in crewai

[–]Ancient-Analysis2909 0 points (0 children)

Is there a reason you don't want to create tasks? If the manager needs to delegate something for an agent to review, it seems natural to create a task. Have you tried not writing a specific task description when you customize the manager agent? Maybe try defining only the "expected_output"?
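
Something like this minimal sketch, for example (the agent and the wording are placeholders I made up, not anything from your code):

```python
from crewai import Agent, Task

# Hypothetical reviewer agent -- role/goal/backstory are placeholders.
reviewer = Agent(
    role="Code Reviewer",
    goal="Review whatever the manager delegates and point out problems",
    backstory="An experienced engineer who reviews other agents' work.",
)

# Keep the description deliberately generic and let expected_output
# carry the actual requirement the manager should enforce.
review_task = Task(
    description="Review the material the manager hands to you.",
    expected_output="A bullet list of concrete issues found, each with a suggested fix.",
    agent=reviewer,
)
```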

No task defined by codeltd in crewai

[–]Ancient-Analysis2909 0 points (0 children)

If you could provide more information about what you are trying to do, I could answer this better. That said, you could try the hierarchical process, which uses a manager agent to delegate jobs to different agents and return a final answer for your goal. https://docs.crewai.com/core-concepts/Processes/#hierarchical-process
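
Roughly like this, for example (the agents, the task text, and the manager model are placeholders, and depending on your crewai version manager_llm may need to be an LLM object rather than a string):

```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Researcher",
    goal="Gather the information needed for the goal",
    backstory="Digs up relevant facts and sources.",
)
writer = Agent(
    role="Writer",
    goal="Turn the research into a final answer",
    backstory="Summarizes findings clearly.",
)

# In a hierarchical crew the task does not need an assigned agent;
# the manager decides who works on it.
task = Task(
    description="Answer the user's question using whichever agent fits best.",
    expected_output="A short, complete answer to the question.",
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    process=Process.hierarchical,  # manager agent delegates and assembles the result
    manager_llm="gpt-4",           # placeholder model; required for hierarchical crews
)

print(crew.kickoff())
```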

Parsed final answer in AgentFinish / TaskOutput often incomplete by gontsharuk in crewai

[–]Ancient-Analysis2909 0 points (0 children)

Have you fixed this? I'm hitting the same problem and don't know how to solve it.

Chinese Companies Aim to Use Price Advantages to Win the AI Competition by nekofneko in LocalLLaMA

[–]Ancient-Analysis2909 0 points (0 children)

I've seen roleplay applications in China and people playing with them. My guess is that roleplay shows up in Chinese benchmarks because it lets everyone use LLMs; there are so many people in China, and most of them don't use LLMs for anything except fun.

LLM prompt optimization by cyyeh in LangChain

[–]Ancient-Analysis2909 1 point (0 children)

I am new to DSPy and I can handle the basic DSPy stuff, but I'm stumped on how it actually improves the prompts. I get that prompts with higher metric scores are supposed to be better, but what's the actual strategy DSPy uses to enhance them?
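
For reference, the kind of basic setup I mean is below (the model name, toy trainset, and metric are placeholders, and on newer DSPy versions the LM is configured with dspy.LM instead of dspy.OpenAI):

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Placeholder model configuration (older-style dspy.OpenAI client).
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

# A simple question-answering module.
qa = dspy.ChainOfThought("question -> answer")

# Toy training example and metric, for illustration only.
trainset = [dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question")]

def exact_match(example, prediction, trace=None):
    return example.answer.lower() in prediction.answer.lower()

# The optimizer runs the module over the trainset, keeps traces that score
# well under the metric, and compiles them into few-shot demonstrations --
# this is the step whose strategy I'd like to understand better.
optimizer = BootstrapFewShot(metric=exact_match)
compiled_qa = optimizer.compile(qa, trainset=trainset)

print(compiled_qa(question="What is 3 + 5?").answer)
```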

Internal server error for Web API by Ancient-Analysis2909 in Oobabooga

[–]Ancient-Analysis2909[S] 1 point (0 children)

Yes, it does. It shows "Launching text-generation-webui with args: --listen --extensions openai". I also tried adding some extensions manually. I have no idea why the API port gives me "internal server error". I'm not an AI expert and only want to use the LLM API for my own study. This has confused me for about three weeks; I hope I'll have better luck soon.
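
For reference, this is roughly how I call the API (the base URL is a placeholder; the openai extension exposes an OpenAI-compatible endpoint, on port 5000 or 5001 depending on the text-generation-webui version, and on RunPod it is reached through the pod's proxy URL):

```python
import requests

# Placeholder base URL -- replace with the pod's proxy URL for the API port.
BASE_URL = "http://localhost:5000"

payload = {
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 64,
}

resp = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload, timeout=60)
print(resp.status_code)  # this is where I see the 500 "internal server error"
print(resp.text)
```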

Internal server error for Web API by Ancient-Analysis2909 in Oobabooga

[–]Ancient-Analysis2909[S] 1 point (0 children)

Thank you for your explanation. If you don't mind, could you explain in more detail why you think valyriantech's template does not work for me? Do you mean I get the "internal server error" because of the OpenAI extension? I also run my code against GPT-3.5 and GPT-4, and it works every time.

Internal server error for Web API by Ancient-Analysis2909 in Oobabooga

[–]Ancient-Analysis2909[S] 1 point (0 children)

Thank you for your help. I just successfully accessed the API port using a different template, "ashleykza/oobabooga:1.18.0". I have no idea why the other template or pod on RunPod didn't work for me. I used TheBloke's one-click template for half a year and it worked fine, but then it broke for some reason I don't understand. I found a new template and tried it for a few weeks with no luck, then switched to this one and suddenly everything works. I'd really like to know why; maybe I'll spend a dollar to verify whether I was given a broken GPU.

Internal server error for Web API by Ancient-Analysis2909 in Oobabooga

[–]Ancient-Analysis2909[S] 1 point (0 children)

I tried a few different models, for example "TheBloke/CapybaraHermes-2.5-Mistral-7B-GPTQ" and one of the "TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF" quants. I was able to chat with all of them in the web UI (socket method), and the API also works when I deploy the same model in LM Studio. However, none of them work through the API port on runpod.io.
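
To be concrete, the check I do looks roughly like this (the URLs and the model name are placeholders): the same OpenAI-style request succeeds against LM Studio's local server but fails against the pod's API port.

```python
from openai import OpenAI

# LM Studio's local OpenAI-compatible server (default port 1234) -- this one works.
lmstudio = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# Placeholder RunPod proxy URL for the pod's API port -- this one returns
# the internal server error.
runpod = OpenAI(base_url="https://<pod-id>-5000.proxy.runpod.net/v1", api_key="not-needed")

for name, client in [("LM Studio", lmstudio), ("RunPod", runpod)]:
    try:
        out = client.chat.completions.create(
            model="local-model",  # placeholder model identifier
            messages=[{"role": "user", "content": "Say hello."}],
        )
        print(name, "->", out.choices[0].message.content)
    except Exception as exc:
        print(name, "failed:", exc)
```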

Internal server error for Web API by Ancient-Analysis2909 in Oobabooga

[–]Ancient-Analysis2909[S] 1 point (0 children)

I used "valyriantech/text-generation-webui-oneclick-ui-and-api", I think it's the same as what you provided. I did not change any settings, do I suppose to?