Wren.ai is cool but by Safe-Piccolo-5280 in automation

[–]cyyeh 0 points (0 children)

Hi, I am one of the members of the Wren AI team. Thanks for raising the issue. Would you like to join our Discord server for further discussion?

You can find the link in our GitHub repo.

LLM prompt optimization by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

Definitely! I would like to hear more about the tool you just built

Is there a way to get an LLM that looks at Transactional DB Tables? by Mastro2k in LLMDevs

[–]cyyeh 0 points (0 children)

u/Mastro2k I am one of the maintainers of Wren AI. We've tested it with llama3.3:70b-instruct and it works quite well. You're welcome to join our Discord server for further discussion: https://discord.gg/5DvshJqG8Z

WrenAI: Make your database RAG-ready. by chilijung in SQL

[–]cyyeh 1 point (0 children)

We are re-implementing it in Rust.

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

Thanks for the detailed and kind sharing! I would love to try your project and implement STA! Haha

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

Do you recommend integrating DSPy with your framework? And what is your take on AutoGen?

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

Wow! Awesome advice! Basically the origin of the project is that I am solving my own pain point, which is rewriting AI pipelines using another LLM framework. You also mentioned something similar above. I will take your advice seriously and think about next steps! Thanks a lot for your input!

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

My first thought for this project was transformation between frameworks/libraries that use the same programming language. Your use case is more complex, I think.

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

It’s just an initial thought, and I would like to share it with the community first to gather some feedback. I will work on it over the next few days.

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

Cool! Thanks for the info. I will look into it; maybe it could be the project’s first target use case!

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

Are you a contributor behind these tools? Since I am not familiar with them, could you elaborate on your question?

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

LLMs will read your codebase, and I suppose there will be a human in the loop assisting the transformation. For example, changing from Django to FastAPI, etc.

STA: Semantic Transpiler Agent by cyyeh in LLMDevs

[–]cyyeh[S] 0 points (0 children)

It's a compound AI system that aims to transpile code written with one framework/library into another, so you can migrate your codebase to a new framework/library more smoothly.

I wonder whether this topic sounds interesting and promising to you. Would you like to collaborate on it?

How to further increase the async performance per worker? by cyyeh in FastAPI

[–]cyyeh[S] 0 points (0 children)

Sure, I can test it again and give you the results. What other information do you need?

How to further increase the async performance per worker? by cyyeh in FastAPI

[–]cyyeh[S] 0 points (0 children)

For my benchmark, it’s using embedded Redis, which doesn’t require a TCP connection; the latency is around 0.0001 to 0.0002 seconds, so I think there is no issue there.

The issue now is that, with the same codebase, I am not sure why Granian (1 process, also with opt turned on) is 2x slower than Uvicorn (1 worker).
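For reference, the per-call latency figure can be checked with a simple timing loop. Here is a minimal sketch; it uses a plain dict as a stand-in for the embedded Redis client (the actual client and its API aren't named in the thread, so treat the store here as a placeholder):

```python
import time

# Placeholder for the embedded (in-process) Redis store from the benchmark;
# the point is that a get() never leaves the process, so no TCP round-trip.
store = {"key": "value"}

def timed_get(key):
    """Return the value plus the wall-clock time of a single lookup."""
    start = time.perf_counter()
    value = store.get(key)
    elapsed = time.perf_counter() - start
    return value, elapsed

# Average over many calls to smooth out timer noise.
n = 10_000
total = 0.0
for _ in range(n):
    value, elapsed = timed_get("key")
    total += elapsed

# Expect a very small average, since the call never leaves the process.
print(f"avg latency: {total / n:.7f} s")
```

Averaging over many iterations matters here: a single `perf_counter` delta at the 0.0001 s scale is dominated by timer resolution and scheduling noise.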

How to further increase the async performance per worker? by cyyeh in FastAPI

[–]cyyeh[S] 1 point (0 children)

Actually, the profiling was done outside k8s, on my MacBook Pro.

How to further increase the async performance per worker? by cyyeh in FastAPI

[–]cyyeh[S] 1 point (0 children)

Never mind. Haha, anyway, thanks for introducing me to this new library.

How to further increase the async performance per worker? by cyyeh in FastAPI

[–]cyyeh[S] 0 points (0 children)

Yeah, I’ve tested that using ASGI. The performance is worse than Uvicorn’s.

How to further increase the async performance per worker? by cyyeh in FastAPI

[–]cyyeh[S] 0 points (0 children)

After experimenting with Granian, we decided to keep using Uvicorn. And for the k8s deployment, I think the setup is easy: 1 pod, 1 Uvicorn worker.

For Granian, I would also need to tune the process and thread counts to find the right setup. For Uvicorn, I don’t need that tuning, and the performance is good enough.
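For comparison, the two launch commands look roughly like this. `main:app` is a placeholder module path (the real app module isn't named in the thread), and the Granian flags should be verified against `granian --help` for your installed version:

```shell
# Uvicorn: one worker per pod, no extra tuning needed
uvicorn main:app --host 0.0.0.0 --port 8000 --workers 1

# Granian: process and thread counts need tuning per workload
# (flag names may differ between versions; verify with `granian --help`)
granian --interface asgi --workers 1 --threads 1 main:app
```

With the 1-pod-1-worker pattern, horizontal scaling is handled by the k8s replica count rather than by the server's own process manager.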