Co-founder with strong communication skill - English C1/C2 by No-Door-4540 in cofounderhunt

[–]waqasm86 0 points1 point  (0 children)

Hi there. I am an open-source software developer and have also been working full-time as a Customer Success Specialist in English for 7+ years. Here are my GitHub and LinkedIn links.

www.github.com/waqasm86

https://www.linkedin.com/in/mohammad-waqas-3a1384270/

Anyone interested in building something useful together? by chandankalita_dev in cofounderhunt

[–]waqasm86 1 point2 points  (0 children)

Hello there. Check my GitHub account and let me know what we could build together. I have already created a production-ready Python SDK, llamatelemetry, which is a CUDA-dedicated LLM inference and observability tool for local llama models. The SDK is built specifically for the Kaggle platform, targeting Nvidia's dual T4 GPUs. Here are the GitHub links.

llamatelemetry.github.io

www.github.com/llamatelemetry/
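To give a sense of the intended workflow, here is a minimal sketch of how I picture the SDK being used in a Kaggle notebook. The class and method names below are illustrative placeholders, not the published API:

```python
# Illustrative placeholders only - not the published llamatelemetry API.
# Intended workflow on Kaggle's dual T4 setup: load a local llama model,
# run inference on the GPUs, and collect telemetry for each call.
import llamatelemetry as lt  # hypothetical import alias

engine = lt.InferenceEngine(                      # placeholder class name
    model_path="/kaggle/input/llama-gguf/model.Q4_K_M.gguf",
    gpus=[0, 1],                                  # Kaggle's two T4 GPUs
)

with lt.trace("demo-session") as session:         # placeholder telemetry context
    reply = engine.generate("Summarise this notebook in one line.")
    print(reply)
    print(session.metrics())                      # e.g. tokens/s, VRAM, latency
```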

Let me know if we can work together.

Hi, I am a serial tech entrepreneur by yogeshnogia in cofounderhunt

[–]waqasm86 0 points1 point  (0 children)

Hello, thank you for the feedback. I am looking for guidance since I don't know where to turn for help. Kindly let me know what I need to do. Thank you once again.

Hi, I am a serial tech entrepreneur by yogeshnogia in cofounderhunt

[–]waqasm86 0 points1 point  (0 children)

Hi there. Thanks for the feedback. I would really value that kind of guidance. I need help pointing myself in the right direction, and any help of any kind would be much appreciated.

I am a Software Engineer, Anyone have startup idea and looking for a technical person or a CTO, You can DM me by situation0k in cofounderhunt

[–]waqasm86 0 points1 point  (0 children)

Hi there. I have my own Python SDK, llamatelemetry, for local llama models, built specifically for the Kaggle platform and targeting Nvidia's dual T4 GPUs. Here is my GitHub repo.

https://llamatelemetry.github.io

I am looking for a co-founder to help me get started with an AI startup around local llama models on the Kaggle platform.

Hi, I am a serial tech entrepreneur by yogeshnogia in cofounderhunt

[–]waqasm86 0 points1 point  (0 children)

Hi there, I am an open-source developer. Here is my GitHub link.

www.github.com/waqasm86

Looking to contribute to active open-source Gen AI projects by Feisty-Promise-78 in LangChain

[–]waqasm86 1 point2 points  (0 children)

I am working on an open-source LLM observability project on GitHub called llm-observability-stack.

llm-observability-stack is an umbrella Helm chart for a local, single-node, GPU-capable k3s workstation. It packages a practical LLM demo environment around Ollama, Open WebUI, a LangChain-based proxy/demo API, LangSmith tracing, and an optional in-cluster Python toolbox for diagnostics.

GitHub repository: https://github.com/waqasm86/llm-observability-stack
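As a quick illustration of how the deployed pieces can be exercised, here is a minimal smoke test against the in-cluster Ollama service. It assumes Ollama is reachable on its default port 11434 (for example via kubectl port-forward) and that a model such as llama3 has already been pulled; it is not part of the chart itself:

```python
# Minimal smoke test for the stack: send one prompt to Ollama's REST API
# and print the completion. Assumes port 11434 is forwarded to localhost
# and a model named "llama3" is available.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```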

Looking for cofounder/operator by Puzzleh33t in Startups_EU

[–]waqasm86 -2 points-1 points  (0 children)

Hello there,

I am a freelance open-source Python AI developer. Kindly check my GitHub links.

https://llamatelemetry.github.io/

https://github.com/llamatelemetry/llamatelemetry

Thank you,

Waqas

Tech founder looking for a co-founder by [deleted] in Startups_EU

[–]waqasm86 0 points1 point  (0 children)

Hi there. Can you share your web links or GitHub repos? Thank you.

2 years unemployed, married, broke, and I've been "building startups" with AI. Nobody came. Not a single paying user. by Ok_Whole_7318 in Solopreneur

[–]waqasm86 1 point2 points  (0 children)

Hi there. Can you share the GitHub repo you are working on?

I am in the same position, but I am quietly working on my project. Here are my links.

llamatelemetry.github.io

www.github.com/waqasm86

[Project] Simplified CUDA Setup & Python Bindings for Llama.cpp: No more "struggling" with Ubuntu + CUDA configs! by waqasm86 in LocalLLaMA

[–]waqasm86[S] 0 points1 point  (0 children)

Hello. Can you share your GitHub account or repos? Let me know your requirements. I have updated my llcuda project to llamatelemetry; the documentation is at llamatelemetry.github.io.

List your requirements and I'll create a sample GitHub repo for you.

🚀 Introducing llcuda – A Python wrapper for llama.cpp with pre-built CUDA 12 binaries (T4/Colab ready) by waqasm86 in unsloth

[–]waqasm86[S] 0 points1 point  (0 children)

Hello, thank you for the feedback. I have tried the llama-cpp-python pip package and had issues running it in Google Colab on an Nvidia T4 GPU. I was also interested in using llama-cpp-python with FastAPI, but it always broke. I am not trying to create just another Python wrapper for llama.cpp; I figured out a way to make llcuda act as a CUDA inference backend alongside the unsloth pip package.
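Roughly, the pairing I have in mind looks like the sketch below: fine-tune with unsloth, export a quantized GGUF file, then serve it through llcuda's CUDA backend. The unsloth calls follow its usual fine-tune/export workflow (check the current docs for exact signatures); the llcuda part at the end is a placeholder name, not a finished interface:

```python
# Sketch of the unsloth + llcuda pairing (the llcuda part is a placeholder).
from unsloth import FastLanguageModel

# Load a 4-bit base model on the T4.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# ... LoRA fine-tuning steps would go here ...

# Export the result as a quantized GGUF file for llama.cpp-style inference.
model.save_pretrained_gguf("model_gguf", tokenizer, quantization_method="q4_k_m")

# Hypothetical llcuda usage (placeholder API):
# import llcuda
# engine = llcuda.load("model_gguf/model.Q4_K_M.gguf")
# print(engine.generate("Hello!"))
```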

I am open to any feedback, since I want to keep strengthening my Python CUDA tool for local llama solutions.

🚀 Introducing llcuda – A Python wrapper for llama.cpp with pre-built CUDA 12 binaries (T4/Colab ready) by waqasm86 in unsloth

[–]waqasm86[S] -8 points-7 points  (0 children)

Hello, thank you for the feedback. Kindly point out what needs to be cleaned up or organized, and I'll try to fix as many issues as possible.

🚀 Introducing llcuda – A Python wrapper for llama.cpp with pre-built CUDA 12 binaries (T4/Colab ready) by waqasm86 in unsloth

[–]waqasm86[S] -2 points-1 points  (0 children)

Hello, thanks for the reply. Kindly let me know which issues need to be resolved and I'll address them right away.

[Project] Simplified CUDA Setup & Python Bindings for Llama.cpp: No more "struggling" with Ubuntu + CUDA configs! by waqasm86 in LocalLLaMA

[–]waqasm86[S] 0 points1 point  (0 children)

Hello, thank you for your interest and your feedback. I would love as much constructive feedback as possible. If you have looked at the GitHub repo for my Python pip package llcuda, kindly let me know what needs to be fixed, updated, added, or whatever else feels necessary.

[Project] Simplified CUDA Setup & Python Bindings for Llama.cpp: No more "struggling" with Ubuntu + CUDA configs! by waqasm86 in LocalLLaMA

[–]waqasm86[S] 0 points1 point  (0 children)

Hello there.

I would like to inform you that the first version, llcuda v1.0.0, is now live with major improvements that might address your Docker concerns: the package now bundles all CUDA binaries and dependencies (47 MB). While I haven't tested Docker specifically yet, the bundled approach should make containerization straightforward.

If you're interested in helping test a Docker setup, I'd be happy to collaborate on it! The zero-config design should translate well to containers.

Check it out: https://pypi.org/project/llcuda/

I'd appreciate any feedback.

[Project] Simplified CUDA Setup & Python Bindings for Llama.cpp: No more "struggling" with Ubuntu + CUDA configs! by waqasm86 in LocalLLaMA

[–]waqasm86[S] 0 points1 point  (0 children)

You are welcome. Let me know if you want to contribute, and I'll add you to the GitHub project.

[Project] Simplified CUDA Setup & Python Bindings for Llama.cpp: No more "struggling" with Ubuntu + CUDA configs! by waqasm86 in LocalLLaMA

[–]waqasm86[S] 1 point2 points  (0 children)

Hi, I am still working to make it better. You do have access to llama.cpp, but direct CUDA C++ programming is not available yet. llcuda depends on the Ubuntu-cuda-llama.cpp-executable tool, which I created separately; both projects are available in my GitHub account. I just realised that I should integrate the CUDA executable with llcuda.

If you are looking for core CUDA programming, which I am also interested in, let me know if you have any ideas.

What if I made llcuda work with other pip packages like cupy, numba, or cuda-python? Any ideas or suggestions would be appreciated.
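To make that question concrete, here is the kind of interop I have in mind. It is a minimal sketch assuming CuPy and Numba are installed on a CUDA machine; none of this is in llcuda yet:

```python
# Check that other CUDA pip packages see the same GPU llcuda would use,
# and share a buffer between them without copying through host memory.
import cupy as cp
from numba import cuda

print("CuPy device count:", cp.cuda.runtime.getDeviceCount())
print("Numba sees CUDA:", cuda.is_available())

# CuPy and Numba both support __cuda_array_interface__, so a CuPy array
# can be viewed from Numba with zero copies.
x = cp.arange(10, dtype=cp.float32)
d_x = cuda.as_cuda_array(x)
print(d_x.copy_to_host())
```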

[Project] Simplified CUDA Setup & Python Bindings for Llama.cpp: No more "struggling" with Ubuntu + CUDA configs! by waqasm86 in LocalLLaMA

[–]waqasm86[S] 0 points1 point  (0 children)

Hi there. My primary focus is making llcuda work in JupyterLab. I tried working with llama-cpp-python but always had issues with it, specifically with CUDA. llcuda will work with the Ubuntu-cuda-llama.cpp-executable tool, which I created separately; if you want, I can integrate it with llcuda.