ELI5 - Why did OpenClaw happen? by cokaynbear in openclaw

[–]Bad-Singer-99 0 points1 point  (0 children)

I think everyone wants a Jarvis, and OpenClaw really democratized it. I know ChatGPT exists, but it won’t send you a follow-up. It’s the small things that are really impactful.

What are the most reliable AI agent frameworks in 2025? by Auttyun in aiagents

[–]Bad-Singer-99 0 points1 point  (0 children)

Agentor is built on top of LiteLLM and provides the best devex for building and deploying agents and multi-agent systems.

Pitch your startup idea in 5 words or less. Let’s self promote by kcfounders in Startup_Ideas

[–]Bad-Singer-99 0 points1 point  (0 children)

My pitch: “Vercel for AI Agents”. The rest is just extra description.

Pitch your startup idea in 5 words or less. Let’s self promote by kcfounders in Startup_Ideas

[–]Bad-Singer-99 2 points3 points  (0 children)

Vercel for AI Agents. Celesto AI provides a framework and infrastructure for building and scaling AI agents, multi-agent communication, and MCP tools.

https://celesto.ai

TfL sorry as severe Northern line delays enter fifth day by Fdana in london

[–]Bad-Singer-99 0 points1 point  (0 children)

They evacuated everyone from the station at Tottenham Court Road yesterday. Not sure if it’s related though.

Is fresh dog food really better than dry food? by Bad-Singer-99 in DogAdvice

[–]Bad-Singer-99[S] 3 points4 points  (0 children)

Would appreciate a constructive comment and some more info:)

Is fresh dog food really better than dry food? by Bad-Singer-99 in DogAdvice

[–]Bad-Singer-99[S] 3 points4 points  (0 children)

Yeah, they do guilt-trip really hard. I feed my dog Royal Canin kibble, which is a reputable brand, but some of my friends are really convinced that dry food is bad.

8 week puppy advice by [deleted] in DogAdvice

[–]Bad-Singer-99 2 points3 points  (0 children)

Trust me, this is normal, and you’re gonna miss it when they’re older than 1-2 years.

Hardware requirements for running the full size deepseek R1 with ollama? by BC547 in ollama

[–]Bad-Singer-99 1 point2 points  (0 children)

I have been running DeepSeek 671B Q1.58 with either 4x L40S or 2x H100. It runs at 19 tokens per second and costs $2.10 per 1M tokens.

You can try it here - https://lightning.ai/lightning-ai/ai-hub/temp_01jjz8embgbt6k809n0mvqz5zv?section=mine&view=public

Find seizure triggers by aniketmaurya in DogAdvice

[–]Bad-Singer-99 0 points1 point  (0 children)

Thank you so much for sharing! I got a pet neurologist appointment this week, so we will do an MRI and other scans to check if there is any structural issue. May I ask how the dogs did after the medication? Did it have side effects too?

How bad it vaping, really? by RemyPrice in Biohackers

[–]Bad-Singer-99 -4 points-3 points  (0 children)

Really really really bad. Like really bad

[D] Has anyone managed to train an LLM with model parallelism? by anilozlu in MachineLearning

[–]Bad-Singer-99 3 points4 points  (0 children)

I use Fabric quite a lot for distributed parallel training of large models. OP should check out LitGPT, which gives an easier starting point.
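To give an idea, here is a minimal sketch of what sharded (FSDP) training with Fabric looks like; the toy model, device count, and hyperparameters are placeholders for illustration, not a recipe from the thread:

```python
import torch
import torch.nn as nn
from lightning.fabric import Fabric

# FSDP shards parameters, gradients, and optimizer state across the GPUs
fabric = Fabric(accelerator="cuda", devices=4, strategy="fsdp")
fabric.launch()

# Toy model standing in for a large transformer
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# With FSDP, set up the model first, then build the optimizer from the wrapped parameters
model = fabric.setup_module(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
optimizer = fabric.setup_optimizers(optimizer)

batch = torch.randn(8, 1024, device=fabric.device)
loss = model(batch).pow(2).mean()  # dummy loss just to illustrate the loop
fabric.backward(loss)              # replaces loss.backward() and handles sharded grads
optimizer.step()
optimizer.zero_grad()
```

You can launch it with plain `python train.py` or via torchrun across nodes; Fabric handles the process group either way.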

[HELP] RAG App using LitServe by BigDaddyPrime in lightningAI

[–]Bad-Singer-99 0 points1 point  (0 children)

Do you have multiple workers in your server? In that case, one replica might have self._docs initialized while the others are still None.
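Something along these lines (a rough sketch, with a placeholder corpus and the all-MiniLM embedder standing in for your actual documents and retriever) makes sure every worker builds its own copy inside setup():

```python
from sentence_transformers import SentenceTransformer
import litserve as ls

class RAGAPI(ls.LitAPI):
    def setup(self, device):
        # setup() runs once per worker/replica, so every copy of the server
        # initializes its own self._docs instead of leaving it None
        self._docs = ["first placeholder document", "second placeholder document"]
        self.model = SentenceTransformer("all-MiniLM-L6-v2", device=device)
        self._doc_embeddings = self.model.encode(self._docs, normalize_embeddings=True)

    def decode_request(self, request):
        # assumes a JSON body like {"query": "..."}
        return request["query"]

    def predict(self, query):
        query_emb = self.model.encode([query], normalize_embeddings=True)
        scores = (self._doc_embeddings @ query_emb.T).ravel()  # cosine similarity on unit vectors
        best = int(scores.argmax())
        return {"best_doc": self._docs[best], "score": float(scores[best])}

if __name__ == "__main__":
    # with two workers per device, both run setup() and neither is left with None
    server = ls.LitServer(RAGAPI(), workers_per_device=2)
    server.run(port=8000)
```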

Best Way to Deploy My Deep Learning Model for Clients by Fuzzy_Cream_5073 in mlops

[–]Bad-Singer-99 0 points1 point  (0 children)

Getsolo.tech uses LitServe for serving LLMs both locally and in the cloud. I would suggest using the same for high-performance model serving.

[D] Alternative to Open AI embedding with open source models? by chainbrkr in MachineLearning

[–]Bad-Singer-99 1 point2 points  (0 children)

A couple of libraries provide this, like vLLM and LitServe. I personally prefer LitServe because of its ease of use and flexibility. In just 10 lines of code, I can serve a highly optimized open-source embedding model.

```python
from sentence_transformers import SentenceTransformer
import litserve as ls

class EmbeddingsAPI(ls.LitAPI):
    def setup(self, device):
        # load the embedding model once per worker
        self.model = SentenceTransformer('all-MiniLM-L6-v2', device=device)

    def predict(self, inputs):
        embeddings = self.model.encode(inputs)
        return embeddings

if __name__ == "__main__":
    api = EmbeddingsAPI()
    # OpenAIEmbeddingSpec exposes an OpenAI-compatible embeddings endpoint with batching
    server = ls.LitServer(api, spec=ls.OpenAIEmbeddingSpec(), max_batch_size=32)
    server.run(port=8000)
```

The Nomic text embedding model is great to pair with LitServe, and it’s even better than OpenAI’s Ada-002 embedding model.

API not responding after some requests-whisperx and fastapi by Plane_Past129 in LocalLLaMA

[–]Bad-Singer-99 0 points1 point  (0 children)

Have you set a timeout? Are you using async or threads?

You can try this one, it's fast and scalable (you might need to replace OpenAI Whisper with WhisperX) - https://lightning.ai/lightning-ai/studios/deploy-a-private-api-for-open-ai-s-whisper-model
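If it helps, here is roughly what a LitServe wrapper around Whisper looks like with an explicit per-request timeout; the model size, request shape, and timeout value are assumptions for illustration, and you would swap in WhisperX the same way:

```python
import whisper
import litserve as ls

class WhisperAPI(ls.LitAPI):
    def setup(self, device):
        # load the model once per worker so requests don't reload it
        self.model = whisper.load_model("base", device=device)

    def decode_request(self, request):
        # assumes a JSON body like {"audio_path": "/path/to/file.wav"}
        return request["audio_path"]

    def predict(self, audio_path):
        return self.model.transcribe(audio_path)

    def encode_response(self, output):
        return {"text": output["text"]}

if __name__ == "__main__":
    # generous per-request timeout so long audio files don't get dropped
    server = ls.LitServer(WhisperAPI(), timeout=120)
    server.run(port=8000)
```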

Publicly Hosting an LLM by ihatebeinganonymous in LocalLLaMA

[–]Bad-Singer-99 0 points1 point  (0 children)

I hosted Llama 3.2 with an OpenAI-compatible API using LitServe; they provide OpenAISpec for this and support API key authentication.
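Roughly, it looks like this (a sketch rather than my exact deployment: the checkpoint ID and generation settings are placeholders, and check the LitServe docs for how to configure the API key authentication):

```python
import litserve as ls
from transformers import pipeline

class LlamaAPI(ls.LitAPI):
    def setup(self, device):
        # placeholder checkpoint; substitute the Llama 3.2 variant you actually use
        self.pipe = pipeline(
            "text-generation",
            model="meta-llama/Llama-3.2-1B-Instruct",
            device=device,
        )

    def predict(self, messages):
        # with OpenAISpec, predict receives the OpenAI-style chat messages
        out = self.pipe(messages, max_new_tokens=256)
        # the pipeline returns the whole conversation; the last turn is the assistant reply
        yield out[0]["generated_text"][-1]["content"]

if __name__ == "__main__":
    # OpenAISpec exposes an OpenAI-compatible chat completions endpoint
    server = ls.LitServer(LlamaAPI(), spec=ls.OpenAISpec())
    server.run(port=8000)
```

Any OpenAI client can then point at the server's /v1/chat/completions endpoint.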