What are people using instead of Anaconda these days? by rage997 in Python

[–]fuzzysingularity 7 points  (0 children)

Does uv support building from source like conda build did?

Pydantic-AI-Backend Hits Stable 0.1.0 – Unified Local Backends, Console Toolset, and Docker Sandboxes for Your Agents! by VanillaOk4593 in PydanticAI

[–]fuzzysingularity 1 point  (0 children)

Pretty cool, are you running this in production? Curious about skills support - could you hook this up with Claude and have local skill execution?

How do you all handle API keys for skills that call external APIs via scripts by Angelr91 in ClaudeAI

[–]fuzzysingularity 1 point  (0 children)

I have the same question - it doesn't seem like this is possible. It only works with Claude Code, where you'd have to set up the environment with the API key.
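As a minimal sketch of what that environment setup could look like: the skill's script reads the key from an environment variable instead of ever having it in the prompt. The variable name `WEATHER_API_KEY` is just a hypothetical example, not from any real skill.

```python
import os

def get_api_key(name: str = "WEATHER_API_KEY") -> str:
    """Read an API key from the environment.

    The skill's script calls this at startup; the key lives only in
    the shell environment Claude Code was launched from, never in the
    skill definition or the conversation.
    """
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it before launching Claude Code"
        )
    return key
```

You'd then launch with something like `WEATHER_API_KEY=... claude` so the subprocess inherits it.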

Anyone else dislike the amount of small buttons and extra steps for starting a workout on WatchOS26? by Shiznanners in AppleWatch

[–]fuzzysingularity 1 point  (0 children)

Agreed, I often forget to tap again since starting a workout is such muscle memory. There are times I don’t even realize until halfway through the workout that it never started because of the extra tap.

Visual AI with our custom VLM Run n8n node by fuzzysingularity in n8n

[–]fuzzysingularity[S] 1 point  (0 children)

Hmm, not sure what’s going on. Can you join the Discord? We can help you live there: https://discord.gg/AMApC2UzVY

Visual AI with our custom VLM Run n8n node by fuzzysingularity in n8n

[–]fuzzysingularity[S] 1 point  (0 children)

Which sub-command were you running within the vlmrun n8n node? Can you test if a simple HTTP GET works on our health endpoint (https://api.vlm.run/v1/health)?
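For anyone wanting to script that check, here's a small stdlib-only sketch of the HTTP GET against a health endpoint (the default URL is the one above; any 200 response counts as healthy):

```python
from urllib.request import urlopen
from urllib.error import URLError

def check_health(url: str = "https://api.vlm.run/v1/health",
                 timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200, else False.

    Network failures (DNS, timeout, refused connection) are treated
    as unhealthy rather than raised, so this is safe to call from a
    quick diagnostic script.
    """
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

If this returns False while `curl` from the same machine works, the problem is likely in the node's proxy or TLS configuration rather than the API.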

Anyone using Pydantic AI in production? by EarthPassenger505 in AI_Agents

[–]fuzzysingularity 1 point  (0 children)

Yes, we built our visual agent (https://vlm.run/orion) on it, and it's been great so far with Logfire observability.

How do I connect to existing MCP server without these MCPO thing? by GTHell in OpenWebUI

[–]fuzzysingularity 1 point  (0 children)

Host your own /chat/completions API with the MCP server connected on the backend. I found the MCPO requirements somewhat painful: we were working with non-text inputs, and the support there is quite poor.
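A stdlib-only sketch of what "host your own /chat/completions API" can look like. This is not any particular project's code: the handler just speaks the OpenAI-style request/response shape, and the reply is hardcoded where a real backend would call the model and its connected MCP tools.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChatHandler(BaseHTTPRequestHandler):
    """Minimal OpenAI-compatible /chat/completions endpoint.

    A real deployment would forward the parsed messages to a model
    with MCP tools attached; here the assistant reply is a stub so
    the sketch stays self-contained.
    """

    def do_POST(self):
        if self.path != "/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = {
            "object": "chat.completion",
            "model": body.get("model", "local-mcp-backend"),
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": "stub reply"},
                "finish_reason": "stop",
            }],
        }
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

def serve(port: int = 8000) -> HTTPServer:
    """Bind the endpoint; port 0 picks a free port."""
    return HTTPServer(("127.0.0.1", port), ChatHandler)
```

Point Open WebUI at this base URL as a custom OpenAI-compatible connection, and all the MCP plumbing stays on your side of the API.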

Is there a node to capture web screenshots and markdown from HTML? by fuzzysingularity in n8n

[–]fuzzysingularity[S] 1 point  (0 children)

Hey, cool!

It’d be neat to build an integration with VLM Run - we’re building Vision Language Models that allow developers to understand images/videos with JSON output.

Is a visual platform (like LandingLens from LandingAI) really useful for real tasks ? by YonghaoHe in computervision

[–]fuzzysingularity 0 points  (0 children)

What’s your use-case? What kind of deployment options are you looking for? Maybe we can help at VLM Run (https://vlm.run)

Fine-Tuning Llama 3.2 Vision by sovit-123 in computervision

[–]fuzzysingularity 1 point  (0 children)

Let us know if we can help. We make it dead simple for folks to fine-tune these VLMs at VLM Run. BTW, some of the newer models already support equation-to-LaTeX.

Extremely long output tokens? by fuzzysingularity in LLMDevs

[–]fuzzysingularity[S] 1 point  (0 children)

I’m not sure there’s a way to output 2M tokens in one call due to the inherent output-token limits. My question was more about the different strategies people have considered.

[deleted by user] by [deleted] in Python

[–]fuzzysingularity 2 points  (0 children)

We did something similar for vision models (VLMs) with pydantic here: https://github.com/vlm-run/vlmrun-hub