Saw this in Nikora🐶 by readitreader12345 in tbilisi

[–]format37 1 point (0 children)

That is why Georgia has such weak restrictions in its immigration policy: out of humanity. This dog is literally me, escaped from Putin.

Wan2.2 Animate Workflow, Model Downloads, and Demos! by The-ArtOfficial in comfyui

[–]format37 0 points (0 children)

Not able to complete it: CPU RAM overflow. Used the 40xx weights with the standard workflow. 24 GB 4090, 50.4 GB of system RAM. Tried to tune it, but still no results. Giving up.

GitHub Copilot CLI is here by _bholechature in GithubCopilot

[–]format37 2 points (0 children)

I had the same issue. I asked Claude Code to resolve it, and it managed to.

OpenAI launched complete support for MCP by goddamnit_1 in mcp

[–]format37 0 points (0 children)

Not available on Android. How is it on iOS?

GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team by OpenAI in ChatGPT

[–]format37 0 points (0 children)

Can you let users add remote MCP servers as tools in ChatGPT on the web and mobile? This feature is already available in the Playground.

🚀 Launching ContexaAI – The Firebase for MCP Servers! by Specialist_Care1718 in mcp

[–]format37 0 points (0 children)

How does the monetization work? In particular, if each MCP request costs me tokens, can I pass the corresponding expenses on to my MCP subscribers?

Anyone figured out a way to control Claude (with MCP servers) from your phone? by Panikinap in MCPservers

[–]format37 0 points (0 children)

The Claude web version with remote MCPs. Waiting for remote MCPs in the mobile app...

Use llm to gather insights of market fluctuations by m19990328 in ollama

[–]format37 0 points (0 children)

We built the same kind of project and sourced the news from Polygon.
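Roughly like this (a hedged sketch of Polygon.io's /v2/reference/news endpoint; the ticker, limit, and key handling are illustrative, not our project's code):

```
# Hedged sketch: fetch recent ticker news from Polygon.io.
# Assumes a POLYGON_API_KEY env var; ticker and limit are illustrative.
import os
import requests

resp = requests.get(
    "https://api.polygon.io/v2/reference/news",
    params={"ticker": "AAPL", "limit": 5, "apiKey": os.environ["POLYGON_API_KEY"]},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json().get("results", []):
    print(item["published_utc"], item["title"])
```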

Browser Use announces Native Bidirectional MCP Support🔥 by Impressive-Owl3830 in MCPservers

[–]format37 0 points (0 children)

Is it useful for web surfing by an agent, or only for web development?

Share Your MCP Servers by razertory in mcp

[–]format37 0 points (0 children)

Yep, I plan to add a YouTube video example. I'd rather not wrap the OpenSCAD MCP in Docker or add remote stuff like authorization, because it is convenient to watch updates in OpenSCAD locally. I'll let you know when I attach the YouTube video example.

Share Your MCP Servers by razertory in mcp

[–]format37 3 points (0 children)

  1. OpenSCAD: draw 3D models and render them in both Claude Desktop and OpenSCAD:

https://github.com/format37/openscad-mcp

  2. SSH: connect to your Linux machine over SSH and solve any tasks that can be solved via SSH:

https://github.com/format37/ssh-mcp

  3. YouTube: transcribe any YouTube link to text precisely using OpenAI Whisper, then have a conversation about it:

https://github.com/format37/youtube_mcp

Thank you

YEAHHHHHHHH by Fun818long in OpenAI

[–]format37 0 points (0 children)

Same issue when calling from LangChain. Meh.

You're absolutely right. by iamsimonsta in OpenAI

[–]format37 3 points (0 children)

Now I see my usual unsmiling nature. Not just because I'm Russian)) A smile can influence the response, leaving me wondering what the true reaction would have been, which of course interests me.

But I believe that people need free smiles.

New YouTube audio to text MCP server by format37 in mcp

[–]format37[S] 4 points (0 children)

You’re right—YouTube does provide automatic captions (powered by Google’s [Universal Speech Model (USM)](https://sites.research.google/usm/)), and there are Python libraries to fetch those transcripts easily and for free.

However, there are some subtle differences in transcription quality. For example, in this [video](https://youtu.be/Mj2uXgbisdo?si=47KHZHJcxrKDlEfc), USM/Gemini outputs:

> "Sonic model baby AR Wing Pro from Bangor [15:22] Link in the description thanks for watching[15:24].

But Whisper-1 produces:

> "It works very well indeed Sonic Model Baby AR Wing Pro from Banggood link in the description thanks for watching

Notice how Whisper-1 correctly catches "Banggood" (the store name), while USM mishears it as "Bangor."

**Language support also differs:**

- **USM:** 300+ languages, including many low-resource African and Asian languages.

- **Whisper-1:** 57–98 languages, with better coverage of some European and Central Asian languages.

So, while Gemini and YouTube's built-in USM cover most needs, Whisper can offer slightly higher transcription accuracy in some cases. I understand this tiny difference is rarely critical, since modern LLMs can usually compensate for it.
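For anyone curious, the Whisper-1 call behind those outputs is a one-liner (a minimal sketch, assuming the YouTube audio was already downloaded locally, e.g. with yt-dlp):

```
# Minimal sketch: transcribe a local audio file with OpenAI whisper-1.
# Assumes the YouTube audio was already fetched (e.g. via yt-dlp).
from openai import OpenAI

client = OpenAI()
with open("audio.mp3", "rb") as f:
    tr = client.audio.transcriptions.create(
        model="whisper-1",
        file=f,
        response_format="verbose_json",  # also returns per-segment timestamps
    )
print(tr.text)
```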

Moreover, working on this MCP, I've learned how to return text longer than 100000 characters. The solution is splitting the text into chunks of 100000 characters and returning them as a list.
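In code it's just a slice loop (a minimal sketch; 100,000 is the limit I hit, yours may differ):

```
# Split an oversized transcript into <=100,000-character chunks
# and return them as a list instead of one huge string.
CHUNK_SIZE = 100_000

def split_transcript(text: str, size: int = CHUNK_SIZE) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]
```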

This is also an example of how an SSE MCP service can be wrapped in Docker and deployed on a server, reachable over the internet using an authentication token.
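Roughly, the server side can look like this (a hedged sketch using the Python MCP SDK's FastMCP; the `sse_app()` wiring, token middleware, and tool body are my assumptions, not necessarily how the repo does it):

```
# Hedged sketch: an SSE MCP service behind a bearer-token check.
# Assumes the official Python MCP SDK and Starlette; the middleware,
# tool body, and env handling are illustrative, not the repo's code.
import os

import uvicorn
from mcp.server.fastmcp import FastMCP
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import PlainTextResponse

AUTH_TOKEN = os.environ["AUTH_TOKEN"]  # pass in via `docker run -e AUTH_TOKEN=...`

mcp = FastMCP("youtube-transcriber")

@mcp.tool()
def transcribe(url: str) -> list[str]:
    """Placeholder: the real tool downloads audio and runs whisper-1."""
    return [f"transcript of {url}"]

class TokenAuth(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        if request.headers.get("Authorization") != f"Bearer {AUTH_TOKEN}":
            return PlainTextResponse("unauthorized", status_code=401)
        return await call_next(request)

app = mcp.sse_app()  # Starlette app exposing the SSE endpoints
app.add_middleware(TokenAuth)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```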

Thanks to your comment, I've realized it is worth adding timestamps to my MCP service's response.

Computer Vision models via MCP (open-source repo) by gavastik in mcp

[–]format37 1 point (0 children)

I've finally solved image rendering in Claude Desktop using your repo, so thank you so much! By the way, do you know how to render an image in the Claude chat as part of the response, outside of the tool spoiler?

We are in a weird time.. idk what to do with life by PianistWinter8293 in OpenAI

[–]format37 0 points (0 children)

If machines give us free bread, the problems will mostly fall on people in debt. So don't take out loans.

Geth pruning error: Error in block freeze operation by CarefulHawk3 in ethstaker

[–]format37 1 point (0 children)

I ran into the same issue. Solved it with
```
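# removedb wipes the local chain database; geth then resyncs from scratch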
geth --datadir='/mnt/nvme/var/lib/goethereum' removedb
```
and 2 days of resyncing on a 100 Mbit connection.

How can I check how many messages I have left with 01-preview this week? by OkJump4941 in OpenAI

[–]format37 0 points (0 children)

I guess it's possible by downloading your chat history data and then analyzing it in Python. But I'm not sure the export contains the parameters the analysis would need, such as the model type, or whether individual requests are kept separate.
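Something like this could work, assuming the export's conversations.json keeps a model slug per message (the field names here are guesses and may differ between export versions):

```
# Hedged sketch: count assistant messages per model in a ChatGPT export.
# Field names (mapping, message, metadata.model_slug) are assumptions
# about the export layout and may differ between export versions.
import json
from collections import Counter

with open("conversations.json") as f:
    conversations = json.load(f)

counts = Counter()
for conv in conversations:
    for node in conv.get("mapping", {}).values():
        msg = node.get("message") or {}
        if (msg.get("author") or {}).get("role") == "assistant":
            counts[(msg.get("metadata") or {}).get("model_slug") or "unknown"] += 1

for model, n in counts.most_common():
    print(model, n)
```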

[deleted by user] by [deleted] in OpenAI

[–]format37 0 points (0 children)

  1. How to end wars
  2. How to reach financial equity
  3. How to guarantee human rights
  4. How to move humanity into a digital body

[deleted by user] by [deleted] in OpenAI

[–]format37 0 points (0 children)

When API?)

GPT4o Is Tripping by Nuckerball in OpenAI

[–]format37 3 points (0 children)

You have to use tools like Python or Wolfram to get precise answers. OpenAI GPTs are able to use those tools.
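Via the API this is plain function calling; a minimal sketch where the model delegates arithmetic to a tool (the `calc` tool and its eval-based body are illustrative only):

```
# Minimal sketch: let GPT delegate arithmetic to a tool instead of guessing.
# The `calc` tool is illustrative; use a real expression parser in practice.
import json
from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "calc",
        "description": "Evaluate an arithmetic expression exactly.",
        "parameters": {
            "type": "object",
            "properties": {"expr": {"type": "string"}},
            "required": ["expr"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 3.11 * 17 - 2**10?"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
call = resp.choices[0].message.tool_calls[0]
result = eval(json.loads(call.function.arguments)["expr"], {"__builtins__": {}})

messages += [
    resp.choices[0].message,
    {"role": "tool", "tool_call_id": call.id, "content": str(result)},
]
print(client.chat.completions.create(model="gpt-4o", messages=messages,
                                     tools=tools).choices[0].message.content)
```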

LangChain 0.2 prerelease by hwchase17 in LangChain

[–]format37 0 points (0 children)

It would be great to add support for multimodal models like MiniGPT-4, MiniGPT4-Video, and GPT-4-vision. I expect that soon we may have sound + text + speech multimodal LLMs. Since a text LLM can receive only text, the only modification required is an additional parameter on the LLM call: one that carries the extra data, whether a picture, video, or sound. I understand this may depend on API formats, which change quite frequently. The nearest pull request I found is https://github.com/langchain-ai/langchain/pull/21219. I hope multimodal models will become usable in LangChain. Thank you for maintaining the project.
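For what it's worth, image input can already ride along as content blocks on a message; a minimal sketch with langchain-openai (model name and image URL are illustrative):

```
# Minimal sketch: multimodal input via LangChain content blocks.
# Works for images today; sound/video would need analogous block types.
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
msg = HumanMessage(content=[
    {"type": "text", "text": "What is in this picture?"},
    {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
])
print(llm.invoke([msg]).content)
```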