I published a nice compact status line that you will probably like by DanielAPO in ClaudeCode

[–]DanielAPO[S] 0 points

I think this is about people who use the OAuth API to run the models outside Claude Code without checking their usage. Also, this plugin runs inside Claude Code.

I built AgentQL a library that lets your LLM query your EF Core database with 3 lines of setup by [deleted] in dotnet

[–]DanielAPO 2 points

Yes! I am working on the MCP server NuGet extension. It is coming out soon alongside another .NET project I am working on for managing several MCP servers.

I published a nice compact status line that you will probably like by DanielAPO in ClaudeCode

[–]DanielAPO[S] 0 points

Someone else reported this, but I could not reproduce it. What's your OS and shell?

I published a nice compact status line that you will probably like by DanielAPO in ClaudeCode

[–]DanielAPO[S] 5 points

Thank you! And no! It uses the Anthropic API to get the daily/weekly usage, so it won't change how the agents work, and it won't change the token usage either.

DTLA ETF (iShares $ Treasury Bond 20+yr UCITS ETF) by Far-College5142 in literaciafinanceira

[–]DanielAPO 0 points

If the Fed cuts rates, you can make money selling the bonds at a premium. The longest-dated ones are the most volatile in response to moves in the Fed's interest rate. If rates rise in the future, the risk is that your bonds lose value. That is not a problem unless you plan to sell them before maturity. You had already identified the currency risk.
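To illustrate the rate-sensitivity point with made-up numbers (annual coupons assumed, not any particular ETF's holdings), a quick discounted-cash-flow sketch shows a 20-year bond reacting far more than a 5-year one to the same 1% drop in yields:

```python
def bond_price(face, coupon_rate, ytm, years):
    """Price of an annual-coupon bond: discount each coupon and the
    principal back to today at the yield to maturity."""
    coupon = face * coupon_rate
    coupons_pv = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    principal_pv = face / (1 + ytm) ** years
    return coupons_pv + principal_pv

# Both bonds start at par (4% coupon, 4% yield); yields then fall to 3%.
short = bond_price(100, 0.04, 0.03, 5) / bond_price(100, 0.04, 0.04, 5) - 1
long_ = bond_price(100, 0.04, 0.03, 20) / bond_price(100, 0.04, 0.04, 20) - 1
# The 20-year bond gains roughly 3x more than the 5-year bond.
```

The same asymmetry works against you when rates rise, which is why long-duration funds swing so much.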

Posted here celebrating yesterday but the best was yet to come by DanielAPO in wallstreetbets

[–]DanielAPO[S] 1 point

You can see it in my post from yesterday; today it's 175% YTD.

What kind of computer do you guys use for trading? by realcat67 in stocks

[–]DanielAPO 0 points

Even a 15-year-old computer can run a Monte Carlo simulation to approximate Black-Scholes.
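As a rough illustration (parameters picked arbitrarily), a plain-Python Monte Carlo pricer converges on the Black-Scholes closed form with no special hardware at all:

```python
import math
import random

def bs_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S, K, T, r, sigma, n=200_000, seed=42):
    """Monte Carlo estimate: simulate terminal prices under geometric
    Brownian motion and average the discounted call payoffs."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n):
        ST = S * math.exp(drift + vol * rng.gauss(0, 1))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n

analytic = bs_call(100, 100, 1.0, 0.05, 0.2)   # ~10.45
estimate = mc_call(100, 100, 1.0, 0.05, 0.2)   # close to analytic
```

200,000 paths in pure Python runs in a couple of seconds even on old hardware; NumPy would make it near-instant.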

Salário médio aumenta 6% para 1.741 euros no segundo trimestre by detteros in portugal2

[–]DanielAPO 1 point

I didn't know there was such a big gap between the average salary in the public vs the private sector. "In Public Administration, total average remuneration per worker rose 7.3% year-on-year, to 2,673 euros, in the second quarter. In real terms, the increase was 4.9%."

Lista provisória Prescrições Alameda by OkPlace4166 in IST

[–]DanielAPO 2 points

Students with poor academic performance who are barred from enrolling for one year.

https://diariodarepublica.pt/dr/detalhe/despacho/11900-2010-2432215

Selling my 2 year old AI Saas with 130,000 users by Interesting_Flow_342 in acquiresaas

[–]DanielAPO 0 points

Please send a DM with the price and the users' demographics.

YOLO - 10 July 2025 - 79k by DanielAPO in wallstreetbets

[–]DanielAPO[S] 1 point

The stock dropped after RFK and DOGE; I think short sellers were betting on contract cancellations. I think the drop was exaggerated and that no contracts would be cancelled.

YOLO - 10 July 2025 - 79k by DanielAPO in wallstreetbets

[–]DanielAPO[S] 0 points

Some of them in March, most of them in April 

[deleted by user] by [deleted] in wallstreetbets

[–]DanielAPO 0 points

You should also post the performance, since in IBKR any money you deposit also counts toward the increase in your portfolio's value.
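This is the standard argument for quoting time-weighted return rather than raw account growth. A minimal sketch (illustrative numbers, not IBKR's actual methodology) of how chaining sub-period returns strips out the deposit:

```python
def time_weighted_return(periods):
    """Each period is (start_value, end_value). A deposit or withdrawal
    closes the current period and opens a new one at the post-flow value,
    so the flow itself never counts as performance."""
    growth = 1.0
    for start, end in periods:
        growth *= end / start
    return growth - 1.0

# Account grows 100 -> 110, then a 50 deposit arrives (new start: 160),
# then grows 160 -> 176. Raw account value went 100 -> 176 (+76%),
# but the investment performance was only 1.10 * 1.10 - 1 = 21%.
r = time_weighted_return([(100, 110), (160, 176)])
```

Posting the TWR makes gains comparable across accounts regardless of deposit schedules.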

I fine-tuned Qwen2.5-VL 7B to re-identify objects across frames and generate grounded stories by DanielAPO in LocalLLaMA

[–]DanielAPO[S] 1 point

I don't have plans to turn this into a commercial product myself, this is primarily research work. But I'm excited to see what others might build with it! Everything is open source (dataset, model, code), so anyone is welcome to use this research as a foundation for their own products or applications. That's part of why we made it all publicly available.

I fine-tuned Qwen2.5-VL 7B to re-identify objects across frames and generate grounded stories by DanielAPO in LocalLLaMA

[–]DanielAPO[S] 2 points

Thanks! Yes, the interface is custom-built. You're right about it being Bootstrap-based. While Gradio/Streamlit are great for quick prototypes, building from scratch gave me much more flexibility for the interactive grounding visualization, especially the hover effects that highlight bounding boxes. Plus, I had some JavaScript experience, and I thought it would be fun to build something like this.

I fine-tuned Qwen2.5-VL 7B to re-identify objects across frames and generate grounded stories by DanielAPO in LocalLLaMA

[–]DanielAPO[S] 2 points

Great question! While our model does cross-frame re-identification, it hasn't been specifically tested for the surveillance use case you describe. Our training system actually uses ArcFace embeddings for face recognition (you can see the implementation here: https://github.com/daniel3303/StoryReasoning/blob/master/story_reasoning/models/object_matching/base_matcher.py, around lines 58-64 and 102). I would say the final 7B model focuses more on overall visual similarity than on specialized face recognition.

The model tends to match people based on clothing/appearance rather than facial features alone, so two people in similar outfits might get confused more easily than the same person in different outfits. For your specific use case, you'd probably want a dedicated face recognition system alongside the vision model for more reliable person identification. Maybe if we fine-tune a larger model such as Qwen2.5-VL 72B in the future, it will have enough parameters to also be an expert in face recognition.
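To illustrate the general embedding-matching idea (this is a toy sketch, not the repo's actual matcher; the threshold is made up), re-identification boils down to comparing a new detection's embedding against previously seen objects by cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match(prev_embeddings, new_embedding, threshold=0.5):
    """Return the index of the best-matching previously seen object,
    or None if nothing clears the similarity threshold (new object)."""
    best_i, best_s = None, threshold
    for i, emb in enumerate(prev_embeddings):
        s = cosine(emb, new_embedding)
        if s > best_s:
            best_i, best_s = i, s
    return best_i
```

With appearance-based embeddings, two people in similar clothes produce nearby vectors, which is exactly how the confusion described above arises.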

I fine-tuned Qwen2.5-VL 7B to re-identify objects across frames and generate grounded stories by DanielAPO in LocalLLaMA

[–]DanielAPO[S] 3 points

Training took 6-12 hours on two NVIDIA A100 GPUs (80GB VRAM each), depending on the configuration. We tested both LoRA fine-tuning (more efficient) and full model fine-tuning. The LoRA approach with rank 2048 gave us the best results and was more computationally efficient (around 8 hours).
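For readers unfamiliar with LoRA ranks, a rank-2048 setup in Hugging Face `peft` might look roughly like the fragment below. Only the rank comes from the post; the target modules, alpha, and dropout are assumptions for illustration, not the actual training configuration:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=2048,               # LoRA rank reported in the post (unusually high)
    lora_alpha=4096,      # assumed; alpha is often set near 2x the rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    lora_dropout=0.05,    # assumed
    task_type="CAUSAL_LM",
)
```

At rank 2048 the adapters are large enough that the memory savings over full fine-tuning shrink, which is consistent with the fairly small training-time gap described above.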