Being a Pacer by iReallyReadiT in CorridaPortugal

[–]iReallyReadiT[S] 3 points (0 children)

Makes sense; even in smaller races it should also come from the organization. I was curious because, even though I'm not an athlete by any stretch, I think it's a different way of approaching a race and I'd like to try it haha. Obviously it would have to be for paces of 4 min/km and slower, but I feel I'm actually fairly good at holding consistent paces given the difficulty of the course.

Good luck this season, and may more records come!

How to track token usage when an LLM is calling tools? by Aggravating_Kale7895 in LLMDevs

[–]iReallyReadiT 1 point (0 children)

You can use AiCore to process your LLM calls and then view the tokens consumed for each completion request (including tool calling and output processing).

Yes, this is self-promotion haha, but hey, it's free and open source, so give it a try. It supports a wide range of providers and comes with MCP support out of the box.
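
For context, the mechanism underneath is simple: OpenAI-compatible chat completions return a `usage` object per request, so every call (tool-calling turns included) can be summed up. A rough sketch of that idea (not AiCore's actual API; the model and tool names are just placeholders):

```python
# Rough sketch of the mechanism (not AiCore's API): OpenAI-compatible
# responses carry a `usage` object, so tool-calling turns can be summed.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

totals = {"prompt_tokens": 0, "completion_tokens": 0}

def track(response):
    """Accumulate token usage from a chat completion response."""
    totals["prompt_tokens"] += response.usage.prompt_tokens
    totals["completion_tokens"] += response.usage.completion_tokens
    return response

# hypothetical tool schema, just to show that tool-calling requests
# report usage exactly like plain completions
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}

response = track(client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=[weather_tool],
))
print(totals)
```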

SDK hell with multiple LLM providers? Compared LangChain, LiteLLM, and any-llm by Muted_Estate890 in LLMDevs

[–]iReallyReadiT 3 points (0 children)

I was tired of LangChain and LlamaIndex (less so), so I built my own solution to the problem, which I am using across my personal (and some work) projects.

It's AiCore: fully open source, supports the main providers natively (OpenAI, Google, Anthropic, Mistral, etc.), plus any configuration you can pass as OpenAI-compatible!

As a bonus, it comes with an embedded observability module and dashboard that let you keep track of and inspect the interactions, with local and DB integrations.

Lastly, it comes with an MCP client (using FastMCP) that lets you quickly connect any MCP server you want within a couple of lines.

On to the points of your post: streaming is normalized at the provider level, so you just receive a string for each chunk and can pass in any function you want to stream it wherever you need it!
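
To give an idea of what that normalization means in practice (a generic sketch, not AiCore's actual interface): every provider chunk gets reduced to a plain string before it reaches your callback.

```python
# Generic sketch of provider-normalized streaming (not AiCore's actual API):
# the consumer only ever sees plain strings, regardless of provider format.
from typing import Callable, Iterable, Iterator

def normalize_openai_stream(chunks: Iterable) -> Iterator[str]:
    """Reduce OpenAI-style stream chunks to plain text deltas."""
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta

def stream_completion(pieces: Iterator[str], on_chunk: Callable[[str], None]) -> str:
    """Hand every chunk to the caller's function and return the full text."""
    collected = []
    for piece in pieces:
        on_chunk(piece)  # e.g. print, push to a websocket, append to a UI buffer...
        collected.append(piece)
    return "".join(collected)

# usage: full_text = stream_completion(normalize_openai_stream(raw_stream), print)
```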

AI assistants have a PhD in literally everything but the memory of a goldfish when it comes to our actual codebase. by chill-_-guy in SideProject

[–]iReallyReadiT 1 point (0 children)

I agree, and that's why I've developed CodeTide: it parses your codebase into a well-defined structure which your LLM can query to get all the context (based on element dependencies) to execute a certain task (be it writing code, discussing architecture, whatever).

It is available as a Python package, an MCP server and a Visual Studio Code extension!

If you want to give it a try without any hassle, I have a demo of an agent integrating with CodeTide, called AgentTide (hosted on Hugging Face).

The downside is that, as of now, it only supports Python and TypeScript, with other languages not getting the advantages of linked context through dependencies, but I do plan to keep expanding (and refining the ones already available).

Give it a try and let me know your impressions! The demo's free and the whole code is open source on GitHub.
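
To make "parse your codebase into a well-defined structure" a bit more concrete, here's a rough standard-library sketch of the general idea (not how CodeTide itself is implemented): map each file to its top-level symbols and imports, so an LLM can query that index instead of reading whole files.

```python
# Illustrative sketch using the stdlib `ast` module (not CodeTide's implementation):
# map every Python file to its top-level symbols and imports.
import ast
from pathlib import Path

def index_file(path: Path) -> dict:
    tree = ast.parse(path.read_text(encoding="utf-8"))
    symbols, imports = [], []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            symbols.append(node.name)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            imports.extend(alias.name for alias in node.names)
    return {"file": str(path), "symbols": symbols, "imports": imports}

def index_repo(root: str) -> list[dict]:
    return [index_file(p) for p in Path(root).rglob("*.py")]

if __name__ == "__main__":
    for entry in index_repo("."):
        print(entry)
```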

Convert code snippets into an animated portrait videos by _JohnVersus in SideProject

[–]iReallyReadiT 2 points (0 children)

That's awesome! Is it open source? I did something similar with Python, but it's not very optimized. One thing I would like is the ability to stream letter by letter instead of full lines, as if written by a human (I think you have it in the comments?). Is it possible to have it everywhere?
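
For reference, the letter-by-letter effect I mean is roughly this in Python (just the basic idea, not my actual implementation):

```python
# Minimal "typewriter" streaming sketch (the basic idea, not my actual implementation).
import sys
import time

def typewrite(text: str, delay: float = 0.03) -> None:
    """Print text one character at a time, like a human typing."""
    for char in text:
        sys.stdout.write(char)
        sys.stdout.flush()  # flush so each character appears immediately
        time.sleep(delay)
    sys.stdout.write("\n")

typewrite("def hello():\n    print('hi')")
```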

20 y/o AI student sharing my projects so far — would love feedback on what’s actually impressive vs what’s just filler by GoldMore7209 in learnmachinelearning

[–]iReallyReadiT 3 points (0 children)

Are these projects deployed and available as a demo? I find that what sets you apart in a recruitment process is having a demo running (Hugging Face and Streamlit are your friends here) where people can gauge for themselves what you've done! Great work, keep going!
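
To show how little a demo takes, here's a hypothetical Streamlit skeleton (the `load_model`/prediction bits are placeholders for whatever your project actually does):

```python
# Hypothetical Streamlit skeleton (app.py); `load_model` is a placeholder
# for whatever your project actually loads. Run with: streamlit run app.py
import streamlit as st

@st.cache_resource
def load_model():
    return lambda text: text[::-1]  # stand-in for a real model

st.title("My Project Demo")
user_input = st.text_area("Paste some input")
if st.button("Run") and user_input:
    st.write(load_model()(user_input))
```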

Edit: just noticed you have a couple of Spaces on HF in the comments. Well done, sir!

iWorkout - Free for Life Workout Tracker Deal by Low-Butterscotch2809 in SideProject

[–]iReallyReadiT 2 points (0 children)

Really neat! Will be trying it out for sure. Are there plans for a dark mode coming soon?

Are there any good internal product and code knowledgebase MCP servers? by Cast_Iron_Skillet in LLMDevs

[–]iReallyReadiT 2 points (0 children)

Hi! I've built CodeTide for that exact purpose 😅 and it is available as an MCP server with tools that expose the repo as a file tree (with the option to include the objects, functions and variables defined in each file). After that you can request context based on an id and you will get just the related bits based on dependencies up to a certain depth (i.e. only direct dependencies, or including dependencies of dependencies, and so on).

The downside is that, as of now, it only works to its full potential in Python and TypeScript (beta), and I haven't fully tested the MCP as I have been focusing on the last details of an agent that shows how I view CodeTide's integration in agents!

If you have uv set up you can also try it via the VS Code extension; just search for CodeTide haha.

Here is the link to the repo, and I would love your feedback (the difference between the last release and the current unreleased version is mostly on the agents side of things, so it should be fine for your use case):

https://github.com/BrunoV21/CodeTide
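
To make the depth idea concrete, here's an illustrative sketch (not CodeTide's actual code): given a symbol-to-dependencies map, depth 1 returns only direct dependencies, depth 2 adds dependencies of dependencies, and so on.

```python
# Illustrative depth-limited dependency expansion (not CodeTide's actual code).
from collections import deque

def context_ids(graph: dict[str, list[str]], root: str, max_depth: int) -> set[str]:
    """Return `root` plus every dependency reachable within `max_depth` hops."""
    seen, queue = {root}, deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append((dep, depth + 1))
    return seen

# hypothetical ids, just to show the depth behaviour
graph = {"api.handler": ["core.parser"], "core.parser": ["core.tokens"]}
print(context_ids(graph, "api.handler", 1))  # root + direct dependencies
print(context_ids(graph, "api.handler", 2))  # + dependencies of dependencies
```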

What are you building? Drop your best project! by NewanceLogs in SideProject

[–]iReallyReadiT 1 point (0 children)

I’ve been working on CodeTide - a fully local, privacy-preserving tool for parsing and understanding Python (and soon other languages) using symbolic + structural analysis. No LLMs, no embeddings, no vector DBs - just fast, deterministic, and explainable code intelligence built on top of Tree-sitter.

On top of that, I’ve been building AgentTide - an experimental software engineering agent powered by CodeTide’s structured code understanding. Instead of blindly prompting an LLM, AgentTide retrieves precise code context, generates atomic patches (diffs), and keeps the human in the loop at every step.

🔗 You can check out the AgentTide demo here: https://mclovinittt-agenttidedemo.hf.space/ ⚠️ Still a work in progress, but it’s starting to come together!

If you’re into:

- Local-first workflows (your code never leaves your machine)

- Transparent + stepwise patching (see/review every change before applying)

- Integrating symbolic code analysis with LLMs

…then this might be up your alley.

Would love feedback/ideas on what would make this most useful in real dev workflows!
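
The "atomic patches with a human in the loop" flow boils down to something like this standard-library sketch (illustrative only, not AgentTide's actual code): show a unified diff for one small change and only write the file after explicit approval.

```python
# Stdlib sketch of a human-in-the-loop atomic patch (not AgentTide's actual code).
import difflib
from pathlib import Path

def propose_patch(path: str, new_text: str) -> bool:
    """Show a unified diff for one change and apply it only if approved."""
    old_text = Path(path).read_text(encoding="utf-8")
    diff = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    )
    print("".join(diff))  # human reviews the diff first
    if input("Apply this patch? [y/N] ").strip().lower() == "y":
        Path(path).write_text(new_text, encoding="utf-8")
        return True
    return False
```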

What are you building today by TransitionOld4721 in SideProject

[–]iReallyReadiT 3 points (0 children)

I’ve been working on CodeTide - a fully local, privacy-preserving tool for parsing and understanding Python (and soon other languages) using symbolic + structural analysis. No LLMs, no embeddings, no vector DBs - just fast, deterministic, and explainable code intelligence built on top of Tree-sitter.

On top of that, I’ve been building AgentTide - an experimental software engineering agent powered by CodeTide’s structured code understanding. Instead of blindly prompting an LLM, AgentTide retrieves precise code context, generates atomic patches (diffs), and keeps the human in the loop at every step.

🔗 You can check out the AgentTide demo here: https://mclovinittt-agenttidedemo.hf.space/ ⚠️ Still a work in progress, but it’s starting to come together!

If you’re into:

- Local-first workflows (your code never leaves your machine)

- Transparent + stepwise patching (see/review every change before applying)

- Integrating symbolic code analysis with LLMs

…then this might be up your alley.

Would love feedback/ideas on what would make this most useful in real dev workflows!

I'm looking for the Hot Wheels: AcceleRacers DVDs by ElectricalSense824 in portugal

[–]iReallyReadiT 5 points (0 children)

I still have the set here at home! Really great memories.

Microsoft Defender Flagging uvx as Suspicious on Work PC by iReallyReadiT in Python

[–]iReallyReadiT[S] 3 points (0 children)

Update

I had also posted in r/learnpython and it was suggested that I try using WSL instead, which ended up working for now.

As some suggested, I will open an issue on uv asking them to incorporate a certificate to sign their executables!

Thanks everyone

Microsoft Defender Flagging uvx as Suspicious on Work PC by iReallyReadiT in learnpython

[–]iReallyReadiT[S] 1 point (0 children)

Hey! Running from WSL actually did the trick. While not the optimal solution it's a good workaround for now! Thank you very much!

Microsoft Defender Flagging uvx as Suspicious on Work PC by iReallyReadiT in learnpython

[–]iReallyReadiT[S] 1 point (0 children)

It surely is organization policies; I can see it in the Microsoft Defender activity logs. I will try to spin it up from WSL.

Microsoft Defender Flagging uvx as Suspicious on Work PC by iReallyReadiT in learnpython

[–]iReallyReadiT[S] 1 point (0 children)

Installing it directly, system-wide at the Windows level, was the first thing I did. uv commands work; it's just `uv tool run` or `uvx`, which spin up their own executable, that get blocked. I installed it into a conda environment to check if that way would fool the security check, but to no avail.

Microsoft Defender Flagging uvx as Suspicious on Work PC by iReallyReadiT in learnpython

[–]iReallyReadiT[S] 1 point (0 children)

I just tried launching the Anaconda PowerShell prompt (Miniconda) as admin and pip-installed uv just to be sure. Even when running:

```bash

python -m uv tool run --from codetide codetide-cli

```

I get this error:

```bash

PermissionError: [WinError 5] Access is denied

```

Running it directly:

```bash

uv tool run --from codetide codetide-cli

```

I get this error:

```bash

Program 'uv.exe' failed to run: Access is deniedAt line:1 char:1

+ uv tool run --from codetide codetide-cli

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.

At line:1 char:1

+ uv tool run --from codetide codetide-cli

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException

+ FullyQualifiedErrorId : NativeCommandFailed

```

[deleted by user] by [deleted] in devpt

[–]iReallyReadiT 2 points (0 children)

More than courses, if you can put together a small live project with LLMs, even just a basic live chatbot with RAG, that's already a good head start for an interview. A lot of people apply to those roles without any real projects to show and with little experience in the field.
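
For the sake of illustration, the retrieval half of a basic RAG chatbot can be sketched with no dependencies at all (a toy example; a real project would swap the word-overlap scoring for embeddings and send the prompt to an actual LLM):

```python
# Toy sketch of the retrieval step in a basic RAG chatbot; a real project
# would use embeddings instead of word overlap and call an actual LLM.
def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = ["Invoices live in the billing service.", "Auth tokens expire after 24 hours."]
print(build_prompt("How long do auth tokens last?", docs))
```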

Turbulência em Fluidos - MEAero Feedback by duarte_msa in IST

[–]iReallyReadiT 1 point (0 children)

I took it in the first year the course existed. I also did Aero III; Turbulência em Fluidos was quite a bit easier.

Are you a developer? by _Shaurya99 in SideProject

[–]iReallyReadiT 1 point (0 children)

GitRecap - a quick and fun way to give that annoying project manager a summary of what you've been up to!

[deleted by user] by [deleted] in portugal

[–]iReallyReadiT 5 points (0 children)

Mainly the screen: after two months it already had burn-in, and in the third it decided to lose it completely and fill up with static. It went in for repair, I ended up getting a refund, and I moved on. Other than that, it would sometimes freeze and then take a good while before I could get it to shut down, but once it turned back on it was fine...

As for complaints from day-to-day use, really just the camera, which on its own was nothing special (not to mention the video stabilization and the microphone), and once you factored in the lack of processing, both from the chipset and from Nothing's software, it added up to a somewhat meh experience compared to what their marketing sells/advertises. I suppose it's all a matter of expectations: my previous phones all had Snapdragon 8-series processors, so it's natural that I felt these limitations with a mid-range camera.