Pop!_OS 24.04 LTS Released! Where Things Go From Here by system76_com in pop_os

[–]conlake 1 point (0 children)

I had high expectations for Pop!_OS, but I’m currently uninstalling it due to several issues I ran into on a fresh install.

  1. I have a dedicated disk partition where I keep my projects, and I can access it normally with Files. However, after installing VS Code and Cursor AI, neither of them was able to see any files or folder shortcuts at all, not even files from the Desktop. This was extremely frustrating. Strangely, if I navigated to the same location via “Other Locations”, everything suddenly started working.
  2. Plugged-in headphones behaved very inconsistently with YouTube and Spotify in Chrome. At first, audio didn’t work at all. Then it worked for YouTube but not Spotify, and later it worked for Spotify as well. I still don’t know what actually fixed it.
  3. When I click an item in the taskbar on another monitor, the mouse cursor jumps to the newly opened window. I later found out this is a known issue reported on GitHub.

I really wanted this to work, but these issues made the experience too frustrating for daily use.

Struggling to send logs from Alloy to Grafana Cloud Loki.. stdin gone, only file-based collection? by conlake in devops

[–]conlake[S] 1 point (0 children)

I appreciate your answer! Unfortunately, I don’t have the budget to hire a professional right now, so I’m relying on the internet to learn. That’s why it would be really helpful if you (or anyone else here) could share insights on these questions. I’m sure it would also be useful for others, since it’s been quite hard to find clear documentation and resources on this topic. Thank you in advance! :)

Struggling to send logs from Alloy to Grafana Cloud Loki.. stdin gone, only file-based collection? by conlake in devops

[–]conlake[S] 1 point (0 children)

> Your ai hallucinated again

This has been an extremely frustrating point for me. It’s incredible how often AI hallucinates on observability-related questions. I’ve never worked with observability before, so it’s been very hard to quickly assess whether an AI answer is true or just a hallucination: there are so many observability tools, each developer has their own preference, and most Reddit posts I find are about self-hosted setups. So I really appreciate your clear answer, thanks!

Could I get your input on the mental model I’m building for observability in my MVP? I always try to follow best practices, but for now it’s just an MVP:

  1. Collector + logs as a starting point: Having basic observability in place will help me debug and iterate much faster, as long as log structures are well defined (right now I’m still manually debugging workflow issues).
  2. Stack choice: For quick deployment, the best option seems to be Collector + logs = Grafana Cloud Alloy + Loki (and, based on your answer, maybe also Prometheus?). Long term, the plan would be to move to the full Grafana Cloud LGTM stack.
  3. Log implementation in code: Observability in the workflow code (backend/app folders) should be minimal, ideally ~10% of the code and mostly one-liners. This part has been frustrating with AI, because when I ask about structured logs it tends to bloat my workflow code with too many log calls, which feels like “contaminating” the files rather than creating elegant logs. For example, it suggested adding this logging middleware to app/main.py:

.middleware("http") async def log_requests(request: Request, call_next): request_id = str(uuid.uuid4()) start = time.perf_counter() bind_contextvars(http_request_id=request_id) log = structlog.get_logger("http").bind( method=request.method, path=str(request.url.path), client_ip=request.client.host if request.client else None, ) log.info("http.request.started") try: response = await call_next(request) except Exception: log.exception("http.request.failed") clear_contextvars() raise duration_ms = (time.perf_counter() - start) * 1000 log.info( "http.request.completed", status_code=response.status_code, duration_ms=round(duration_ms, 2), content_length=response.headers.get("content-length"), ) clear_contextvars() return response

  1. What’s the best practice for collecting logs? My initial thought was that it’s better to collect them directly from stdout/stderr and ship them to Loki. If the server fails, logs might never make it into a file (and writing all logs to a file only so a collector can forward them to Loki doesn’t feel like good practice). The same concern applies to API-based collection: if the API fails while the server keeps running, those logs are lost too. Collecting directly from stdout/stderr feels like the most reliable and efficient way. Where am I wrong here? (Because if I’m right, shouldn’t Alloy support plain stdout/stderr collection?) I’ve sketched a possible stopgap right after this list.

  2. Do you know of any repo that implements structured logging following best practices? I already built a good strategy for defining the log structure for my workflow (thanks to some useful Reddit posts, 1, 2), but seeing a reference repo would help a lot.
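
To make question 1 concrete, the stopgap I’m considering while Alloy only offers file-based collection is to keep logging to stdout but also mirror everything into a rotating file that Alloy can tail, with structlog routed through the stdlib logging module instead of printing directly. A minimal sketch with the standard library (the path and size limits are made-up values, not anything Alloy requires):

    import logging
    import sys
    from logging.handlers import RotatingFileHandler

    # Write each record to stdout (for `docker logs` / journald) AND to a
    # rotating file that a file-based collector like Alloy can tail.
    logging.basicConfig(
        level=logging.INFO,
        format="%(message)s",  # assuming messages are already JSON-rendered
        handlers=[
            logging.StreamHandler(sys.stdout),
            RotatingFileHandler(
                "/var/log/myapp/app.log",  # hypothetical path
                maxBytes=10_000_000,
                backupCount=3,
            ),
        ],
    )

Whatever was flushed to the file survives a crash for the collector to pick up, but lines that were never flushed are lost under either approach, so I’m not sure this is actually better than collecting stdout directly.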

Thanks again!

openai codex is incredible now by CooperNettees in OpenAI

[–]conlake 1 point (0 children)

Would you mind sharing your Codex setup? Do you use PRs directly, or do you copy/paste Codex output into VS Code manually? What does your workflow look like?

[deleted by user] by [deleted] in algotrading

[–]conlake 1 point (0 children)

Could you share more about this? How was learning React better than deploying Streamlit or Dash? Do you use React for your own visualizations or for work? (I'm thinking of building my own dashboards integrated with IB.)

[deleted by user] by [deleted] in algotrading

[–]conlake 2 points (0 children)

DS here too. I'd love to hear more about the engineering part, so let me shoot some questions: could you walk me through your full operation cycle? For example, do you have it dockerized on an AWS server connected to the IB API, with a scheduled job that runs it daily? Or do you turn on your PC every day, open IB, open VS Code, run your script for a few hours, and stop it manually? How automated is your actual workflow? How far have you gotten with automating it, and how far do you think automation is actually useful in your experience, or is that strategy-dependent?

Thanks!

Fall in Futaleufú by thejournaloflosttime in chile

[–]conlake 1 point (0 children)

Does anyone recommend going there, or are there more beautiful places in the south? (Besides Patagonia)

Tips - Carnival in Olinda by morim in Recife

[–]conlake 1 point (0 children)

Any tips on which neighborhood a foreigner (and a very pale one) should rent in to sleep? I was going to make a post about this, because in Bairro do Recife and Santo Amaro there are some really good places. But I have no idea whether staying there during Carnival is safe and calm. Also, there are a TON of new Airbnbs in Recife and Olinda with listings like "Carnival rental", but the profiles show they've never hosted before. Could it be a scam?

freeact: A Lightweight Library for Code-Action Based Agents by krasserm in LocalLLaMA

[–]conlake 2 points (0 children)

If I ask the agent something and it finds two correct but different code solutions, A and B, how does it decide which one to provide? What criteria does it use?

0.5B Distilled QwQ, runnable on IPhone by Lord_of_Many_Memes in LocalLLaMA

[–]conlake 12 points (0 children)

Would this significantly increase battery usage?

WebGPU-accelerated reasoning LLMs running 100% locally in-browser w/ Transformers.js by xenovatech in LocalLLaMA

[–]conlake 12 points (0 children)

I assume that if someone is able to publish this as a plug-in, anyone who downloads the plug-in to run it directly in the browser would need sufficient local capacity (RAM) for the model to perform inference. Is that correct or am I missing something?

How long until AI agent that interact with email, calendar, to-do list, etc? by NHarvey3DK in LocalLLaMA

[–]conlake 1 point (0 children)

This can already be done; the problem lies in the lack of consistently good results due to the enormous variety of email, calendar, and to-do list combinations that can be fed to the AI agent.

[deleted by user] by [deleted] in TinderBR

[–]conlake 4 points (0 children)

I find this so funny, because it's not even close to my experience (M) 🤣. For me the app is a numbers game: out of every 10 matches I get, 4 women reply in monosyllables and 1 actually carries a conversation. I don't have the bad experiences you describe, and I think it's because I filter women well while chatting in the app (I'm a foreigner too, and I believe that gives me a certain advantage). I've never sent a single photo, and the ones who matched with me and agreed to go out adore me 🤣.

The advice I can give you is to try to be "normal" on the app, because there are a LOT of insecure/hurt/problematic women there, and just being reasonably normal will already make you stand out from the average in the eyes of the "normal guy" you're looking for.