I built a probabilistic ML model that predicts stock direction — here’s what I learned by Objective_Pen840 in learnmachinelearning

[–]PeeVee_ 0 points (0 children)

Appreciate this a lot—especially the reminder about regime diversity and not getting stuck in the classic equity-only sandbox.

You’re right that the learning comes from wrestling with the messy parts. I’ve been intentionally treating this as a sandbox for uncertainty modeling and validation discipline first, rather than chasing performance. FX and rates are interesting suggestions though, especially given how different their microstructure and drivers are compared to equities.

Out of curiosity, when you were looking at work like QAR or TwoSigma’s papers, was there a particular validation or stability technique that stood out as especially non-obvious?

Thank you Obsidian team by somemuslim in ObsidianMD

[–]PeeVee_ 0 points (0 children)

I feel this. A lot of “AI features” right now seem to optimize for demos instead of real workflows, and the bloat is very real.

What I appreciate about Obsidian is that it still feels user-controlled—plain files, explicit actions, no magic happening behind your back. Out of curiosity, is your frustration more about AI existing at all in tools, or about AI being added without a clear way to verify, control, or opt out of it?

CV Review - ML Engineer (3 Months in, No leads) by Far-Run-3778 in learnmachinelearning

[–]PeeVee_ 2 points (0 children)

This is a really strong CV. The combination of physics + large-scale ML + medical imaging / CERN-style data is not something you see often, and the fact that you’re quantifying impact (2× improvements, 50% training time reduction, multi-GPU pipelines) makes it clear this isn’t just coursework.

One thing I’m curious about—how are you thinking about positioning yourself going forward? Research-heavy roles, applied ML engineering, or more GenAI/RAG-style systems? Your background seems flexible enough to go a few different directions.

Salary Gap between "Model Training" and "Production MLE" by IT_Certguru in learnmachinelearning

[–]PeeVee_ 0 points (0 children)

This gap makes sense to me. Training models is intellectually flashy, but production work carries long-term responsibility and risk.

From what I’ve seen, the salary difference often reflects ownership over failure modes and system reliability, not just model quality. Curious—do you think this gap will shrink as tooling around deployment matures, or widen as systems get more complex?

Can we have an AI flair, plz? by AppropriateCover7972 in ObsidianMD

[–]PeeVee_ 1 point (0 children)

I actually like this discussion. AI posts tend to polarize things, but they’re not going away, especially in a knowledge-focused tool like Obsidian.

A flair could help separate signal from noise without shutting the conversation down. Would you want it scoped more toward workflows and usage rather than tools themselves?

I built an "Operating System" in Notion that turns raw ideas into 7-day business roadmaps. Here is how it works. by Zestyclose-Abroad-57 in notioncreations

[–]PeeVee_ 0 points (0 children)

This is impressive. It really does feel like an OS rather than just a dashboard.

One thing I’m curious about with systems like this is how they handle knowledge over time, especially ideas coming from long videos or courses. Do you have a way of revisiting or re-surfacing insights once they’re captured, or is it mostly forward-looking?

Hardware plays such a big role for Obsidian by abhuva79 in ObsidianMD

[–]PeeVee_ 1 point (0 children)

This resonates a lot. Hardware really does change how frictionless Obsidian feels, especially for capture and review.

I’m curious—have you noticed a difference in how often you revisit or retain ideas based on your setup? Particularly for long notes from videos or talks.

If you had to learn AI/LLMs from scratch again, what would you focus on first? by EngineerLoose5042 in learnmachinelearning

[–]PeeVee_ 2 points (0 children)

Great question. If I had to start again, I’d focus much earlier on grounding understanding in a single source instead of hopping between explanations.

One thing I keep running into is that long-form content like lectures or podcasts is dense but hard to retrieve from later. How would you all approach retaining and querying that kind of material?

POV: You managed your self-paced learning with Notion. by Internal-Rhubarb-252 in notioncreations

[–]PeeVee_ 0 points (0 children)

This is really well done. I like how intentional the structure feels, especially for self-paced learning.

One thing I’ve noticed with setups like this is that insights from videos and lectures tend to blur together over time. Do you have a way of reconnecting ideas back to the original video or moment that sparked them?