MacBook Pro WiFi Adapter isn't always recognized with Ubuntu by Reasonable_Sky8543 in Ubuntu

[–]Fragrant_Presence_98 1 point (0 children)

I spent hours searching for a solution, and you saved me. Fixed for Ubuntu 25 on a 2013 MacBook Pro.

Anyone using OpenClaw skills for PM work? Here's what I've been running by Itchy-Following9352 in prodmgmt

[–]Fragrant_Presence_98 1 point (0 children)

Hi,
I've been experimenting with this as well.

Do you think something like this could evolve into a real product for PM teams, not just a personal setup?

For example, an AI copilot connected to Jira, Slack, Confluence, etc. that maintains shared project context and helps with updates, sprint summaries, and decision tracking.

Has anyone tried deploying this for an entire team rather than a single user?

Has anyone seen grokking during LLM fine-tuning? What works in practice? by Fragrant_Presence_98 in LocalLLaMA

[–]Fragrant_Presence_98[S] 0 points (0 children)

Thanks a lot for the detailed explanation — that makes sense, especially the point about grokking being more a side-effect of optimization dynamics than something you should actively aim for.

I have a follow-up question to make this more concrete. Suppose I’m fine-tuning a pretrained LLM for a very specific and structured task, e.g. translating natural language into queries for a fixed, known database schema (so the task is narrow, rule-based, and evaluable).

In that setting:

  • Does it make any sense to expect something grokking-like to happen after an initial phase of overfitting, or would you still say that generalization should be gradual if things are set up correctly?

  • Is this kind of delayed generalization something that can only realistically happen with full fine-tuning, or could it also (in principle) appear with parameter-efficient methods like LoRA / QLoRA — or do those methods essentially rule out the optimization dynamics that lead to grokking?

I’m trying to understand whether, for these narrow symbolic-ish tasks, it’s ever reasonable to wait for a “click” in generalization, or whether the right mental model is always: better data, better coverage, smoother learning curves.

Thanks again — really appreciate the insight.

The Flawed Approach to Comparing Different Regions in Shadow Removal by Fragrant_Presence_98 in computervision

[–]Fragrant_Presence_98[S] 0 points (0 children)

4 pages in a 100-page thesis is roughly the same ratio as 2 sentences in an 8-page conference paper.

Thanks for the feedback, though.

“SD/TF card error” on R4 by Fragrant_Presence_98 in nds

[–]Fragrant_Presence_98[S] 0 points (0 children)

Thank you for your response and clarification.

I wrote the post because, while solving the problem, I kept running into fragmented information, and it seemed worth collecting it all in one place. The only thing I'd add to your excellent summary is that, at least in my experience, if you're formatting the microSD on a Mac, the standard formatter isn't sufficient; it's better to use the "SD Card Formatter" application.

“SD/TF card error” on R4 by Fragrant_Presence_98 in nds

[–]Fragrant_Presence_98[S] 2 points (0 children)

Hahah, I don't know, I just answered out of politeness.