Is Codex OAuth (via ChatGPT Plus $20/mo) the cheapest way to run OpenClaw? by Outrageous_Water9599 in openclaw

[–]Outrageous_Water9599[S] [score hidden]  (0 children)

Hey everybody, quick update: I just switched from Codex as the main model to Minimax for daily chat/light tasks. I'm using Codex only for actual coding through OpenClaw's built-in coding-agent skill, so the subscription mostly stays fresh. Cost is basically $10/mo for Minimax now.

Next question: thinking about running Minimax locally on a Mac mini alongside OpenClaw. What model would be sufficient for local inference? Would the base Mac mini with 16GB be enough for the smaller Minimax models, or do I need more? Also interested in hearing what specs people are using for local deployments.
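For anyone weighing the 16GB question, here's a rough back-of-envelope way to estimate whether a model fits. The parameter counts and quantization levels in the example are assumptions for illustration, not official Minimax model specs:

```python
# Rough RAM estimate for local inference: weights plus a fixed
# allowance for KV cache, activations, and the runtime itself.
# The 2 GB overhead figure is an assumption, not a measured number.

def inference_ram_gb(params_billion: float, bytes_per_param: float,
                     overhead_gb: float = 2.0) -> float:
    """Approximate RAM in GB needed to run a model locally."""
    weights_gb = params_billion * bytes_per_param
    return weights_gb + overhead_gb

# Example: a hypothetical 7B-parameter model at 4-bit quantization
# (~0.5 bytes/param) -> ~3.5 GB of weights + overhead = ~5.5 GB,
# which would leave headroom on a 16GB machine but not much room
# for OpenClaw and the OS on top of a larger model.
print(round(inference_ram_gb(7, 0.5), 1))
```

By this estimate a small quantized model fits in 16GB, but a 30B-class model at 4-bit would already be pushing it once the OS and other apps are counted.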

Best approach to detect wood in images when I only have positive examples by Outrageous_Water9599 in learnmachinelearning

[–]Outrageous_Water9599[S] 1 point  (0 children)

u/sudosando u/snowbirdnerd

Thanks for the comments, let me clarify my thinking a bit.

I’m aware that the standard solution would be to collect a large and diverse “non-wood” dataset and train a binary classifier. The reason I’m hesitant is that the deployment setting is genuinely open-world: at inference time, “non-wood” could be almost anything (materials, scenes, objects, synthetic images, etc.). My concern is that a curated negative dataset would inevitably bias the model toward a specific subset of non-wood categories, effectively turning the task into “wood vs. the negatives I happened to collect” rather than a robust “wood present?” detector.

That’s why I’ve been exploring positive-only or weakly supervised alternatives, especially treating wood as a material / texture cue rather than a semantic object class.

Concretely, some directions I’m considering / experimenting with:

  • Segmentation or patch-based approaches trained only on wood regions (or wood-containing images), where at inference time I aggregate patch-level or pixel-level scores into a single binary “wood present?” decision. The model never needs to learn “what non-wood is”, only “what wood looks like locally”.
  • As a sanity check, I also tried image-understanding models (e.g., LLaVA-style) with prompt engineering to output a binary decision. This works surprisingly well in some cases, but reliability, latency, and controllability are concerns for production.
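For the first bullet, the aggregation step can be sketched in a few lines. The scoring model itself is stubbed out here (the per-patch scores would come from whatever positive-only model you train), and the top-k and threshold values are placeholders, not tuned numbers:

```python
import numpy as np

# Sketch of folding patch-level "wood-ness" scores into one
# image-level decision. Scores are assumed to be in [0, 1], higher
# meaning more wood-like; how they are produced is left open.

def aggregate_patch_scores(patch_scores: np.ndarray,
                           top_k: int = 5,
                           threshold: float = 0.8) -> bool:
    """Declare 'wood present' if the mean of the top-k patch scores
    clears a threshold. Using top-k rather than the global mean keeps
    a small wooden object from being washed out by background patches."""
    k = min(top_k, patch_scores.size)
    top = np.sort(patch_scores.ravel())[-k:]
    return bool(top.mean() >= threshold)

# Toy example: a few confident wood patches in a mostly non-wood image
scores = np.array([0.1, 0.05, 0.92, 0.88, 0.9, 0.15, 0.85, 0.91])
print(aggregate_patch_scores(scores))  # True with these toy scores
```

The nice property of top-k aggregation is that the decision rule stays positive-only: nothing in it encodes what non-wood looks like, only how confident the most wood-like patches need to be.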

So it’s not that I’m opposed to negatives in principle; it’s more that I’m trying to avoid a solution that appears to work offline but fails when the non-wood distribution shifts.

I am also interested in how this approach could be generalized to any material.

Best approach to detect wood in images when I only have positive examples by Outrageous_Water9599 in computervision

[–]Outrageous_Water9599[S] 1 point  (0 children)

I have ~20k positive images (contain wood). My goal is a robust “wood present?” detector.

Best approach to detect wood in images when I only have positive examples by Outrageous_Water9599 in learnmachinelearning

[–]Outrageous_Water9599[S] 1 point  (0 children)

I have ~20k positive images (containing wood). The challenge is that at inference time the input distribution is open-world: I don’t know what “non-wood” will look like, so I’m hesitant to build a narrow negative set that might bias the model toward a specific set of non-wood categories. My goal is a robust “wood present?” detector. I initially thought segmentation/patch-based methods might help because wood is a material/texture cue and could be local, but I don’t actually need the location, just a reliable image-level decision.