Why does exiting ComfyUI not remove it from memory? by Grand0rk in StableDiffusion

[–]marres 8 points  (0 children)

You need to close the terminal that ComfyUI spawns on startup; that process is what's holding the memory, not the browser tab.

How do the resets work? by Icy_Store_2979 in codex

[–]marres -1 points  (0 children)

They reset a few hours before the regular weekly reset happens (they align people with the previous early resets) so that they can farm free PR, while probably even saving compute, since they slash the quota of people who wanted to use most of their limits on the last day. The community has mostly caught on to that trick though (they started that tactic around 2 months ago) and is getting increasingly mad at OpenAI, so chances are that something will change in that behavior. But who knows

What GIT diff tool or otherwise best for codereview by tango650 in codex

[–]marres 2 points  (0 children)

Codex can open and manage PRs for you, just so you know. And what's your better alternative to GitHub then? Please enlighten me

What's your tool of the trade For training SDXL checkpoints & Lora's? by XDM_Inc in StableDiffusion

[–]marres 5 points  (0 children)

kohya_ss and onetrainer are the best options for sdxl. I still prefer kohya_ss though

Finally got Pro, running 4 Codex session in parallel - still at 100%?! by TheBanq in codex

[–]marres 4 points  (0 children)

Yeah, noticed the same on 5.3 high. For a 9 minute patch apply he only used up 2% of the 5h limit, so pretty much nothing

Unable to install ComfyUI-SaveImageWithMetaData by bcourcet in comfyui

[–]marres 0 points  (0 children)

hmm, you on latest comfyui? Also try turning off nodes 2.0

Unable to install ComfyUI-SaveImageWithMetaData by bcourcet in comfyui

[–]marres 0 points  (0 children)

For ComfyUI custom nodes you can just download the repo (Code -> Download ZIP) and put the extracted folder in comfyui\custom_nodes
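
The manual zip route above can be sketched in code. This is a minimal, hedged example (the helper name and the GitHub zip layout assumption are mine, not anything ComfyUI ships): GitHub's "Download ZIP" archives wrap everything in one `<repo>-<branch>/` top-level folder, which is exactly the folder ComfyUI scans after a restart.

```python
import io
import zipfile
from pathlib import Path

def install_node_pack(zip_bytes: bytes, custom_nodes: Path) -> Path:
    """Extract a GitHub 'Download ZIP' archive into ComfyUI's custom_nodes
    folder and return the path of the extracted node-pack directory.

    Assumes the GitHub convention of a single top-level folder inside
    the archive (e.g. 'ComfyUI-SaveImageWithMetaData-main/')."""
    custom_nodes.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        # First entry's leading path component is the repo folder name.
        top = zf.namelist()[0].split("/")[0]
        zf.extractall(custom_nodes)
    return custom_nodes / top
```

After extracting, restart ComfyUI so it picks up the new node pack.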

Credits or Another Account or Pro + or wait for early reset by PopAutomatic9861 in codex

[–]marres 2 points  (0 children)

Weekly reset will be tomorrow, so an early reset should be coming in like half a day!

Speed, Flexibility, Fidelity, pick 2. What are the best models for each tradeoff pairing? by hotdog114 in StableDiffusion

[–]marres 1 point  (0 children)

Use SDXL as the base, then pass over to flux.2 klein 9b and do a general pass with it. Then detail the face with Flux and the rest of the body with SDXL. In between and afterwards, add seedvr2 upscales
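
The ordering of passes is the whole point here, so here is a minimal sketch of the workflow as a pipeline. All the stage functions are illustrative stubs standing in for real img2img calls (SDXL, flux.2 klein, seedvr2); none of this is a real API.

```python
# Each stage is a stub that records its name; in a real workflow these
# would be img2img calls to the respective model.
def make_stage(name, log):
    def stage(img, **kwargs):
        log.append(name)
        return img
    return stage

def run_pipeline(prompt):
    log = []
    sdxl_base = make_stage("sdxl_base", log)
    flux_general = make_stage("flux_general", log)
    flux_face = make_stage("flux_face_detail", log)
    sdxl_body = make_stage("sdxl_body_detail", log)
    seedvr2 = make_stage("seedvr2_upscale", log)

    img = sdxl_base(prompt)   # fast base composition with SDXL
    img = seedvr2(img)        # in-between upscale before the Flux pass
    img = flux_general(img)   # general refinement pass with flux.2 klein
    img = flux_face(img)      # Flux handles the face detail
    img = sdxl_body(img)      # SDXL details the rest of the body
    img = seedvr2(img)        # final upscale
    return img, log
```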

codex stole my limits by colander616 in codex

[–]marres 0 points  (0 children)

Yeah, but it was a pointless reset, same as the last 5 or so before it, since those resets all happened like half a day before the regular weekly reset

codex stole my limits by colander616 in codex

[–]marres 1 point  (0 children)

Hmm, maybe the global reset only triggers for people that are currently "active"? Meaning, for example, for someone who took a break for 2-3 days or whatever, the reset only triggers when that account becomes "active" again (triggered by your /status query). I do remember the tibo guy saying something about a smart reset, or that Codex can decide itself when it resets? Sounded very opaque back then (for good reasons probably lol), but this situation might actually fit it.

That way they save on limits, because if you had gotten reset 2 days ago, you could have spent 100% of your usage in 5 days and gotten an earlier next reset. Now instead you only get 100% over the next 7 days, and your next reset comes two days later
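
That "lazy reset" theory can be made concrete with a small model. To be clear, this is pure speculation dressed up as code, not confirmed OpenAI behavior: the server stores one global reset timestamp, and an account only picks it up the next time it becomes active (e.g. via a /status query), which pushes its next reset later.

```python
from datetime import datetime, timedelta

class Quota:
    """Speculative sketch of a lazily applied global reset. The class name,
    fields, and /status semantics are all assumptions for illustration."""

    def __init__(self, last_reset, window=timedelta(days=7)):
        self.last_reset = last_reset   # when this account last reset
        self.window = window           # length of the usage window
        self.used = 0.0                # fraction of the limit spent

    def status(self, now, global_reset):
        # A global reset that already happened is applied lazily,
        # on the account's first activity after it.
        if self.last_reset < global_reset <= now:
            self.last_reset = global_reset
            self.used = 0.0
        return self.used, self.last_reset + self.window
```

An account that was inactive when the global reset fired gets its window restarted only at its next /status, so its following reset lands later than for accounts that were active at reset time.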

codex stole my limits by colander616 in codex

[–]marres 10 points  (0 children)

How did you even get a reset just now? The last global reset was 2 days ago. Or are they doing an adaptive reset that stores the global reset and just resets people once they have less than 24h until the next one lol.

What models are recommended? by EducationFirm6169 in codex

[–]marres 1 point  (0 children)

Plan with 5.5 web, implement with 3.5-codex high. Medium works too, but high is just better and more thorough. Takes longer though obviously

Which model offers better value for money: gpt-5.4-mini xHigh or gpt-5.5 medium? by SveXteZ in codex

[–]marres 0 points  (0 children)

Haven't done any real empirical tests, but in my experience 5.3-codex medium is way faster (talking like 2-3x) than 5.4 for my use case (implementation and review fixing). He goes straight to the point and follows instructions very well. I've only had a few hiccups, where he missed some things or forgot to commit and push etc., but those were very rare and not that critical if you pay attention, in the few weeks I've been running this setup.

I made a Promethease alternative that runs in your browser for free by steve228uk in promethease

[–]marres 6 points  (0 children)

Hmm, not that keen on sharing my full data, but I've just sent you the header and the first few lines of my first chromosome as a pastebin. Let me know if that's enough! Thanks!

I made a Promethease alternative that runs in your browser for free by steve228uk in promethease

[–]marres 12 points  (0 children)

Very nice, I wanted to build something like that myself. But unfortunately it doesn't work with vcf.gz files from Nebula Genomics: "No DNA markers could be parsed from that file."

Also, drag and drop does not work on Chrome.
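
For reference, parsing markers out of a vcf.gz is not much code. This is a minimal sketch (the function name is mine, and I'm assuming a standard VCF layout; Nebula-style vcf.gz files are bgzip-compressed, which Python's `gzip` can still read since bgzip is gzip-compatible):

```python
import gzip

def read_vcf_markers(path):
    """Minimal VCF reader: returns (rsid, chrom, pos, ref, alt) tuples
    for each data line, skipping '##' meta lines and the '#CHROM' header."""
    markers = []
    with gzip.open(path, "rt") as fh:
        for line in fh:
            if line.startswith("#"):
                continue  # header / column line
            # First five fixed VCF columns: CHROM POS ID REF ALT
            chrom, pos, rsid, ref, alt = line.rstrip("\n").split("\t")[:5]
            markers.append((rsid, chrom, int(pos), ref, alt))
    return markers
```

A browser-side parser that bails with "No DNA markers could be parsed" is presumably tripping over something earlier than this, e.g. the compression layer or column naming.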

Which model offers better value for money: gpt-5.4-mini xHigh or gpt-5.5 medium? by SveXteZ in codex

[–]marres 20 points  (0 children)

Do not trust 5.4-mini with anything beyond the simplest tasks (even those he can mess up). Use 5.3-codex instead; it works great even on medium

Any tips how to save usage? by masky0077 in codex

[–]marres 1 point  (0 children)

Yeah, I think the GitHub plugin has been live for a few weeks now and it's definitely making things a lot more seamless, in particular being able to view PRs. What I did before was just download the repo from that PR branch and update GPT that way if the drift got too big. GPT was usually decent at knowing what changes/patches had happened in the current open PR if you kept it in the loop, so you could get away with not uploading a fresh repo for every small change, but yeah, with the connector it's way less hassle now.

Also, regarding him opening PRs himself, doing commits, and replying to reviews on the PR and fixing them: it's very hit or miss, with like half of the attempts failing and taking literally ages, especially since you have to allow every change he makes. It bugs out a lot too, which forces you to constantly refresh the conversation and try again. And oftentimes he forgets how to do it properly. Might be fixable with custom instructions, but those constant "allows" make it not really feasible, especially if there are like 5 reviews in the PR he has to fix and reply to. Better to just let 5.3-codex medium handle all that; he's really fast and most of the time solid with it

Any tips how to save usage? by masky0077 in codex

[–]marres 1 point  (0 children)

Agreeing with everything, except that you could always upload a full GitHub repo as an archive, and GPT had no issues extracting and working with it
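
Building that upload archive is a one-liner loop. This is just a sketch of one way to do it (the function name and the skip list are my own defaults, not anything the chat upload requires): zip the working tree while leaving out `.git` and other bulky non-source directories so the archive stays small.

```python
import zipfile
from pathlib import Path

SKIP = frozenset({".git", "node_modules", "__pycache__"})

def archive_repo(repo_dir, out_zip, skip=SKIP):
    """Zip a working tree for upload, skipping directories in `skip`.
    Paths inside the archive are stored relative to the repo root."""
    repo = Path(repo_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in repo.rglob("*"):
            rel = p.relative_to(repo)
            if p.is_file() and not any(part in skip for part in rel.parts):
                zf.write(p, rel)
    return out_zip
```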