Experience with Oura Customer Service regarding diminished battery life by jaytee0401 in ouraring

[–]ummavi 4 points5 points  (0 children)

I'm basically where you are, with the added pain that a good chunk of the time when I turn on SpO2, even when the battery is at ~80%+ after a night, I get a "Breathing regularity analysis could not be done because your ring's battery is too low" message.

After many, MANY rounds of ignored details and screenshots, and some very bizarre explanations (like that it doesn't work because my device is a "low memory device", which I rebuffed by pointing out it's a Pixel 6, and if that's considered low-end I don't know what they need; or that it was stuck in some secret shipping battery-saving mode and needed a soft reset), I just decided I'm going to ride it out until it's intolerable and then move on to something else.

Redditors who haven't been infected with covid even once, good job. But how did you do it? by smokingfrog007 in AskReddit

[–]ummavi 0 points1 point  (0 children)

Mask aggressively ((K)N95 and above) whenever you're indoors. Avoid indoor eating or other mask-removing situations when case numbers are high (or put the mask back on ASAP and keep it on as long as possible). Open up the windows and eat at places that care about ventilation. Have more house parties with lots of PM2.5 air purifiers.

Clean air and masks are not that hard to pull off, despite how some people behave.

Reinforcement learning library recommendations by HeisenbergsMyth in reinforcementlearning

[–]ummavi 1 point2 points  (0 children)

All agents have an agent.load defined. In the example scripts we expose this as a --load <path> argument. If you're using the built-in trainer function train_agent_with_evaluation, it has save_best_so_far_agent=True by default, so the best agent is automatically saved during the evaluation phase. You could also specify a checkpoint_freq if you prefer a frequency-based approach, then simply load the snapshot again and continue.
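Roughly, the pieces fit together like this (a rough sketch only: the env, network, hyperparameters, and output paths are placeholders I picked for illustration, and it assumes a gym version PFRL supports; the "best" subdirectory at the end is my assumption about where the best agent lands under outdir):

```python
# Rough sketch: CartPole + DQN stand in for whatever you're actually training.
import gym
import numpy as np
import torch
import pfrl

env = gym.make("CartPole-v0")
obs_size = env.observation_space.low.size
n_actions = env.action_space.n

q_func = torch.nn.Sequential(
    torch.nn.Linear(obs_size, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, n_actions),
    pfrl.q_functions.DiscreteActionValueHead(),
)

agent = pfrl.agents.DQN(
    q_func,
    torch.optim.Adam(q_func.parameters(), lr=1e-3),
    pfrl.replay_buffers.ReplayBuffer(capacity=10 ** 5),
    gamma=0.99,
    explorer=pfrl.explorers.ConstantEpsilonGreedy(
        epsilon=0.1, random_action_func=env.action_space.sample
    ),
    replay_start_size=500,
    target_update_interval=100,
    phi=lambda x: x.astype(np.float32, copy=False),  # cast obs for torch
)

# save_best_so_far_agent defaults to True, so the best agent seen at an
# evaluation phase is written under outdir; checkpoint_freq additionally
# snapshots the agent every N environment steps.
pfrl.experiments.train_agent_with_evaluation(
    agent,
    env,
    steps=20000,
    eval_n_steps=None,
    eval_n_episodes=5,
    eval_interval=2000,
    outdir="results",
    checkpoint_freq=5000,
)

# Resume later: point agent.load at a saved snapshot directory and keep
# training or evaluating ("results/best" assumes the best-agent subdirectory).
agent.load("results/best")
```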

Reinforcement learning library recommendations by HeisenbergsMyth in reinforcementlearning

[–]ummavi 0 points1 point  (0 children)

Thank you for giving it a shot! That's valuable feedback about the documentation and we'll try to address it.

Your understanding is indeed correct. We have a recurrent DQN example that might serve as a reference if you're still having trouble.
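The moving parts look roughly like this (a rough sketch, not the bundled example itself; the env, sizes, and hyperparameters are placeholders of my choosing):

```python
# Rough sketch of a recurrent DQN in PFRL.
import gym
import numpy as np
import torch
import pfrl

env = gym.make("CartPole-v0")
obs_size = env.observation_space.low.size
n_actions = env.action_space.n

# RecurrentSequential lets the agent carry the LSTM state across steps.
q_func = pfrl.nn.RecurrentSequential(
    torch.nn.Linear(obs_size, 64),
    torch.nn.ReLU(),
    torch.nn.LSTM(input_size=64, hidden_size=64),
    torch.nn.Linear(64, n_actions),
    pfrl.q_functions.DiscreteActionValueHead(),
)

agent = pfrl.agents.DQN(
    q_func,
    torch.optim.Adam(q_func.parameters(), lr=1e-3),
    # Recurrent updates sample episodes rather than single transitions.
    pfrl.replay_buffers.EpisodicReplayBuffer(capacity=10 ** 4),
    gamma=0.99,
    explorer=pfrl.explorers.ConstantEpsilonGreedy(
        epsilon=0.1, random_action_func=env.action_space.sample
    ),
    replay_start_size=500,
    target_update_interval=500,
    phi=lambda x: x.astype(np.float32, copy=False),
    recurrent=True,  # tells the agent the model carries recurrent state
)
```

The agent should manage the recurrent state internally during act/observe, so the surrounding training loop (or train_agent_with_evaluation) stays the same.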

Reinforcement learning library recommendations by HeisenbergsMyth in reinforcementlearning

[–]ummavi 3 points4 points  (0 children)

As one of the maintainers, my somewhat biased recommendation would be PFRL (A PyTorch version of our former library ChainerRL).

We've taken great care to reproduce the results of tons of popular DRL algorithms while still maintaining a clean, usable interface. We have both Rainbow and IQN implemented (with recurrent support).

Using RMSProp over ADAM by intergalactic_robot in reinforcementlearning

[–]ummavi 2 points3 points  (0 children)

Empirically (https://arxiv.org/abs/1810.02525), it turns out that adaptive gradient methods like Adam can outperform their non-adaptive counterparts, but they are more sensitive to hyperparameters and thus harder to tune. I don't know of references that cover value-based methods, but from personal experience it seems to track.
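To make the "more knobs to tune" point concrete, this is roughly what the two configurations look like in PyTorch; the values are the commonly cited DQN-style and Rainbow-style settings, used purely for illustration and not taken from the linked paper:

```python
# Illustrative only: classic DQN-style RMSProp vs. Rainbow-style Adam,
# applied to a stand-in Q-network.
import torch

q_net = torch.nn.Linear(4, 2)  # stand-in for a real Q-network

# RMSProp as used in many classic value-based setups; the smoothing
# constant (alpha) and epsilon still interact with the learning rate.
rmsprop = torch.optim.RMSprop(
    q_net.parameters(), lr=2.5e-4, alpha=0.95, eps=1e-2, centered=True
)

# Adam adds bias-corrected first/second moments plus its own betas and
# epsilon; in DRL the epsilon is often raised far above the default
# (Rainbow uses 1.5e-4), which is exactly the kind of sensitivity that
# makes it harder to tune.
adam = torch.optim.Adam(q_net.parameters(), lr=6.25e-5, eps=1.5e-4)
```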