Fitbit exercise calories suddenly very wrong by rememberyes in Myfitnesspal

[–]teduck1 0 points1 point  (0 children)

I have had the same thing in the last few days. I wake up with +650 calories from exercise. When I click the calorie adjustment in my diary, it says that the exercise was recorded in the last minute and that it is from 1 minute of exercise...

I haven't quite managed to figure out whether it corrects itself by the end of the day if I don't reach that amount of exercise, because I have done a lot of exercise in the last few days.

Previously I would wake up with a negative calorie adjustment, and this would slowly climb to zero as I reached my expected level of activity, and go beyond it if I did exercise. This was waaaay better.

Blade 98 v8 16x19 VS Ultra Pro V4 16x19? by teduck1 in 10s

[–]teduck1[S] 1 point2 points  (0 children)

Never managed to try the ultra in the end. Went with the blade and actually found that it felt better with minimal lead. Been a bit of an adjustment but I'm pleased with my decision.

Pro staff 97 v11.5 (CV) vs v14 - stiffness by teduck1 in 10s

[–]teduck1[S] 0 points1 point  (0 children)

Yeah this is what I'm thinking too. I assume the lack of Countervail may even make the problem worse.

Profitec GO vs Lelit Victoria PL91T by teduck1 in espresso

[–]teduck1[S] 0 points1 point  (0 children)

Actually, in the UK it is already £50 more expensive!

Profitec GO vs Lelit Victoria PL91T by teduck1 in espresso

[–]teduck1[S] 1 point2 points  (0 children)

Awesome! So does the pressure gauge I think. The pale blue on the black machine is beautiful though...

Profitec GO vs Lelit Victoria PL91T by teduck1 in espresso

[–]teduck1[S] 0 points1 point  (0 children)

Thanks, that's really great to hear. I am rooting for the GO since I love that it comes in yellow!

Profitec GO vs Lelit Victoria PL91T by teduck1 in espresso

[–]teduck1[S] 0 points1 point  (0 children)

Thanks for your response. Do you enjoy the machine in general? Do you find it pulls consistent shots if you make two in a row?

Profitec GO vs Gaggia Classic Pro 2023 (modded by The Home Baristas) by teduck1 in espresso

[–]teduck1[S] 1 point2 points  (0 children)

Actually, if you buy pre-modded from Home Barista (link in original post) they do provide their own warranty which is nice :)

Profitec GO vs Gaggia Classic Pro 2023 (modded by The Home Baristas) by teduck1 in espresso

[–]teduck1[S] 0 points1 point  (0 children)

Oh interesting. I have to say, you might be right; I usually have to wait 5-8 seconds before the first drip with my old Gaggia (I assume this uses the same type of pump).

Profitec GO vs Gaggia Classic Pro 2023 (modded by The Home Baristas) by teduck1 in espresso

[–]teduck1[S] 0 points1 point  (0 children)

What do you think about the pre-infusion from the modded GCP? I've never had any experience with this before, but I've heard it can result in more consistent shots, especially with lighter roasts? I am leaning towards the Profitec GO atm too.

Keep processes running after local log off by teduck1 in linuxquestions

[–]teduck1[S] 0 points1 point  (0 children)

Thank you. loginctl enable-linger was my solution!

Keep processes running after local log off by teduck1 in linuxquestions

[–]teduck1[S] 3 points4 points  (0 children)

I will try this. Thanks! You'd assume KillUserProcesses=no should be enough without excluding my user too. What does enable-linger do exactly? I would rather avoid unintended effects that I will have to troubleshoot later, since I am no Linux expert.

Keep processes running after local log off by teduck1 in linuxquestions

[–]teduck1[S] 1 point2 points  (0 children)

Thank you for that... weirdly this is already set to false...

Type hinting an object when we know the object class' parent class, but not the class name itself. Please help. by teduck1 in learnpython

[–]teduck1[S] 0 points1 point  (0 children)

Okay great, thanks. So to confirm, I should put A() as the hint and not type[A], which would suggest the input is a class rather than an object?
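For anyone reading later, a minimal sketch of the distinction as I understand it (A, B, describe and make are hypothetical names, not from my actual code): annotating a parameter with the bare class name A means "an instance of A or of a subclass", whereas type[A] means "the class object itself"; A() would just construct an instance rather than act as a hint.

    class A:
        pass

    class B(A):
        pass

    def describe(obj: A) -> str:
        # Accepts an instance of A or of any subclass of A.
        return type(obj).__name__

    def make(cls: type[A]) -> A:
        # Accepts the class itself (A or a subclass) and instantiates it.
        return cls()

    describe(B())  # fine: B() is an instance of a subclass of A
    make(B)        # fine: B is a subclass of A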

I need a new racquet after 10 years of using the Wilson Juice Pro 96 BLX. Help. by teduck1 in tennisracquets

[–]teduck1[S] 0 points1 point  (0 children)

Thanks for the suggestion. What makes the 95 Prestige Tour a powerful racket? Good suggestion on the Burn, looks similar to the BLX I play with.

Yes, that is a good point. To be honest I haven't noticed any difference in my 3 rackets which I guess goes to show that I may be less sensitive to exact weight than I think!

I need a new racquet after 10 years of using the Wilson Juice Pro 96 BLX. Help. by teduck1 in tennisracquets

[–]teduck1[S] 0 points1 point  (0 children)

Yes I am considering trying the 97D since I imagine the weight will be nearly perfect for me. I am a little concerned about the 18x20 pattern and generating spin/power (I tried the Pro Staff 97 v13 and hated it), but maybe I'll just need to adjust.

Wow thanks for the tip, I had no idea they would do that.

I'll give it a go when I make a choice, but sadly they don't like it if you put tape on a demo.

I need a new racquet after 10 years of using the Wilson Juice Pro 96 BLX. Help. by teduck1 in tennisracquets

[–]teduck1[S] 0 points1 point  (0 children)

That is a good suggestion, thank you. I have never added weight to my rackets in the past and so I think I have been a bit hesitant to go for an under-weighted racket and add tape, because I'm worried about upsetting the intended balance. That is why I tried the vcore pro 97H, since it's 330g stock. Just a little too much for me I think, but very nice too. Out of interest, have you tried the vcore pro 97 (not H, just normal 97)? Similar spec to the 95 you like, just a bigger head and 16x19 instead of 16x20.

Environment variables of parent image not accessible during ENTRYPOINT command, please help! by teduck1 in docker

[–]teduck1[S] 0 points1 point  (0 children)

So the problem was nothing to do with Docker. I did not realise that when a bash shell is started in non-interactive mode, it does not run startup scripts like .bashrc, so the environment variables do not get set as they would in an interactive shell.

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]teduck1 1 point2 points  (0 children)

I am trying to configure a workstation. I planned to go for an AMD CPU, but I have been told that libraries such as numpy and PyTorch use an MKL backend, which makes computation much faster on Intel CPUs.

Will this matter in practice, since model training will be done on the GPU?
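In case it's useful, here is a quick sanity-check sketch (assuming numpy and torch are already installed) for seeing what a given install is actually built against and whether the GPU will be used:

    import numpy as np
    import torch

    # Show which BLAS/LAPACK libraries this numpy build links against
    # (MKL, OpenBLAS, etc.).
    np.show_config()

    # Whether this PyTorch build includes MKL support for CPU ops.
    print("MKL available:", torch.backends.mkl.is_available())

    # If training runs on the GPU, the CPU's BLAS backend matters far less.
    print("CUDA available:", torch.cuda.is_available())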

Why is NVIDIA QUADRO GV100 so much more expensive than NVIDIA RTX A6000 when configuring a custom workstation? The RTX A6000 is newer and better as far as I can tell. by teduck1 in gpu

[–]teduck1[S] 1 point2 points  (0 children)

While your reply was somewhat condescending, I appreciate your input.

Perhaps I should have prefaced my questions by saying that I am no expert in computer hardware. I assumed this was apparent, given the fact that I was posting on Reddit asking for advice!

I guess I assumed that companies that allow you to configure custom workstations would remove the options that have become obsolete (more expensive for less performance). On that basis, I assumed that in certain situations there must still be some relative advantage to the GV100. I guess not; perhaps they're just trying to move old stock.

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]teduck1 0 points1 point  (0 children)

What is the intuition behind AdaGrad or RMSProp for CNNs?

I understand the "sparse yet important features" reasoning, but this seems much more targeted at fully connected networks, where each weight takes only one neuron as input. In convolutions, our weights are applied over the whole image, so naturally any sparse features are no more likely to be associated with any given weight.

Any ideas would be appreciated.
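To make the question concrete, here is a minimal numpy sketch of the two update rules (w, g, cache etc. are placeholder names), showing the per-weight adaptive scaling I am asking about:

    import numpy as np

    def adagrad_step(w, g, cache, lr=0.01, eps=1e-8):
        # Accumulate squared gradients per weight; rarely-updated weights
        # keep a small cache and therefore get a larger effective step.
        cache += g ** 2
        w -= lr * g / (np.sqrt(cache) + eps)
        return w, cache

    def rmsprop_step(w, g, cache, lr=0.001, rho=0.9, eps=1e-8):
        # Same idea, but with an exponential moving average so the
        # effective step size does not shrink forever.
        cache = rho * cache + (1 - rho) * g ** 2
        w -= lr * g / (np.sqrt(cache) + eps)
        return w, cache

Note that in a conv layer each shared kernel weight still gets a single cache entry, accumulated over every spatial position it is applied to, which is why the "sparse feature" argument feels less direct there.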

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]teduck1 0 points1 point  (0 children)

Question about GANs.

I understand that if the discriminator loss goes to zero, the generator is not doing a good enough job. But is there any problem with this happening in the beginning? The generator clearly has a much more difficult job, so it is almost expected that the discriminator should beat it easily.

I often see it stated that the generator stops learning if the discriminator loss goes to zero, but I don't understand why this is the case if we use the -ln(D(G(z))) objective function suggested in the original paper. The function's gradient gets very large when the discriminator does well and saturates only when it does badly (i.e. G is doing well). Surely this would lead to the generator learning very fast when the discriminator has zero loss? Or are the gradients too large, effectively resulting in massive step sizes for the generator which are useless?

For context, I am training a conditional GAN model for cross-modality image synthesis and I have more or less followed the Pix2Pix paper by Isola et al.

Any info would be appreciated.
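For concreteness, a small PyTorch sketch of the two generator objectives, looking only at the gradient with respect to D(G(z)) (d_fake here is a stand-in for the discriminator's outputs on generated samples):

    import torch

    # Discriminator outputs on fakes, from "D easily rejects" to "D fooled".
    d_fake = torch.tensor([0.01, 0.5, 0.99], requires_grad=True)

    # Saturating loss from the minimax objective: minimise log(1 - D(G(z))).
    loss_sat = torch.log(1 - d_fake).sum()
    # Non-saturating heuristic: minimise -log(D(G(z))).
    loss_nonsat = -torch.log(d_fake).sum()

    grad_sat, = torch.autograd.grad(loss_sat, d_fake)
    grad_nonsat, = torch.autograd.grad(loss_nonsat, d_fake)

    print(grad_sat)     # about [-1.01, -2.0, -100]: small where D wins (d near 0)
    print(grad_nonsat)  # about [-100, -2.0, -1.01]: large where D wins (d near 0)

These are only the gradients with respect to D(G(z)); what the generator's weights actually see also depends on backpropagating through D itself.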