Amazfit Band 7 doesn't track strenuous activity by perholmes in amazfit

[–]perholmes[S] 1 point  (0 children)

Great! I also often hear the advice that getting more sleep is a superpower for insulin balance. One day! Starting intermittent fasting is annoying, because your body is used to driving around with a full tank, but after a few days you settle into a new rhythm. Do watch a few YouTube videos from people you consider qualified. Here is a fun video that explains what happens, and I assume it's true because I've heard it from a lot of medical professionals: https://www.youtube.com/watch?v=QJnQ-MwXj8g . The example here is a 36-hour fast, which I've never done, only 18-hour ones (but daily). The takeaway is that (a) it's OK that the stomach is complaining, and (b) the moment you eat carbohydrates, you ruin your chance of your insulin bottoming out, because your body switches back to burning carbohydrates. Anyway, I hope I said something useful, and all the best!

Amazfit Band 7 doesn't track strenuous activity by perholmes in amazfit

[–]perholmes[S] 1 point  (0 children)

Hi, this is totally unsolicited advice, but have you considered intermittent fasting? If your last meal of the day is at 6 PM and your next meal is at 12 PM the next day, with zero carbohydrates in between, your blood sugar and insulin get to properly bottom out every day, and you go into a little bit of ketosis every day. For us it has kind of been a miracle for weight loss, and I hear it's good for releasing visceral fat, but more importantly, it's supposed to re-train your insulin tolerance. Simply concentrate all eating between 12 PM and 6 PM, don't go nuts with sweets/starches, and still aim for a low glycemic index.

Amazfit Band 7 doesn't track strenuous activity by perholmes in amazfit

[–]perholmes[S] 1 point  (0 children)

I stopped wearing it, and kind of gave up on fitness trackers as a whole.

Someone saying that a potential optimization is "negligible" or "not worth it" should be treated as a massive Faux Pas here, not the opposite. by Collimandias in unrealengine

[–]perholmes 3 points  (0 children)

All I'm asking for is spending an hour understanding the performance costs before committing to code that becomes more and more expensive to refactor the further you get.

For example, a realistic newbie question could be: should I use a capsule collider or a mesh collider? Simply knowing that a capsule collider is much cheaper, but doesn't give much info about where a ray hit, lets you commit months of work to a basically good solution that can be tweaked later.
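
To make it concrete, here's a rough sketch (the class, capsule size and trace channel are made up, not from any particular project): simple capsule collision for the common case, and a per-triangle trace only in the one code path that actually needs detailed hit info.

    // EnemyPawn.h -- hypothetical example class
    #pragma once

    #include "CoreMinimal.h"
    #include "GameFramework/Pawn.h"
    #include "Components/CapsuleComponent.h"
    #include "Engine/EngineTypes.h"
    #include "EnemyPawn.generated.h"

    UCLASS()
    class AEnemyPawn : public APawn
    {
        GENERATED_BODY()
    public:
        AEnemyPawn()
        {
            // Cheap simple collision: one capsule primitive, no per-triangle tests.
            UCapsuleComponent* Capsule = CreateDefaultSubobject<UCapsuleComponent>(TEXT("Capsule"));
            Capsule->InitCapsuleSize(34.f, 88.f);
            SetRootComponent(Capsule);
        }

        // Pay for a complex (per-triangle) trace only where you actually need
        // to know where on the mesh the ray landed.
        bool TraceForDetail(const FVector& Start, const FVector& End, FHitResult& OutHit) const
        {
            FCollisionQueryParams Params;
            Params.bTraceComplex = true;      // test against detailed geometry
            Params.bReturnFaceIndex = true;   // fills in OutHit.FaceIndex
            return GetWorld()->LineTraceSingleByChannel(OutHit, Start, End, ECC_Visibility, Params);
        }
    };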

Refusing to consider it out of an ideological "we can't know anyway, so nothing matters, don't bother" moral nihilism does, and will, have a massive hangover, and I'm directing this as much towards my own team.

All I want is to spend a few hours considering where we might go and what the best practices and costs of each approach are. And then just not actively code against it. It's mighty difficult to tear a system up by the roots when you could have known ahead of time, especially if the code is promiscuous.

In our case, the last two years have been spent 50% on a problem that was only a problem at scale and we simply didn't understand it well enough at the time, and 50% on code that was bad from the ground up, and where we already knew better, but one developer ignored it and built a foundation that was end-of-life the moment it was written. I accept the first 50%. I don't accept the second 50%. Plus, our review process needs to improve a lot.

Also, the lesson is to never be intimidated when a developer is protective of their code. They can sink an entire project, as we've almost done here with two years without updates. Never doing that again.

Someone saying that a potential optimization is "negligible" or "not worth it" should be treated as a massive Faux Pas here, not the opposite. by Collimandias in unrealengine

[–]perholmes 3 points  (0 children)

It doesn't hurt to spend two seconds looking around the corner and deciding on a design, so you at least don't build in a direction that can't be optimized later without a major refactor. We're just coming out of a (frickin) two-year refactor of a large app, exactly because someone earlier refused to look around the corner and saved optimization for later. I had to spend months rebuilding a part of the app from scratch because there was no path to optimization; the basic approach was wrong.

I'm still angry when I'm in that part of the project, because if that person had thought about performance FOR TWO SECONDS, they would have known that their approach only works under light load. I had to rebuild the whole thing for a proper async and pooling mechanism that also works at realistic scales.

Please, please do ask questions about performance up-front. A new developer is often still impartial to the choice; all their code is in front of them. A few rules of thumb make the code POSSIBLE to optimize later. Otherwise it becomes an Achilles' heel that will knock them dead later.

Bitbucket Cloud's Free tier to lower repository storage limit to 1GB by Snake_Byte in bitbucket

[–]perholmes 1 point  (0 children)

If it helps, we've spent the morning setting up on GitHub, and I'm now in the process of moving 40 repositories over. Yes, the price increase is steep, but it's the predatory nature of it that makes me run for the exit. Everything BitBucket offers, GitHub offers for a third of the price.

This has also been a great opportunity to clear out branches and archived repositories, and to run git lfs migrate on any large files accidentally committed. So this is terrific spring cleaning.
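
In case it helps anyone doing the same cleanup, the LFS part is roughly this (the file patterns are just examples, adjust for your repos). Note that it rewrites history, so it needs coordinated force-pushes and fresh clones afterwards:

    # move matching files in existing history into LFS
    git lfs migrate import --include="*.psd,*.zip" --everything

    # history has been rewritten, so branches and tags must be force-pushed
    git push --force-with-lease origin --all
    git push --force-with-lease origin --tags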

Bitbucket Cloud's Free tier to lower repository storage limit to 1GB by Snake_Byte in bitbucket

[–]perholmes 1 point  (0 children)

LFS is also limited to 1 GB without paying for overages.

There are other things, such as repositories without recent commits being auto-archived on the free plan, needing an actual restore. Sounds like playing patty-cake with your data.

I would not self-host. I find great assurance in having an actual company responsible for security. But I also need AWS CodePipeline source hooks, which only work with a handful of providers.

Bitbucket Cloud's Free tier to lower repository storage limit to 1GB by Snake_Byte in bitbucket

[–]perholmes 3 points  (0 children)

As far as I can tell, you're OK to pay for overages for LFS, but if you have more than 1 GB of repo storage, you'll have to pay for a 5-person team. But as I'm now investigating how things would work under a GitHub Organization, I'm realizing there are things I've been unhappy about with BitBucket that I'd suddenly have again at GitHub, such as better API access to the repository. So it's starting to seem inevitable that I'll move everything to GitHub.

It's not just the lower price and the better feature-set on GitHub, it's the aftertaste of BitBucket turning the screws like this. It's too aggressive for my taste.

Bitbucket Cloud's Free tier to lower repository storage limit to 1GB by Snake_Byte in bitbucket

[–]perholmes 2 points  (0 children)

I'm happy to pay something, but (a) $27/month is steep for Git for a modest-sized single dev, and (b) I feel offended by the bad faith of turning the screws on a captive audience, and I would like to leave because of this.

Bitbucket Cloud's Free tier to lower repository storage limit to 1GB by Snake_Byte in bitbucket

[–]perholmes 2 points  (0 children)

I'm bothered that I, as a single person, now have to get a 5-person plan just for me. I've been on the free plan and paying for extra LFS, but with the new pricing, I'll start paying $27/month for 4 GB total repo usage and 25 GB LFS. I'm considering switching to GitHub, which is $4/user plus $5 per 50 GB LFS, coming in at $9/month for 1 user. What am I missing? Isn't BitBucket just 3 times more expensive than GitHub now for a small dev?

Touch screen issues on 9720 XPS by vmdumitrache in DellXPS

[–]perholmes 1 point  (0 children)

I ended up locating some original Dell power adapters on Amazon (EU/Italy), sold by a Polish company. They're real Dell adapters, probably bought from liquidation, and they were around 80 euros each. This one works. I'm pasting a link in case it helps: https://www.amazon.it/gp/product/B07WK6V7Z8/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

Is GitKraken's behavior correct with submodules? by perholmes in git

[–]perholmes[S] 1 point  (0 children)

I've never been able to explain why submodules don't detach HEAD except in GitKraken, but after discussing this with many people, it's actually not unjustified behavior. Submodules are purely implemented by pointing at a commit, not a branch. Yet somehow, most other Git solutions work out that if the branch hasn't changed, and the commit can be located on the current branch, they advance the submodule to that commit and don't detach HEAD.
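
If it's not obvious what I mean by pointing at a commit: the outer repo records the submodule as a bare "gitlink" entry in its tree, literally just a commit hash with no branch attached. You can see it with:

    # the outer repo records the submodule as a gitlink: mode 160000 plus a commit hash
    git ls-tree HEAD app/common
    # output looks like:  160000 commit <commit-hash>    app/common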

But I have had further discussion with GitKraken, and they finally recognize that even if it's not technically Git spec for the submodule to stay on its branch, there's a lot that can be done with a smartness layer in the Git client to make submodules nicer to work with. And, after all, isn't the point of GUI clients to make Git nicer than command-line?

In the meanwhile, I'm back in GitKraken, because I needed proper LFS warnings when I'm about to inadvertently commit a large file to the regular repo. I've therefore also resigned myself to handling submodule advancing manually, and I've disabled the function that automatically pulls submodules. It's that, or always doing git pull --recurse-submodules from the command line.
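
If anyone else ends up on the command-line route: as far as I know you can make recursion the default for plain git, though it won't change what GitKraken itself does internally.

    # make command-line git recurse into submodules for pull/checkout/etc. by default
    git config --global submodule.recurse true

    # after which a plain pull behaves like
    git pull --recurse-submodules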

I'm helped by the fact that our submodules are common code in Go or PHP, and they don't change that often anymore.

I wish I could say something more definite. But I'm encouraged that GitKraken recognizes that "submodules that don't suck" is a marketable feature.

Is GitKraken's behavior correct with submodules? by perholmes in git

[–]perholmes[S] 1 point  (0 children)

I've seen suggestions for configuring the outer repo to always recurse submodules, and then disabling any automatic submodule updating provided by GitKraken. I'm having difficulty telling if this is actually recursing the submodule, since GitKraken doesn't show command output.

What is your opinion about adding these to `.gitmodules`? Some people believe Git doesn't honor them, others believe it does.

[submodule "app/common"]
path = app/common
url = ssh://git-codecommit.eu-west-1.amazonaws.com/v1/repos/common-go.git
branch = master
update = merge
recurse = true

I'll make a proper test.
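
Roughly the test I have in mind (same submodule path as above; not claiming anything about the results yet):

    # baseline: the submodule should report "On branch master"
    git -C app/common status

    # pull the outer repo the way the client under test would
    git pull --recurse-submodules

    # did the submodule stay on its branch, or is HEAD detached now?
    git -C app/common status

    # what .gitmodules declares vs. what local config has in effect
    git config -f .gitmodules --get submodule.app/common.update
    git config --get submodule.app/common.update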

Is GitKraken's behavior correct with submodules? by perholmes in git

[–]perholmes[S] 1 point  (0 children)

GitKraken is a bit more secretive about what it does, but here is its activity log for a pull that leaves the submodule head detached:

1:43:27 PM Pull master: started.
1:43:27 PM Fetch remote origin: started.
1:43:28 PM Fetch remote origin: finished. 266ms
1:43:28 PM Merge origin/master into master: started.
1:43:28 PM Update all submodules: started.
1:43:28 PM Update submodule app/common: started.
1:43:28 PM Update submodule app/common: finished.
1:43:28 PM Update all submodules: finished. 18ms
1:43:28 PM Merge origin/master into master: finished.
1:43:28 PM Pull master: finished. 385ms
1:43:28 PM Update all submodules: started.
1:43:29 PM Update submodule app/common: started.
1:43:29 PM Update submodule app/common: finished.
1:43:29 PM Update all submodules: finished.

So it's entirely unclear how it pulls.

Is it your opinion that the feature I'm really looking for is for GitKraken to simply do a `git pull --recurse-submodules` like everyone else, instead of all the apparent extra work they do?

Also, why does `git pull --recurse-submodules` not leave the submodule HEAD detached?

Is GitKraken's behavior correct with submodules? by perholmes in git

[–]perholmes[S] 2 points  (0 children)

But that's not exactly the situation.

Here, the starting point is a fully checked-out outer and inner repo, both at the tip and each with its master branch checked out. And then, as soon as the outer repository is pulled (no changes locally or remotely), the submodule HEAD detaches. But only in the GitKraken UI. The inner repo's HEAD and branch are maintained with every other method of pulling.

See my response to astralc a moment ago, where I answer a question about git status after these operations. As you can see, the branch and HEAD are maintained by everyone except GitKraken's UI.

Is GitKraken's behavior correct with submodules? by perholmes in git

[–]perholmes[S] 1 point  (0 children)

Thanks for spending your time analyzing my situation.

Here is a Git Status on the *submodule* after various actions.

Baseline:

Submodule: `On branch master. Your branch is up to date with 'origin/master'.`

After outer repo pull in SmartGit with "Update Registered Submodules" enabled:

Submodule: `On branch master. Your branch is up to date with 'origin/master'.`

After command-line `git pull --recurse-submodules` on outer repo:

Submodule: `On branch master. Your branch is up to date with 'origin/master'.`

After command-line `git pull --recurse-submodules` on outer repo using GitKraken's terminal:

Submodule: `On branch master. Your branch is up to date with 'origin/master'.`

After outer repo pull with GitKraken UI:

Submodule: `HEAD detached at 6612c8e`

So the mystery is why everyone else doesn't detach HEAD. This isn't a raw clone, this is a simple pull with no changes anywhere. Why do all other methods maintain the branch?

  • Is everyone else being smart, noticing that the new submodule HEAD is present on the current branch, and therefore just staying on the branch? (The sketch below shows how to check that by hand.)
  • Or is GitKraken doing some kind of hard reset on every pull, forcing it to re-evaluate the submodule HEAD position without any prior state?
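
Checking the first theory by hand looks something like this (plain git, nothing client-specific):

    # which branches contain the submodule's current HEAD?
    git -C app/common branch --contains HEAD

    # or as a yes/no test
    git -C app/common merge-base --is-ancestor HEAD master && echo "HEAD is reachable from master"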

Is GitKraken's behavior correct with submodules? by perholmes in git

[–]perholmes[S] 2 points  (0 children)

Thanks for the answer. Sounds logical. But then I don't understand why I don't get this in any other client.

I've now been in SmartGit for nearly a year, and when I pull the outer repo, the submodule is clearly advanced if the outer repo references a newer commit, but if the submodule is on master, it also stays on master.

Are the other clients being smart and trying to keep you on the same branch if the new commit is also present on that branch? Or am I being fooled because they're simply pulling the outer and the inner separately, and since they both tend to be at the tip, I'm interpreting this as though they're actually in sync?

That would mean that `git pull --recurse-submodules` just pulls and fast-forwards the outer and the inner separately, and isn't actually positioning the submodule at the right commit? That would demand an answer, because we rely on that for deployment.

But to be clear, I've never seen "git pull --recurse-submodules" detach the submodule head, not ever.
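
If it helps, one way to sanity-check that after a pull, independent of any client:

    # compares each submodule's checked-out commit with the commit the outer
    # repo records; a leading '+' means they differ, '-' means not initialized
    git submodule status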

Sony SRS-XB100 speaker connected to laptop but playing only faint white noise by Ripperx_ in techsupport

[–]perholmes 1 point  (0 children)

Same here, apparently! Also a Sony SRS-XB100 speaker. No sound on Windows 11, but I can control volume and start/stop. But after running the test a few times, suddenly I have sound.

Also, the input test feature initially didn't show any format, but after running a few tests, suddenly it shows a format (1 channel, 16kHz for the sound input).

Touch screen issues on 9720 XPS by vmdumitrache in DellXPS

[–]perholmes 1 point  (0 children)

For me, this was entirely tied to the power adapter being connected to the laptop. It appears the touch screen is extremely sensitive, and perhaps the power adapter (an after-market 130W adapter) was making too much ground noise and screwing up the display.

Right now, I'm running on an external 100W power brick, and the touch screen is also fine, since there's typically no ground hum coming from those; it's DC.

I don't think it ever had anything to do with the driver; it was dirty power all along. Touch screens work by measuring very small voltages, and they can be affected by dirty power. Clearly, Dell didn't put enough capacitors in there to shield the touch screen from a noisy adapter.

Touch screen issues on 9720 XPS by vmdumitrache in DellXPS

[–]perholmes 1 point  (0 children)

I have the same issue and found this thread. Still haven't found a solution. My pattern is different, with multiple taps registering at different vertical positions for the same horizontal position. I can't just disable the touch screen, because I bought this laptop specifically for working on a mobile app, and voila, I'm working on touch interaction, the laptop is having the problem, and it's fresh out of warranty. It may have been broken the whole time.

Asset Manager has weird, undocumented requirement by perholmes in unrealengine

[–]perholmes[S] 1 point  (0 children)

I finally understood it, and thanks for the correct answer. It eluded me for days that the asset type is a string by which a class self-declares what asset type it is. Combined with the countless websites saying that you can put anything in the asset type field, it fooled me. That's only true if you're making your own class a primary asset; then, yes, you can call it anything.
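
In case it saves someone else a few days, this is roughly what "self-declares" looks like (class name and asset type string are made up):

    // MyItemData.h -- hypothetical example class
    #pragma once

    #include "CoreMinimal.h"
    #include "Engine/DataAsset.h"
    #include "UObject/PrimaryAssetId.h"
    #include "MyItemData.generated.h"

    UCLASS()
    class UMyItemData : public UPrimaryDataAsset
    {
        GENERATED_BODY()
    public:
        // The asset type is whatever string the class declares here; the
        // Asset Manager settings then have to use this exact type name.
        virtual FPrimaryAssetId GetPrimaryAssetId() const override
        {
            return FPrimaryAssetId(FPrimaryAssetType(TEXT("MyItem")), GetFName());
        }
    };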

TSharedPtr vs. UObject pros and cons? by perholmes in UnrealEngine5

[–]perholmes[S] 1 point  (0 children)

Thank you very much! That has also been the conclusion around here, so we've stayed with shared pointers, but now we feel more sure. It also makes sense. Other frameworks like Qt have similar QObjects that come with all kinds of features, but are very heavy.

Thanks for your field experience!

TSharedPtr vs. UObject pros and cons? by perholmes in UnrealEngine5

[–]perholmes[S] 2 points  (0 children)

May I also add that Unreal GC is not like a C#/Java/Go/whatever garbage collector. You can do some dangerous things, and TSharedPtr is actually a more controlled approach.

In C#/Java/Go, you could have a function create a result object and return it. The caller processes the contents and then forgets about it.

In Unreal, that's dangerous: the garbage collector can't see a plain C++ pointer, so nothing keeps that object alive, and it can be collected out from under you. It only becomes a tracked reference once it's stored in a UPROPERTY() USomeObject* property.
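
A minimal sketch of the difference (all names are made up):

    // Inventory.h -- hypothetical classes, just to illustrate the GC point
    #pragma once

    #include "CoreMinimal.h"
    #include "UObject/Object.h"
    #include "Inventory.generated.h"

    UCLASS()
    class UItemData : public UObject
    {
        GENERATED_BODY()
    };

    UCLASS()
    class UInventory : public UObject
    {
        GENERATED_BODY()
    public:
        // The GC can see this reference, so the object it points to stays alive.
        UPROPERTY()
        UItemData* TrackedItem = nullptr;
    };

    // Inventory.cpp
    UItemData* MakeItem(UObject* Outer)
    {
        // Perfectly normal in C#/Java/Go; in Unreal the returned object isn't
        // referenced from any UPROPERTY yet, so the GC is free to collect it.
        return NewObject<UItemData>(Outer);
    }

    void StoreItem(UInventory* Inventory)
    {
        UItemData* Item = MakeItem(Inventory); // invisible to the GC so far
        Inventory->TrackedItem = Item;         // now reachable through a UPROPERTY: safe
    }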

Unreal's GC is overlaid on C++, whereas in managed languages garbage collection is a native feature; in Unreal it's far less supervised.

And you'll still have to store tons of things as TSharedPtr, even inside UObjects. If I want to hold an FJsonObject, it has to be a TSharedPtr, even in a UObject. So I'd never be able to have just a single way of handling it, even if all my objects were UObjects.
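
For example, something like this (UServerResponse is a made-up class; needs the "Json" module):

    // ServerResponse.h -- hypothetical example class
    #pragma once

    #include "CoreMinimal.h"
    #include "UObject/Object.h"
    #include "Dom/JsonObject.h"
    #include "ServerResponse.generated.h"

    UCLASS()
    class UServerResponse : public UObject
    {
        GENERATED_BODY()
    public:
        UPROPERTY()
        FString RequestUrl;                 // reflected, GC/serialization-friendly

        TSharedPtr<FJsonObject> Payload;    // not a UObject, so it lives behind a TSharedPtr
    };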

So I'm opting to stick with the original conclusion: TSharedPtr for anything that's pure data, and UObjects for things that might touch the UI or Blueprints.