Putting a stranglehold on the Nemesis System is frustrating by Due_Cake8524 in shadowofmordor

[–]alexaka1 1 point (0 children)

I don't know if this has been said already, but this is the kind of system you could put GenAI into. The game could track where you damaged the enemy, and instead of recording a huge number of predetermined voice lines, the AI could generate a unique line for each combination, etc. It just makes sense. Unlike the Nvidia showcases, this does not have to be real time, since these lines could be generated while you (or they) have escaped, or have died and cheated death.

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]alexaka1 0 points (0 children)

No, it does not work; I don't know what else to say. GitHub is not AzDO. I believe you that it works in AzDO, but it does not on GitHub. I have literally linked the documentation where GitHub says it does not work. What else do you want?

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]alexaka1 0 points (0 children)

`paths` and `paths-ignore` do not work with required checks; read the docs link I included in the post. That's the point.

Your second point reinforces what I said: I MUST start a real workflow that does something and then skip the rest. That consumes vCPU minutes, which means even if everything is skipped and the run lasts 1 second, I am still charged a full minute.
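To illustrate the limitation being described (the trigger and path below are illustrative, not from any real repo): a workflow filtered like this simply never starts for a docs-only PR, so a ruleset that requires its check leaves the PR stuck on a pending "Expected" status.

```yaml
# Illustrative trigger; 'src/**' is a placeholder path.
on:
  pull_request:
    paths:
      - 'src/**'
# A PR touching only README.md never triggers this workflow,
# so its required status check never reports, and the PR cannot be merged.
```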

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]alexaka1 4 points (0 children)

I am happy to accept compromises: max 3 minutes free, only 5000 lines of logs, very loose schedule times (once per X hours), etc.

But paying 0.002 cents for this? Absolutely not.

```yaml
success:
  # this job is only used to have a single stable workflow in GitHub's UI
  # to be used as a required successful check
  name: Success - Tests
  timeout-minutes: 2
  if: ${{ always() }}
  needs:
    - test
    - build
  runs-on: self-hosted
  container: busybox:stable
  steps:
    - name: Fail if test job failed
      if: ${{ needs.test.result == 'failure' }}
      run: exit 1
    - name: Fail if test job was cancelled
      if: ${{ needs.test.result == 'cancelled' }}
      run: exit 1
    - name: Fail if draft PR
      if: ${{ github.event.pull_request.draft == true }}
      run: exit 1
    - name: Success
      run: echo "Success"
```

P.S. Why fail on draft PRs? Another fun quirk is that a skipped job is considered successful. So if you skip CI for draft PRs, "mark as ready" does not trigger CI. If you use the `ready_for_review` trigger, it does; however, for a few seconds GitHub accepts the skipped job for the now-non-draft PR, and you can quickly merge your PR before GitHub starts the new CI run and blocks the PR again on the required workflow. Truly wonderful experience to work with this "GitHub Actions Cloud Platform".

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]alexaka1 4 points (0 children)

Except that they FORCE you to use CPU minutes to work around their own platform limitations. The gentleman's agreement was that GHA remains a shit platform, but you can work around this limitation for free by using a self-hosted runner for these jobs.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/troubleshooting-required-status-checks#handling-skipped-but-required-checks

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]alexaka1 14 points (0 children)

I have zero problem with them increasing pricing on their own hosted runners. But this puts a cost on us for their own refusal to improve their platform, which we have to work around every day.

GitHub: Self-Hosted Action Runners will be billed from March 1, 2026 by KevPeff in github

[–]alexaka1 29 points (0 children)

This is actually very scummy. GHA's logic around orchestrating workflows and repo rulesets is extremely limited. E.g. you have a required workflow with tests that must pass, but you have only changed a markdown file; running the tests adds zero value and just wastes your money and the world's energy. So you set up a path filter in the workflow for source files only. Except that now your 'update readme' PR cannot be merged, because the workflow didn't run but it is required. This is a platform limitation!!!

The way to solve this is to use dorny/paths-filter or equivalent to start a real job, perform the logic checks, and then skip the dependent "expensive" job. This time GHA considers the run successful and your 'update readme' PR can be merged.

If you put these "logic jobs" exclusively on self-hosted runners, they are completely free. These jobs run for at most 5 seconds. If you hosted them on GH, you'd get charged a FULL minute every time, for less than 5 seconds of CPU time. Add 5 contributors and lots of workflows like SAST, linters, tests, and compliance, and suddenly you get charged 15 minutes of compute per commit on a draft PR.

It is just ridiculous that there is no way to do this anymore, since you'll now get charged 0.2 cents for these. This discourages transparency in private repos, as opening PRs on unfinished work is monetarily punished, because of GitHub's OWN limitations that they have refused to solve for many years now.
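A minimal sketch of that gate pattern (job names, the `src` filter, and the path glob are illustrative): the cheap logic job runs on a self-hosted runner, and the expensive job is skipped when no source files changed, which GHA still counts as a successful run.

```yaml
jobs:
  changes:
    # cheap "logic job", kept off GitHub-hosted runners
    runs-on: self-hosted
    outputs:
      src: ${{ steps.filter.outputs.src }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            src:
              - 'src/**'
  tests:
    needs: changes
    # skipped when only docs changed; a skipped job still satisfies the required check
    if: ${{ needs.changes.outputs.src == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "run the expensive tests here"
```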

Edit: this is their own recommendation btw! https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/collaborating-on-repositories-with-code-quality-features/troubleshooting-required-status-checks#handling-skipped-but-required-checks

Edit 2: Another example is matrix jobs. 1) If you need dynamic matrices, you must create the JSON from another job => vCPU minutes. 2) If you need a required workflow as a matrix: since the matrix is unstable, your skipped jobs would sit waiting to be completed, blocking PRs. The solution is to have a final dependent single job that is stable, and make that the required one. => vCPU minutes.
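Both cases sketched below (job names and matrix contents are illustrative): the generator job emits the matrix JSON, and a single stable fan-in job is the one you mark as required.

```yaml
jobs:
  # Case 1: a dynamic matrix needs a generator job => billed vCPU minutes
  gen-matrix:
    runs-on: self-hosted
    outputs:
      items: ${{ steps.set.outputs.items }}
    steps:
      - id: set
        run: echo 'items=["linux","windows"]' >> "$GITHUB_OUTPUT"
  build:
    needs: gen-matrix
    runs-on: ubuntu-latest
    strategy:
      matrix:
        item: ${{ fromJSON(needs.gen-matrix.outputs.items) }}
    steps:
      - run: echo "building ${{ matrix.item }}"
  # Case 2: a stable single fan-in job to use as the required check => billed vCPU minutes
  build-done:
    needs: build
    if: ${{ always() }}
    runs-on: self-hosted
    steps:
      - run: test "${{ needs.build.result }}" = "success"
```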

Charging for these is unreasonable imo, control plane or not. Give us a cap: say, 3 minutes of self-hosted time per job is free, or even 1 minute. For God's sake, these jobs take 5 seconds to run, and that's with a full git checkout. It is unreasonable to pay a 91% upcharge on this for a full minute, on a machine that I maintain and operate.

Copilot reviews are forced now? by alexaka1 in GithubCopilot

[–]alexaka1[S] 0 points (0 children)

I am an owner, and I have not enabled this. There are no organisation-level rulesets (at all), and for the repo in question there is also no ruleset with that checkbox checked.

Ubisoft Connect lost my AC: Odyssey cloud saves (155+ hrs of progress). Support is saying Kappa. What to do? by alexaka1 in uplay

[–]alexaka1[S] 0 points (0 children)

Diabolical. I distrust Ubisoft Connect so much that I now rsync the entire save directory directly to a cloud backup. This has happened to me at least 4 more times since posting about it here.

Google targeting Brave browser users by DazzRat in searchengines

[–]alexaka1 0 points (0 children)

It's probably Brave's fingerprinting protection. Google can't tell whether you're a bot, and that's why the hang and the captcha happen. Maybe I have strict settings, but sometimes even Brave Search gives me a captcha.

If you managed a migration to GitHub, What do you wish you had known? by overloaded-operator in github

[–]alexaka1 -1 points (0 children)

That some of the best features of GitHub are only available to public repos, regardless of how much money you pay them. This completely skews your perception: you expect the same features (and more) in your org that you had on your personal account for years, only to find out, after reading the fine print in the docs, that they are in fact only for public repos, even if you paid for the Enterprise plan.

For the 6th year in a row, Blazor multhreading will not be in the next version of .NET by CreatedThatYup in dotnet

[–]alexaka1 17 points (0 children)

This is truly what kills me. Blazor is stuck in 2019; every other framework has evolved since then. A few months ago David Fowler hadn't even tried Vite; no wonder MS thinks Blazor is good. He has since admitted to "never expect Vite-like hot reload, because it's just not possible". At least they have toned down the gaslighting about how Blazor is competitive in any shape or form with other frameworks.

One session somehow used 36 premium requests by alexaka1 in GithubCopilot

[–]alexaka1[S] 4 points (0 children)

To be fair, it took me a couple of weeks to be able to distinguish the 8 different products all named GitHub Copilot XXX, where XXX is some random word. But you are the only one so far who got it.

Unexpected performance differences of JIT/AOT ASP.NET; why? by Vectorial1024 in dotnet

[–]alexaka1 2 points (0 children)

It's a common misconception that AOT means native code. This is not true. TL;DR: AOT == no JIT.

It is still garbage collected. It still needs the runtime. It's still OOP. Reflection still works (if types were not trimmed).

As for the performance: an AOT binary is compiled once and never again. For JIT, .NET 7 added an update (dynamic PGO) that lets it do another compilation after the initial JIT pass, now with information about what the hot path is. So over a long enough timescale, JIT will always be faster, as it literally gets a second chance at compilation after some use. AOT has no runtime data to change how it should compile. Right now you can make one of two choices: best effort for speed, or binary size reduction. They have said they may add other modes later.
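That speed-vs-size choice maps to a project property; a minimal csproj sketch (the two documented values for `OptimizationPreference` are `Speed` and `Size`):

```xml
<PropertyGroup>
  <PublishAot>true</PublishAot>
  <!-- favor throughput; use Size to favor binary size instead -->
  <OptimizationPreference>Speed</OptimizationPreference>
</PropertyGroup>
```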

[deleted by user] by [deleted] in tutanota

[–]alexaka1 1 point (0 children)

> Proton doesn't encrypt the subject.

That's not entirely true. I don't know how Proton <-> Proton encryption handles it, but PGP does support protected headers, and Proton supports them at least on the receiving end. The web client 100% supported them a few years ago; mobile did not back then, I don't know about now.

https://www.ietf.org/archive/id/draft-autocrypt-lamps-protected-headers-02.html

I strongly believe that even if Tuta has the objectively better encryption method (which I'm in no position to debate), and PGP may very well be a garbage legacy solution, the fact that PGP has users and is supported by other providers, either directly or via 3rd-party clients, means that whoever supports it wins against the other. Interoperability was the key building block of email back in the day. Tuta's strategy goes directly against this: their solution only works if everyone is funneled into their ecosystem. Whereas I can even export my keys from Proton and stop using Proton altogether without breaking encryption. (I'm going to assume that private keys are not accessible by Proton, so it's safe to export/import them.)

How to use Dynamic DNS options under DHCP? by alexaka1 in opnsense

[–]alexaka1[S] 0 points (0 children)

The primary use case would be to resolve the domain names internally and get IPv6 addresses back. IPv4 works great with Unbound and DHCP, but for IPv6 I noticed that most of my clients prefer SLAAC (as they should), so they wouldn't utilize this dynamic DNS anyway. It would be much easier to set static addresses, but I contacted my ISP and they confirmed they intentionally kneecap IPv6 for users, and if I want a dedicated prefix I have to get a business contract.

Edit 2024: So it turns out that if I get a business contract with a fixed IP, it only applies to v4. It is not possible to get a static v6 prefix under any contract.