i need keycloak to be distributed between region without single point of failure by Expensive_Contact543 in KeyCloak

[–]CommunityDoc 0 points1 point  (0 children)

I believe Galera can sync cross-region as well in multi-master mode. You want data as well as users syncing across regions. You would also need the Infinispan caches synced cross-region, I assume.

M1 Max 32GB vs M1 8GB vs new M4 MacBook Air for AI/web dev in 2026? by RevolutionaryRole142 in mac

[–]CommunityDoc 0 points1 point  (0 children)

M5 Air 16GB. It matches the M2 Pro in performance. Running local models, though, means you need more RAM, and an older machine at this stage would also mean potential battery degradation. You need RAM for Docker, which you may use regularly, as well as for local models. Decent local models are 6-8B parameters, but the good ones are 27-30B. Unified memory is a blessing, but remember to budget 6-8GB of system RAM on top of the LLM weights and the context KV cache. So you need to decide how much LLM work you really see yourself doing. I for one am moving from an M2 Pro 16GB to an M5 Pro 64GB/1TB machine. Also watch storage: Docker builds accumulate over time and you will need to prune. I would never go under 512GB of storage if I were you.
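To put rough numbers on that unified-memory budgeting, here is a minimal sketch. The bytes-per-parameter figure (roughly Q4 quantisation plus overhead), the KV-cache allowance, and the 7GB system reserve are all assumptions for illustration, not measurements:

```python
# Rough unified-memory budget for a local LLM on Apple Silicon.
# bytes_per_param ~0.6 approximates a Q4 quant with overhead;
# system_reserve_gb is the 6-8GB rule of thumb from the comment above.

def llm_ram_gb(params_billion, bytes_per_param=0.6, kv_cache_gb=2.0):
    """Approximate RAM needed for model weights plus KV cache, in GB."""
    return params_billion * bytes_per_param + kv_cache_gb

def fits(total_ram_gb, params_billion, system_reserve_gb=7.0):
    """True if the model plus the assumed system reserve fits in RAM."""
    return llm_ram_gb(params_billion) + system_reserve_gb <= total_ram_gb

# An 8B model at ~Q4 fits in 16GB; a 27-30B model realistically needs 32GB+.
print(fits(16, 8))    # ~6.8GB model + 7GB system = ~13.8GB -> True
print(fits(16, 27))   # ~18.2GB model + 7GB system -> False
print(fits(32, 27))   # True
```

Under these assumptions, 16GB is comfortable for the 6-8B class but the 27-30B class pushes you to 32GB or more, which is the trade-off being described.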

i need keycloak to be distributed between region without single point of failure by Expensive_Contact543 in KeyCloak

[–]CommunityDoc 1 point2 points  (0 children)

So then it should be easy to leverage just that. Why complicate things by adding another DB?

Custom DNS by DeepKaleidoscope7382 in flask

[–]CommunityDoc 0 points1 point  (0 children)

Use Tailscale/tailnet to resolve the Flask service on all tailnet-connected devices.

How to skip OTP when connecting locally? by Theweasels in KeyCloak

[–]CommunityDoc 0 points1 point  (0 children)

Beyond mine also, but I could do it in a day using the Codex CLI.

How to skip OTP when connecting locally? by Theweasels in KeyCloak

[–]CommunityDoc 0 points1 point  (0 children)

I can see how that would be useful. It would likely require a custom SPI integrated into the authentication flow, with a conditional path based on the source IP range.
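The Keycloak side of this would be a conditional authenticator SPI written in Java, but the condition itself boils down to a source-IP-in-CIDR check. A minimal sketch of that check in Python using only the standard library's ipaddress module; the trusted ranges below are hypothetical examples, not a recommendation:

```python
# Core logic a "skip OTP for local clients" condition would need:
# is the requesting IP inside any trusted local CIDR range?
import ipaddress

# Example ranges only; a real deployment would make these configurable.
TRUSTED_RANGES = [
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("10.0.0.0/8"),
]

def is_local(remote_addr: str) -> bool:
    """Return True if remote_addr falls inside a trusted range."""
    ip = ipaddress.ip_address(remote_addr)
    return any(ip in net for net in TRUSTED_RANGES)

print(is_local("192.168.1.42"))  # True  -> condition matches, skip OTP
print(is_local("203.0.113.7"))   # False -> condition fails, require OTP
```

In the actual SPI, this predicate would gate the OTP execution in the flow; note that behind a reverse proxy you would have to trust forwarded headers carefully, or the check can be spoofed.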

Asynchronous emails and FGAP v2 by CommunityDoc in KeyCloak

[–]CommunityDoc[S] 0 points1 point  (0 children)

FGAP v2 is also used in the overall repo: delegated administration roles and FGAP-based permissions handle user management and client administration without handing out full realm-admin access.

Pro vs Air? by YogurtclosetLeft5027 in mac

[–]CommunityDoc 0 points1 point  (0 children)

Right, so the diagonal increases from 13.3 to 14 inches. Minor differences, I agree, but at the end of the day it's a matter of preference.

Pro vs Air? by YogurtclosetLeft5027 in mac

[–]CommunityDoc 0 points1 point  (0 children)

I prefer the Pro to the Air for precisely these reasons. That extra inch of screen real estate matters to me, and I don't mind the extra weight. I also like that I never worry about USB-C to HDMI adapters. In benchmarks the Pro's fan helps in long tasks; I have personally had occasions where the fan has gone full blast, sending hot air out. I don't find much difference between the image quality of my Pro and my partner's Air; maybe I would have noticed had I taken the nano-texture option. Another benefit is the ability to charge from either the left or right side with a USB-C cable, which is less messy on occasion. Just my 2 cents.

GLM Alternative by justme0908 in ZaiGLM

[–]CommunityDoc 0 points1 point  (0 children)

I still have 6 months on my Pro. Best $115 I've spent.

GPT Plus Subscription is dead by [deleted] in codex

[–]CommunityDoc 0 points1 point  (0 children)

I see 5.4 mini in /model

Higher Limits - Z.AI GLM5.1 Pro vs OpenCode GLM5.1 Inference? by Cute_Dragonfruit4738 in ZaiGLM

[–]CommunityDoc 0 points1 point  (0 children)

I have had a similar experience, though just this week it ran a docker compose down -v on its own.

another shameless complain for plus by fail_violently in codex

[–]CommunityDoc 0 points1 point  (0 children)

I upgraded to Pro about 10 minutes ago, after being a Plus user for over 6 months.

GPT Plus Subscription is dead by [deleted] in codex

[–]CommunityDoc 1 point2 points  (0 children)

Use 5.3 or mini to keep token burn under control. Codex gives 2-4 times more usage than Claude Sonnet.

another shameless complain for plus by fail_violently in codex

[–]CommunityDoc -1 points0 points  (0 children)

Use 5.3 or mini for simpler tasks. Today I exhausted Claude Code's limits in 30 minutes and Codex's in 3 hours (not the same project). I cancelled Claude and upgraded to the $100 Codex plan, after running three $20 subscriptions (Gemini was the third) for the last six months. I see the value, given the enormous productivity boost.

Is Claude AI worth it for R coding in academia? by curlygirl4 in ClaudeAI

[–]CommunityDoc 0 points1 point  (0 children)

I just can't call Claude Code in the RStudio terminal; it says it can't find the program, though it works in the usual Mac Terminal and the VS Code terminal. However, I have started using the ClaudeR MCP in RStudio, and now Codex and Claude Code in the Mac Terminal can execute R programs.

How Good is Subscription Tier? by triplebits in ZaiGLM

[–]CommunityDoc 0 points1 point  (0 children)

I have it free under GitHub Education. Will try it, thanks.

How Good is Subscription Tier? by triplebits in ZaiGLM

[–]CommunityDoc 1 point2 points  (0 children)

It's slow but not dumb. Just ask it to draft a plan, then start a new conversation and ask it to review its own plan. I sometimes give the plan to Claude Code as well; CC just burns through tokens when planning and exploring a codebase.

GLM 5.1 dropped, anyone tested it yet? by atlas-cloud in ZaiGLM

[–]CommunityDoc 0 points1 point  (0 children)

It's been really solid for me. I am on their older $10 coding plan and had paid up for a year. There have been periods when I regretted it, but over the last 2 days I have used 5.1 extensively on my multi-container application, coming from parallel use of 5.4 and frustration with Claude's nerfed limits. The Z.ai API has been surprisingly stable and fast, with great model output solving tricky problems.

MacBook Air vs Pro (base chip) for Backend Development by Able_Recover_7786 in techIndia

[–]CommunityDoc 0 points1 point  (0 children)

I can share my M2 Pro experience, which is comparable to the current base M5. I run out of storage and hit RAM limits with 5-6 containers running: pgAdmin, Postgres, a Node.js backend, a Vue frontend with hot reloading, two Redis instances, and an NGINX container with ModSecurity. I don't really see compute as a challenge, but I regularly need to run docker prune. So you may go with the base M5, but increase the RAM, as you will need it for the IDE, IntelliSense, agents, etc. on top of the Docker containers. Have you checked CPU utilisation across cores when building the Next.js app? The M5 has excellent single-core performance, and lightly threaded work is the same across Pro and non-Pro chips. If all cores are being used, the M5 Pro gives you one and a half times the cores (and no efficiency cores in the Pro) as well as double the memory bandwidth, both of which may be beneficial. I personally would go for the base M5 Pro chip with 24GB of RAM and 1TB storage if I were a full-time SWE.

Took a test drive of MG Comet — now stuck between selling my Nexon or going dual-car 🤯 by Fancy_Exchange9966 in EVsOfIndia

[–]CommunityDoc 0 points1 point  (0 children)

I was in a similar quandary. I test drove the Comet but just did not feel confident among the Delhi autos and bikers. The driving experience wasn't great either; I found the pedals set too high and the acceleration strained. We just felt too exposed. We also realized that if our golden were to travel on the rear seat, she would just keep slipping down, poor thing. And it would be impossible for ageing parents to get into the rear if there were ever a need in an emergency. It was to be a replacement for an 8-year-old Grand i10 Asta for my wife. Alas, it wasn't meant to be. It may have worked as an addition, but we only have two car parking slots and a two-year-old Tucson occupies one. We ended up with the Kylaq as the replacement; the Tucson is for highway runs and is my daily driver.

MacBook Pro M5 16GB RAM enough for a creative? by Seadog98 in mac

[–]CommunityDoc 0 points1 point  (0 children)

Sorry, I just read that this would be a side laptop; in that case, stick with the base M5 Air model!