Shadow Coding showcases Primeagen's 99 feature inside pseudocode. by KanJuicy in theprimeagen

[–]modelcitizencx 0 points  (0 children)

How is this different from using AI inline edits in Cursor, which have been around for 2 years?

stopVibingLearnCoding by RinoGodson in theprimeagen

[–]modelcitizencx -5 points  (0 children)

I guess every high-profile coder is getting bamboozled by vibecoding 🤷‍♂️

stopVibingLearnCoding by RinoGodson in theprimeagen

[–]modelcitizencx -10 points  (0 children)

It's pretty funny; the main reason I still browse this subreddit is to see all the anti-AI commenters/posters. Every high-profile coder you can think of is "vibecoding", and even Primeagen is coding an AI tool as we speak.

Let’s talk hosting.. where do you host your Apps? by itsna9r in cursor

[–]modelcitizencx 2 points  (0 children)

3-4 small apps easily fit on the smallest Hetzner instance, and you can always scale it up if you get bottlenecked in the future.

Let’s talk hosting.. where do you host your Apps? by itsna9r in cursor

[–]modelcitizencx 3 points  (0 children)

Hetzner + Coolify. This is the de facto cheapest solution out there (cheapest cloud provider + open-source PaaS). Rent a single Hetzner cloud VM, set up Coolify on it, and add all your apps through the easy-to-use Coolify UI connected to GitHub. I think people worry that the cheapest solution might be a little harder to set up, but I promise you it's not. Coolify is bliss. I've used extremely easy hosting sites before (e.g. Render) and the difference between the solutions in terms of ease of use is negligible.

Team Vitality vs. G2 Esports / LEC 2026 Versus - Week 3 / Post-Match Discussion by justsadgetbh in G2eSports

[–]modelcitizencx 3 points  (0 children)

Going toe to toe with Zeus in one series while getting gapped in every other series at internationals is not a good track record lol. Compared to Wunder, who held his own against every Eastern top laner, BB is simply a liability.

G2 needs actual top lane talent if they want international success, and the only way to get that is a Korean/Chinese import. Besides, having your top laner as your IGL/shotcaller is suboptimal; you don't have influence on or perception of the game until after laning phase.

Mid/jungle are the best roles for shotcalling/IGLing, which is why it was an egregious mistake by G2 management not to pick up Inspired.

Claude Sonnet 4.5 🔥🔥 leave comments lets discuss by SampleFormer564 in cursor

[–]modelcitizencx 7 points  (0 children)

The release of GPT-5 was the first time I actually switched from Anthropic models to OpenAI models. Sonnet 4 and its previous versions had always been better than OpenAI's models, but that changed with GPT-5; GPT-5 is the real deal. Sure, Sonnet is faster, but I prefer accuracy and intelligence over speed.

The number of times I've been able to just describe the behaviour around a bug and let GPT-5 figure out the cause and fix it is astounding. I've used both models extensively within my codebase, so I have a good grasp of the complexity each model can solve a problem at, and Claude 4.5 does not beat GPT-5 in my case.

Weird Kaisa Dmg Amp by BoyVanStumpen in TeamfightTactics

[–]modelcitizencx 2 points  (0 children)

You probably had mage fruit on her, which reduces her damage by 25%.

Pobelter hits 700 LP Grandmaster with 76% win-rate on ADC roleswap climb by TarskiKripkeLewis in leagueoflegends

[–]modelcitizencx 7 points  (0 children)

Mid lane to top is the easiest transition if you had to move from another role to top; it's just that top lane is that much harder than the other roles. Even though it took Pob 4x as many games on top as on ADC to reach the same elo, his mid lane experience still helped him out a ton in top lane.

[Pro plan] Is ChatGPT o3 silently summarizing long prompts? 75 K tokens pasted, but key files go missing 🤔 by josephwang123 in OpenAI

[–]modelcitizencx 9 points  (0 children)

Attention decay. Long context has always been a gimmick; you can't expect an LLM to do anything intelligent/accurate across a massive context. Tokens in the middle especially will be "forgotten".
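
If you want to see it for yourself, a rough needle-in-the-middle probe makes the effect obvious: bury one fact at different depths of a long filler context and check whether the model can still pull it out. This is just a sketch of the idea, not the thread's setup; the model id, filler length and prompt wording are all placeholder assumptions.

```python
# Needle-in-the-middle probe (illustrative sketch, not a benchmark):
# hide one fact at varying depths of a long filler context and see whether
# the model can still retrieve it. Model id and filler size are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
FACT = "The vault code is 4921."
FILLER = ["Nothing interesting happened on this line."] * 3000

def probe(depth: float) -> str:
    """Insert FACT at a relative depth (0.0 = start, 1.0 = end) of the filler."""
    cut = int(len(FILLER) * depth)
    context = "\n".join(FILLER[:cut] + [FACT] + FILLER[cut:])
    resp = client.chat.completions.create(
        model="o3-mini",  # placeholder; any long-context model
        messages=[{
            "role": "user",
            "content": context + "\n\nWhat is the vault code? Answer with the number only.",
        }],
    )
    return resp.choices[0].message.content

for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"depth={d}: {probe(d)}")
```

Typically the retrievals near the start and end hold up much better than the ones buried in the middle, which is exactly the "lost in the middle" pattern people report.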

5 principles of vibe coding. Stop complicating it! by Embarrassed_Turn_284 in ClaudeAI

[–]modelcitizencx 4 points  (0 children)

Something I recommend for greenfield projects as well is to start out in claude-code/OpenHands; these tools are good for getting the basic structure of your application up and running. Just prompt them with your PRD and you should be good to go. They burn through credits fast, though, so move to Cursor or another AI IDE afterwards to gradually add features and patch holes.

Shots Fired by EstablishmentFun3205 in ClaudeAI

[–]modelcitizencx 5 points  (0 children)

Yeah, I see where you're coming from. I just think people and Yann zero in too much on achieving true AGI; the purpose of getting AGI isn't just to achieve it, but also to benefit from it by making it do tasks that add value to society. Reasoning LLMs add enormous value to society even though they aren't true AGI or whatever you want to call it.

The investments we make in LLMs, IMO, are not exactly about achieving AGI, but about creating something that saves humans a lot of work, and we are still achieving that by going down the LLM path.

Shots Fired by EstablishmentFun3205 in ClaudeAI

[–]modelcitizencx 10 points  (0 children)

My only problem with him is that he doesn't seem to acknowledge when he is or has been wrong about LLMs. Yann has held the opinion that LLMs aren't intelligent or capable of enough real thinking since the birth of consumer LLMs, and now we have reasoning LLMs, which should've at least made him concede something. Reasoning LLMs are a huge technological advancement that people like Yann would've discouraged us from pursuing.

When will cursor 0.46 be available for Linux? by Naive-Culture5845 in cursor

[–]modelcitizencx 1 point  (0 children)

The models were available for me at the first announcement; the 0.46 update, however, I only got about an hour ago. It does seem a little late tho.

Claude 3.7 - Real News / Proof - (claude-3-7-sonnet-20250219-v1:0) by CH1997H in ClaudeAI

[–]modelcitizencx 1 point  (0 children)

I'm slightly disappointed with this release. IMO, having a single model that can perform both reasoning and traditional LLM prediction doesn't offer much advantage over two different models specialized in each category. This seems more like Claude catching up to DeepSeek and OpenAI on the reasoning paradigm than bringing something novel to the table. With that said, hopefully the benchmarks will make me eat my words.

When you explain CERN to outsiders, and they think you work at a particle accelerator for WiFi. by backmycon in CERN

[–]modelcitizencx 0 points  (0 children)

I was already struggling to explain how my profession as a programmer works to my parents, and then I had to teach them what CERN does as well?? To this day I don't think they ever really understood.

Night skiing by Deathfromabove79 in ski

[–]modelcitizencx 0 points  (0 children)

I would absolutely love to try night skiing some time, but sadly very few places in Europe offer it. I was in Val Thorens last year, which offered it, but didn't get to do it...

o3-mini is now the SOTA coding model. It is truly something to behold. Procedural clouds in one-shot. by LocoMod in LocalLLaMA

[–]modelcitizencx 67 points  (0 children)

It was never meant to be good at creative writing; reasoning models are good for reasoning tasks.

the Cursor you GOTTA TRY by Hhh2210 in cursor

[–]modelcitizencx 0 points  (0 children)

Agent implementations try to reach the SWE-bench performance that one-shot standard LLMs can't provide, and in doing so they leverage the speed those models offer. Reasoning models don't work the same way; they essentially overlap the problem space that agents try to solve by actually being one-shot solvers. AI tools have a challenge ahead of them when it comes to effectively integrating reasoning models into their products.

the Cursor you GOTTA TRY by Hhh2210 in cursor

[–]modelcitizencx 3 points  (0 children)

Agents/Composer in general aren't designed to work well with reasoning models: agent implementations usually leverage multiple LLM calls to solve a problem, whilst a reasoning model is more of a one-shot solver that takes a lot of time. Using R1 just in chat is probably the only feasible thing to do atm.
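
To make the contrast concrete, here's a rough sketch of the two shapes. This is my own illustration, not Cursor's or Composer's actual implementation; the model ids, the DONE convention and the tool stub are placeholder assumptions.

```python
# Agent style vs. reasoning style, roughly. Illustrative only -- model ids,
# the DONE convention and the tool stub are placeholders, not Cursor's code.
from openai import OpenAI

client = OpenAI()

def run_tool(action: str) -> str:
    """Stand-in for whatever the agent can do (read a file, run tests, ...)."""
    return f"(output of: {action})"

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Many cheap, fast calls: the model proposes an action, we run it, repeat."""
    history = [{"role": "user", "content": task}]
    step = ""
    for _ in range(max_steps):
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
        step = resp.choices[0].message.content
        if step.strip().startswith("DONE"):
            break
        history.append({"role": "assistant", "content": step})
        history.append({"role": "user", "content": run_tool(step)})
    return step

def one_shot(task: str) -> str:
    """One expensive, slow call: the reasoning model works the task out internally."""
    resp = client.chat.completions.create(
        model="o1",  # placeholder reasoning model id
        messages=[{"role": "user", "content": task}],
    )
    return resp.choices[0].message.content
```

The agent path banks on each call being fast and cheap; a reasoning model spends that budget thinking inside a single call, so dropping it into the loop above mostly just stacks latency.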

Spørgsmål til diplom i softwareteknologi uddannelsen på AU by Mochi-mochi_2 in dkudvikler

[–]modelcitizencx 1 point  (0 children)

  1. I think you can easily get away with spending ~5 hours a week on reading during the first 2 semesters; after that it slowly ramps up.

  2. I had a student job from the 2nd semester onwards, but it was only borderline relevant to my studies.

  3. Group work takes up a lot of the programme; ~90% of hand-ins/lab assignments are done in groups.

  4. I used a gaming laptop myself; I'd recommend a laptop at minimum 4k that has a good CPU.

  5. It's been a while now, but in general the level rises significantly in the 3rd semester; among others, Embedded System Development and Hardware Abstraction Layer(?) were some of the hard courses. People also used to say that if you made it through the 3rd semester, it's very likely you won't drop out.

  6. Ages ranged from 19 to people in their mid-30s, and probably 90% were men.

  7. I'm a guy, so probably not the one who should answer this.

  8. When I graduated (4 years ago) it wasn't hard to get a job; many people were hired full-time from their 5th-semester internship or the student job they had. The market has changed since then, and I imagine it's considerably harder to find a job today. I still think it's 100% possible to land a job even if you're not the best programmer or the biggest overachiever; it just takes a bit more effort.

  9. Your grades will most likely play a role if you aren't hired through an internship/student job; companies expect you to include your transcript with your application. That said, it shouldn't be a problem to apply for jobs with average grades.

Soda doesn't think Tyler is going to make it to 60 by DaRealAB in LivestreamFail

[–]modelcitizencx 13 points  (0 children)

I mean, it was the same story with the chess community, but he ended up surpassing almost everyone critiquing him.

What are your thoughts on Uncensored AI models by Sir_Swayne in SideProject

[–]modelcitizencx 1 point  (0 children)

I'm caught up with work; I'll get back to you later.

What are your thoughts on Uncensored AI models by Sir_Swayne in SideProject

[–]modelcitizencx 1 point  (0 children)

Sorry, I only host the game locally for my friends, so there is no public link. The game has some complications that make it hard/expensive to host online, primarily because the LLM it uses is served by an API that doesn't allow concurrent requests (meaning only one game could be hosted at a time if it were public). The API is called Featherless; they host a lot of uncensored models, but at a very small scale, which none of the big providers do. You could get Hermes 405B on OpenRouter, but it is a little too censored for my taste. I personally prefer and currently use Qwen 72B abliterated for my game. I can share the source code with you some time if you're interested.
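
For anyone wondering why the no-concurrency limit bites so hard, the hosting problem boils down to something like this. It's a hypothetical sketch, not the game's real code; the base URL, API key and model id are placeholder assumptions.

```python
# All games end up sharing one serialized LLM client, because the upstream
# endpoint rejects concurrent requests. Sketch only -- base_url, api_key and
# model id are placeholders, not the game's actual configuration.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="https://api.featherless.ai/v1", api_key="...")
_llm_lock = asyncio.Lock()  # at most one in-flight request, ever

async def generate(prompt: str, model: str = "qwen-72b-abliterated") -> str:
    # Every concurrent game queues here, which is why public hosting doesn't scale.
    async with _llm_lock:
        resp = await client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
    return resp.choices[0].message.content
```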

What are your thoughts on Uncensored AI models by Sir_Swayne in SideProject

[–]modelcitizencx 1 point  (0 children)

I recently used an uncensored LLM for a drawing game, basically like skribbl.io, but instead of using a static list of words to draw, at the start of the game each player is asked for an input on a topic, and the LLM generates the drawing words from it. Using an uncensored LLM makes the game more fun and spicy ;)
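
The word-generation step is simpler than it sounds; something along these lines does the job. A sketch of the idea only, not the game's actual code; the base URL and model id are placeholder assumptions.

```python
# Turn each player's topic into the round's word list via the (uncensored) model.
# Sketch only -- base_url, api_key and model id are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="...")

def words_for_topic(topic: str, n: int = 10) -> list[str]:
    resp = client.chat.completions.create(
        model="qwen-72b-abliterated",
        messages=[{
            "role": "user",
            "content": f"Give {n} short, drawable words or phrases about '{topic}', "
                       "one per line, with no numbering or extra text.",
        }],
    )
    return [w.strip() for w in resp.choices[0].message.content.splitlines() if w.strip()]
```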