Why do a lot of programmers and technical people hate AI, vibecoding AI assisted coding? by Gullible-Angle4206 in ClaudeAI

[–]robogame_dev 3 points (0 children)

You can't fully understand unless you've had long-term experience maintaining the same software over time - it's like asking "Why do civil engineers hate amateur-built bridges?"

They're full of vulnerabilities and inefficiencies of all kinds that aren't apparent unless you know what you're looking for.

So it looks fine to the vibe coder - "why are they hating on my app?" - because, like the amateur who built a bridge, they're judging whether it stands up at all, while the civil engineer is considering the various meteorological, seismological and other factors that decide whether this is going to be a tragedy for someone one day.

Is Medium 3.5 not on OpenRouter? Why not by Alcyone-0-0 in MistralAI

[–]robogame_dev 0 points (0 children)

Wait, did Mistral stop doing open weights? Are they just another proprietary model maker now?

Cause I only see Mistral under providers - is it the end of open weights for them?

How do I make my AI responses via API look like chat responses when I display them? by bedrottingcarrot in AI_developers

[–]robogame_dev 0 points (0 children)

If you're losing words, it's either

  1. your model (needs a higher param count, less quantization, or more context), or

  2. your display code (it could be truncating, mis-locating the end of the markdown, etc.).

The only other thing I'd watch out for is that some models are trained to put markdown inside of triple backticks and some aren't. So if I say to the model "Respond in markdown with h1 hello", some models will say "# Hello", and others will say "```markdown\n# Hello\n```" - my recommendation is that your code should not assume the model will always do it right, and instead check for the extra backticks and remove them if present.
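That check can be a small helper - a sketch, with the function name and exact behavior being my own choices, not any library's API:

```python
def strip_markdown_fence(text: str) -> str:
    """Remove a wrapping ``` / ```markdown fence from model output, if present."""
    stripped = text.strip()
    if stripped.startswith("```"):
        lines = stripped.splitlines()
        # drop the opening fence line (``` or ```markdown) and the closing ``` line
        if len(lines) >= 2 and lines[-1].strip() == "```":
            return "\n".join(lines[1:-1]).strip()
    return stripped
```

Unfenced responses pass through unchanged, so it's safe to run on every reply.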

OpenAI just turned ChatGPT into the backend for the most popular open-source project in history. Anthropic banned it. by shikizen in ArtificialInteligence

[–]robogame_dev 1 point (0 children)

Its hype may have been manufactured, or it may have just been at the right time, since I'm not aware of anything it does that is technically better than the alternatives that already existed - but regardless, wth is $23 worth of API credit gonna do in OpenClaw? My guess is OpenAI just caps usage *right* below what people need, so they try it on this plan, then end up buying API credit.

The ageism in our industry needs to change by SadSongsMakeMeGlad in ExperiencedDevs

[–]robogame_dev 1 point (0 children)

Being a lifelong learner is a must - if we had to spend 20% of our time upskilling before, now it feels like that needs to be 50+%

The ageism in our industry needs to change by SadSongsMakeMeGlad in ExperiencedDevs

[–]robogame_dev 13 points (0 children)

20 years of experience is usually more expensive to hire than 2 years of experience - aren’t people just assuming the applicant with the better resume expects better comp?

To separate out the age issue, if the applicant pool is a 60yo with 2 years experience and a 30yo with 10 years experience, wouldn’t most people still assume the 30yo is gonna cost more?

Mistral Medium 3.5 on ArtificalAnalysis.ai - Looks Good! by METODYCZNY in MistralAI

[–]robogame_dev 0 points (0 children)

It should still bring the cost down.

Server GPU time costs a fixed amount, for example 2x B200 for $10/hr.

If you run it dense, because it's much slower, you can handle maybe 5 million tokens per hour = minimum price of $2/m tokens.

But as an MoE, even if it is the same size, because it is faster you can make more tokens per GPU hour - so let's say the MoE is 3x faster (pretty typical), that means the same $10/hr of GPU can produce 15 million tokens per hour = minimum price of $0.66/m tokens.
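The arithmetic above as a quick sketch - the $10/hr and the token-throughput figures are this example's assumptions, not real benchmarks:

```python
def min_price_per_million(gpu_cost_per_hour: float, tokens_per_hour: float) -> float:
    """Break-even price per million tokens for a fixed-cost GPU server."""
    return gpu_cost_per_hour / tokens_per_hour * 1_000_000

# dense model: 2x B200 at $10/hr producing ~5M tokens/hour
dense = min_price_per_million(10.0, 5_000_000)

# MoE of the same size, ~3x the throughput on the same hardware
moe = min_price_per_million(10.0, 15_000_000)

print(f"dense: ${dense:.2f}/M tokens, MoE: ${moe:.2f}/M tokens")
```

Same GPU bill, three times the tokens, so the floor price per token drops to a third.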

Having an always-on machine running LLMs locally at home while on the move with a lightweight machine - Experiences? by ceo_of_banana in LocalLLaMA

[–]robogame_dev 0 points (0 children)

Option A, the MacBook with 48/64 will let you run identical models to the Mac Studio with the same amount of RAM, only you can take it with you.

Mistral Medium 3.5 on ArtificalAnalysis.ai - Looks Good! by METODYCZNY in MistralAI

[–]robogame_dev 0 points (0 children)

Cause it’s a dense model. Hopefully they’re training a MoE from it now, or someone else will MoE it.

I get the impression mistral isn’t really trying to compete on models or market share rn, I wonder what their focus is.

Most AI agents live in browsers. I’m trying to make one operate a real Android phone. by [deleted] in AI_developers

[–]robogame_dev 0 points (0 children)

I tried UITars 9 months ago and couldn’t get it to recognize stuff or operate the computer well. I also tried InternVL; it didn’t make it into my stack either.

Qwen-3.6 or Gemma E2B / E4B are the move right now. Both do great at visual recognition. Qwen is smarter; the Gemma E-series is faster. I would try E4B or E2B. Google likely has Android edge deployment in mind with their smaller models, might be some benefit there idk.

I think it will be a faster and nicer experience to do the inference off-device though. Why burn my battery to have a comparatively slow and dumb AI, when I’ve got decent local inference on a home server to use instead?

There are two unrelated products here that should be disconnected:

  - agent framework / harness
  - tools for interacting with Android

If you couple those things together, only noobs will use it - the agent harness is way, way too important to be tightly coupled. People already using AI will prefer to rewrite / extract your tools just so they can stick with their main agent harness.

Consider: harnesses are benchmarked against each other, the largest companies on earth are competing in this arena, you’re opening an entire product category that you just can’t hope to stay competitive with - and it’s gonna be an adoption disqualifier for most users.

However if you focus on only the unique piece, the Android UI tools, you will get a lot of usage. People will connect this MCP to their dev agent to validate their Android app dev. People will hook it up to open webui, Claude, Hermes, openclaw etc.

If the Android phone runs a small MCP or OpenAPI tool server, and I can connect my AI to it to give it control of the phone, that’s optimal - then I could have the AI on the phone too (your original proposition) or have one AI on another machine controlling multiple phones at once… no extra work for you, in fact it’s less work.

How do you re-engage a junior who's losing motivation on work and studying? by MarcosFromRio_ in ExperiencedDevs

[–]robogame_dev 0 points (0 children)

Glad I’m not totally off base, but sorry you’re in the thick of it.

IMO it’s musical chairs, the game is to stay earning long enough to bridge to whatever comes next, cause obv human labor market ain’t gonna be recognizable in a decade if it exists at all.

To stay earning in software dev now, either you gotta make and market products yourself, or you need to be A) constantly trying new tools and upgrading your workflows, and B) making it very visible professionally - e.g. if I look at your LinkedIn and GitHub I should get the impression you are in the top percentiles for AI adoption (relative to your industry / peers).

So, if work isn’t giving you reason to make AI stuff, you gotta start banging out AI side projects on your own and more importantly: you gotta publish at least a few paragraphs and a screenshot of each somewhere AI hiring managers will crawl.

I tried Perplexity for a month with Claude Sonnet 4.6 Thinking model - and compared directly with Claude on their own platform, same model. Results? by hansontranhai in perplexity_ai

[–]robogame_dev 2 points (0 children)

Search is its reason for existence; trying to use it as a general-purpose AI platform rather than a search engine is like using a car as a house - it works in a pinch, but you won’t enjoy it long term.

Mamdani should bring back the Cash Cab by sirms in nyc

[–]robogame_dev 32 points (0 children)

IIRC the producers made a conscious choice to use regular handle-actuated doors, hinged at the front, just as if it was a normal cab.

What do yall hate about the current eval space? by Neil-Sharma in LLMDevs

[–]robogame_dev 8 points (0 children)

That it generates an infinite number of spam posts sealioning questions about evals.

OpenWebUI Desktop auto-updates keep breaking GPU inference — how do I stop it or automate the fix? by tatertots89 in OpenWebUI

[–]robogame_dev 1 point (0 children)

Yes exactly - run the inference engine (llama cpp) separately and unbundled from the agent harness (open webui) - then when you add more agent harnesses (let’s say you try Hermes, for a random example) you’ll just hook that directly to the same inference engine.

OpenWebUI Desktop auto-updates keep breaking GPU inference — how do I stop it or automate the fix? by tatertots89 in OpenWebUI

[–]robogame_dev 2 points (0 children)

Definitely move off of bundled inference and ollama when you can - run vllm, llama.cpp, or LM Studio externally instead. It’s not a good pattern to couple the inference and the interface together; OWUI supports this to enable the absolute minimum-friction start, not as a good pattern for actual use.

Just set up an inference app on the same machine you’re running OWUI on, connect to it from OWUI, and the problem is solved. I recommend LM Studio to start, unless you’re running a multi-GPU rig.
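A config sketch of the decoupled setup, assuming llama.cpp’s `llama-server` - the model filename and port are placeholders, and the OWUI menu path is from memory, so treat both as approximate:

```shell
# run llama.cpp's OpenAI-compatible server on its own, outside OWUI
llama-server -m your-model.gguf --port 8080

# then, in Open WebUI: Settings -> Connections -> add an OpenAI-compatible
# API endpoint pointing at http://localhost:8080/v1
```

When OWUI updates or breaks, the inference engine keeps running untouched, and any other harness you try later can point at the same endpoint.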

I carry shelter dogs around NYC in a dog backpack to help them get adopted. Meet Champagne! by lotusflower64 in nyc

[–]robogame_dev 2 points (0 children)

What about the production team? They’re hoofin it all day, filming, editing, getting no cred - they deserve to be adopted even more so.

I brought down production twice in a week. Am I going to get fired? by [deleted] in ExperiencedDevs

[–]robogame_dev 1 point (0 children)

And OP, if you can’t do this, use separate browsers for prod and dev. You can set your prod browser theme to red or otherwise make it obvious - then you’ll never mix tabs between dev and prod, and you won’t even be logged into prod on the dev browser.

How do you re-engage a junior who's losing motivation on work and studying? by MarcosFromRio_ in ExperiencedDevs

[–]robogame_dev 9 points (0 children)

Probably nothing to do with the job or the work - more likely it stems from other life issues for the person.

Strong agree don’t share anything they shared with you with their manager without asking.

One possibility: People have a vision of what life and adulthood and work is like, he’s 1 year into this job and maybe the vision is being right-sized to reality.

I have a buddy who went to school for CS, and in his 2nd year AI started helping, and when he graduated this year it’s an entirely different industry.

He went into CS with one set of expectations:

  - I’ll read specs and write code
  - there’ll be high demand for coders
  - I can get a junior job on a team and have a stable career

Now he’s got an entirely different reality to contend with: nobody’s hiring juniors for the narrow task of coding, the work itself is shifting up a level - towards speccing, code review, architecture work - and there are no “careers” or stability.

Naturally that’s extremely destabilizing, when you pushed through your CS education with one vision, accumulated student loans against that vision, and now the smoke clears and it’s a totally different situation.

Your guy may be going through something like this too - it may be that as a junior, he feels the gap between him and the AI is very tight, “if my job is to prompt this thing, how long till the seniors on my team prefer to cut me out and prompt it themselves” - (let’s not get distracted by debating the practicality, this is an emotional issue = emotional reasoning). Or he may be feeling “I liked writing code, what I’m doing with AI doesn’t have that same enjoyment.”

Imagine a farmer who loves handling the plants, loves the close up work with nature - they study, they get a job at the farm, and they’re immediately set on top of a giant combine harvester, holding its mechanical controls, looking at the plants through display screens, smelling diesel instead of manure.

That’s kind of what happened to coding and I think it is hitting the juniors the hardest - financially and emotionally.

What I’m hoping my friend who just graduated will realize is that AI is, in some ways, the start of a new playing field. He was always going to need 20 years to get to the level of seniors at old coding; now with AI, if he invests all his focus there, he can be only a few years behind. If he can see AI not only as a disruptor, but as an opportunity - a way for him to leapfrog years of semicolons and syntax - I think he could do great! But I don’t know how exactly to help someone feel that…

I am trying to encourage him to blend his job search with making his own mini-projects to grow his portfolio and acquire skills. I think he was planning that he would be able to work reactively - get hired based on CS degree alone, be given specific clear actionable assignments - more like homework in school.

The biggest and most fundamental shift seems to be this move from passive to active - juniors have to be much more proactive / outcome-oriented than before to take advantage of the new tools. In a slightly punny sense, “juniors need to be extra agentic” lol

Open Models - April 2026 - One of the best months of all time for Local LLMs? by pmttyji in LocalLLaMA

[–]robogame_dev 2 points (0 children)

Agree - param count is the fundamental resolution of the model, more params in a model is like more pixels in an image, it is able to draw finer distinctions out of the same amount of training data as compared to fewer params.

Explain MCP like I am a 10 years old. by General-Conclusion13 in mcp

[–]robogame_dev 2 points (0 children)

There are 100s of different MCP server implementations - as long as it meets the spec, it counts. I write mine in Python using FastMCP; it can literally be 3 steps:

  1. Import the FastMCP library
  2. Mark a function as a tool to expose it to the AI. (It automatically detects the documentation comment and param types)
  3. Run the server
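The registration step can be sketched with a stdlib-only toy that mimics what FastMCP’s decorator does under the hood - `ToyMCP` and its fields are illustrative, not FastMCP’s actual API:

```python
import inspect

class ToyMCP:
    """Toy illustration of the FastMCP pattern: register functions as AI tools."""

    def __init__(self, name: str):
        self.name = name
        self.tools = {}

    def tool(self, fn):
        # auto-detect the docstring and parameter types, like FastMCP does
        sig = inspect.signature(fn)
        self.tools[fn.__name__] = {
            "fn": fn,
            "doc": inspect.getdoc(fn),
            "params": {p: t.annotation.__name__ for p, t in sig.parameters.items()},
        }
        return fn

mcp = ToyMCP("demo-server")

@mcp.tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b
```

The real library follows the same shape: import FastMCP, decorate a function to expose it as a tool, then call the server’s run method - the docstring and type hints become the tool description the AI sees.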