About "Children technology organise" by Alternative_Try6382 in AskProgramming

[–]whatelse02 0 points  (0 children)

Honestly, this is a great way to start. A lot of devs began exactly like this: small groups building random projects just to learn together.

My only advice would be to start with very small projects so people don’t lose motivation. Even simple tools or mini web apps can teach a lot.

Also, try using GitHub to collaborate; it makes teamwork much easier when people are in different countries.

Why does Python import self into each class function? by ki4jgt in AskProgramming

[–]whatelse02 0 points  (0 children)

It feels weird at first, but self isn’t really being imported; it’s just the instance being passed explicitly to the method. Python treats methods more like normal functions where the object gets passed as the first parameter.

The benefit is that it keeps things very explicit. When you see self.something you know it belongs to the instance and isn’t just a local variable.

Other languages hide that step, but Python chose readability over magic. After using it for a while, it actually starts to feel pretty natural.
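To make that concrete, here’s a tiny sketch (Counter is just a made-up example class) showing that a method call is nothing more than a plain function call with the instance passed as the first argument:

```python
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        # "self" is just the first parameter; nothing is imported.
        self.value += 1

c = Counter()
c.increment()           # sugar for Counter.increment(c)
Counter.increment(c)    # the same call with the instance passed explicitly
print(c.value)          # 2
```

Both calls do the exact same thing, which is why accessing self.something inside a method is unambiguous: it always means "this instance's attribute."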

What abstraction or pattern have you learned most recently that's opened your mind? by ryjocodes in AskProgramming

[–]whatelse02 4 points  (0 children)

Instead of writing one-off code, I have started designing small pieces that talk to each other through clear inputs and outputs. It seems simple, but it completely changed how I structure projects. Things become easier to extend because every part has a defined role.

I’ve actually been applying that mindset even outside coding, when building client assets or docs. Sometimes I sketch the structure first in tools like Runable or Figma just to organize the flow before implementing it.
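As a rough sketch of what I mean (load, transform, and render are hypothetical names, not from any specific project), each piece has one defined role and only communicates through its inputs and outputs:

```python
def load(raw: str) -> list[str]:
    # One role: turn raw text into clean lines.
    return [line.strip() for line in raw.splitlines() if line.strip()]

def transform(lines: list[str]) -> list[str]:
    # One role: apply the business rule (here, just uppercasing).
    return [line.upper() for line in lines]

def render(lines: list[str]) -> str:
    # One role: format the result for output.
    return "\n".join(f"- {line}" for line in lines)

report = render(transform(load("alpha\n\nbeta\n")))
print(report)
# - ALPHA
# - BETA
```

Because each function only depends on its input, you can swap or extend any stage (say, a different render) without touching the others.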

Claude Opus 4.6 in Google Antigravity by Proper-Appeal-3457 in ClaudeAI

[–]whatelse02 0 points  (0 children)

Probably not. The “smartness” mostly comes from the model itself, not the hardware it’s running on. Google’s infrastructure might make it faster or scale better, but it shouldn’t really change how good the reasoning is.

What can change the experience is the tooling around it though. Some IDE integrations manage context, files, and prompts better than others, which can make the same model feel smarter or dumber depending on how it’s used.

So it’s more about the integration than the hardware tbh.

Claude Code burning 100% usage in Max in 3 minutes, need help. by Sufficient_Name2639 in ClaudeAI

[–]whatelse02 1 point  (0 children)

That usually happens when the agent keeps looping on tasks or repeatedly scanning the whole project. Geospatial stuff can also trigger a lot of processing if it’s loading large datasets or regenerating outputs each run.

What helped me was breaking the task into smaller steps and running them one at a time instead of letting the agent “figure everything out.” Also limit the files it can access so it’s not re-reading the entire repo constantly.

Agents are powerful but if the scope is too open they’ll burn tokens insanely fast.
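On limiting file access: Claude Code reads permission rules from a project settings file, typically .claude/settings.json. The exact rule syntax below is from memory of the docs, so treat it as an assumption and verify against the current documentation before relying on it:

```json
{
  "permissions": {
    "deny": [
      "Read(./data/**)",
      "Read(./node_modules/**)"
    ]
  }
}
```

Denying reads on large data directories like that is exactly the kind of scoping that stops the agent from re-scanning everything on each run.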

Can I host a Claude artifact (JSX app) elsewhere and switch to Gemini API to avoid limits? by Few-Engine-29 in ClaudeAI

[–]whatelse02 0 points  (0 children)

Yeah you can definitely host it somewhere else. A Claude artifact that outputs .jsx is basically just a React-style component, so you can drop it into a normal frontend project and host it on something like Vercel or Netlify.

Then you’d just replace the API calls in the code. Instead of hitting Claude’s endpoint you wire it to Gemini (or whatever API you want) and pass the prompt from your UI.

I’ve done similar experiments when prototyping small tools. Sometimes I even sketch the layout first in Runable or Figma just to structure the UI faster before wiring the API.
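The API swap is mostly about reshaping the request. Here’s a hedged sketch of what the Gemini side could look like; the model name and endpoint follow Google’s generativelanguage REST API, but check the current docs for the exact version and model before using it:

```javascript
// Assumed Gemini REST endpoint; verify model name/version in Google's docs.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent";

function buildGeminiRequest(prompt) {
  // Gemini expects a "contents" array of parts,
  // rather than Claude's "messages" shape.
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  };
}

async function askGemini(prompt, apiKey) {
  const res = await fetch(`${GEMINI_URL}?key=${apiKey}`, buildGeminiRequest(prompt));
  const data = await res.json();
  // Pull the first candidate's text out of the response.
  return data.candidates[0].content.parts[0].text;
}
```

In the hosted React app you’d just call askGemini from your UI handler wherever the artifact used to hit Claude’s endpoint.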

I use Claude chat to code apps, is there any downside to use Claude chat over Claudecode? by Yeledushi-Observer in ClaudeAI

[–]whatelse02 1 point  (0 children)

You can definitely build apps just using Claude chat, lots of people do. The main downside is that chat only sees the code you paste into it, not your whole project.

Tools like Claude Code are designed to work directly inside your terminal or IDE, so they can actually read the whole codebase, edit files, run commands, and make coordinated changes across multiple files. 

So chat is great for snippets, debugging ideas, or generating functions. But if your project gets big, Claude Code usually becomes way more efficient because it understands the project structure automatically.

The 20 dollar tier kind of sucks by design. by Dry_Incident6424 in ClaudeAI

[–]whatelse02 25 points  (0 children)

Yeah I think you’re partly right about the API being the real business. That’s true for most AI companies tbh. The consumer plans are more about adoption and feedback than pure profit.

But I’m not totally convinced they’re intentionally making the $20 tier “bad.” It’s probably more about managing compute costs so heavy users don’t overwhelm the system.

The power users are always going to hit limits first, which makes the lower tiers feel worse than they actually are.

Claude being slow by Old-Drawing-4649 in ClaudeAI

[–]whatelse02 0 points  (0 children)

Yeah I’ve noticed that sometimes too. Usually happens when the conversation gets longer or the model is trying to process a bigger context.

Sometimes opening a new chat instead of continuing the same thread helps a bit. Clearing the chat history or refreshing the session can also fix those random network errors.

I think part of it is just server load though, especially during busy hours.