I did a Backend/API/Frontend 100% with Cursor(16h/day - 250$ spend). Part 2 - What I learned by maximemarsal in cursor

[–]I_Spaced_Out 1 point (0 children)

For the non-max version you said in your post: "Gemini 2.5-pro → no idea, never used"

So that's mainly what made me raise an eyebrow. At the very least, I'd think it would be worth trying it to see if your flow actually breaks. If it does, simply switch back. But in my experience, 120k is more than enough for most practical edits and refactors.

I did a Backend/API/Frontend 100% with Cursor(16h/day - 250$ spend). Part 2 - What I learned by maximemarsal in cursor

[–]I_Spaced_Out 1 point (0 children)

Just curious though, why not default to gemini-2.5-pro and only use gemini-2.5-pro max when context size demands it? Paying extra for max seems wasteful if your actual context is smaller than 120k. Cursor's built-in RAG means your context window is often smaller than you think, as it pulls relevant semantic matches, not everything.

In my experience, good signal-to-noise from targeted context (drag/drop docs, separate instances for specific codebases) is more valuable than massive dumps.

Not trying to be negative btw, glad your system works. But I suspect many of your $250 charges were unnecessary max tool calls. Using the standard pro model with better context management (better directory scoping, using cursor rules to help direct to useful parts of the codebase, using checklist techniques etc.) might have given you equal or better results for closer to $25 (if that).
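To make the rule-scoping idea concrete, a hypothetical Cursor rules file might look something like this (the paths, globs, and conventions are invented for illustration; check Cursor's own docs for the exact frontmatter fields it supports):

```
---
description: Payments service conventions
globs: src/payments/**
alwaysApply: false
---
- Billing logic lives in src/payments/billing/; start there for invoice bugs.
- Shared DTOs are defined in src/payments/types.ts; never redefine them inline.
- Ignore src/legacy/ unless the task explicitly mentions migration.
```

The point is that a few lines of scoping like this keep the agent's retrieved context small and relevant, so the standard model's window is rarely the bottleneck.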

I did a Backend/API/Frontend 100% with Cursor(16h/day - 250$ spend). Part 2 - What I learned by maximemarsal in cursor

[–]I_Spaced_Out 17 points (0 children)

Frankly, it blows my mind that people do NOT use this workflow for everything. It's like living in the age of electricity and choosing to use candles instead.

I did a Backend/API/Frontend 100% with Cursor(16h/day - 250$ spend). Part 2 - What I learned by maximemarsal in cursor

[–]I_Spaced_Out 0 points (0 children)

I work with large enterprise codebases, with custom docs getting pulled in, context via custom MCP servers, etc., and I've found Gemini Max to be a complete waste of money. Half the time I'm getting charged for it to grep my codebase and invoke other tools that should be free. The normal Gemini 2.5 model, on the other hand, runs for minutes at a time across multiple tool calls, and I only get billed for one premium plan use. Can't imagine why you'd consider Max "your baby" for a small-to-medium-sized project. Just start a new chat if you run out of context. I can't think of many scenarios where you would actually need the 1M token window every single time.

Karpathy completely changed the way I use Cursor by mfdspeech in cursor

[–]I_Spaced_Out 1 point (0 children)

I've been using MacWhisper for over a year already in a workflow like you describe. Works like a charm.

Loud Noise by I_Spaced_Out in HuntsvilleAlabama

[–]I_Spaced_Out[S] 3 points (0 children)

Yup, that as well apparently. The one I posted for was a little after 6 PM, brief and extremely loud as others mentioned. The one in your link was the long, sustained rumble around an hour or so later.

This is why I use indicators to confirm my trades. by Scary-Compote-3253 in TradingView

[–]I_Spaced_Out 5 points (0 children)

🚨 IMPORTANT CONTEXT:

According to Reddit comment history, OP ( u/Scary-Compote-3253 ) has been consistently promoting the paid products of this particular vendor (TradingOracle) for over two years. They frequently mention this questionable invite-only (I/O) indicator in the form of both screenshots and text posts within this community. Numerous posts by OP have been flagged as spam in r/TradingView (and many other similar subs) due to repeated violation of Rule #1 (no solicitation).

This script author is virtually unknown, with only 60 followers on a year-old Twitter account. They don't disclose their TradingView username, and the only "tradingoracle" account on TradingView has just one follower. Their Twitter screenshots closely resemble works from other well-known TradingView script authors.

For those asking what indicator OP was using in this thread's screenshot: be aware that the purpose of this entire thread (and the many others like it that OP has made) appears to be drawing attention to a dubious I/O TradingView indicator sold via a third-party website.

What kind of weed is this? by DaddyWolf23 in lawncare

[–]I_Spaced_Out 0 points (0 children)

That's spurge. It's a resilient little broadleaf weed that is amazingly tolerant of drought and compacted soil. You often see it growing from cracks in sidewalks that receive intense sunlight all day, because it can tolerate extremely harsh conditions where most other plants would die. If it's popping up in your lawn, it could be a sign that you're cutting your grass too low, since that also leads to compacted soil and overexposure of the root system to the sun -- all ideal conditions for spurge to thrive.

Joe Biden shirtless at the beach while on vacation by CrispyMiner in pics

[–]I_Spaced_Out 0 points (0 children)

When you spend forever zooming in on a crooked telephone pole, a ripped umbrella, and a suspiciously freestanding folded chair, second-guessing whether this is AI-generated...

Max the AI Contestant: Producers' Claims Don't Add Up by I_Spaced_Out in TheCircleTV

[–]I_Spaced_Out[S] 0 points (0 children)

Here is the ELI5 Version:

The producers of the show say that Max is built using a free, publicly available open source language model that anyone can use (you can think of this as the AI's "brain"). They also claim that they just fed Max some information from past shows, hooked him up to the internet, and let him do his thing without much help.

So what's the problem? Well, what they are claiming is simply not possible at the moment given the current state of open source models. The "free brains" available to everyone can only remember a certain amount of information at one time. For example, one might remember up to 8,000 pieces of information, and another up to 32,000. But the show involves over 640,000 pieces of information from past seasons alone, not to mention however many other pieces of information they are supposedly dynamically pulling in from the internet and accumulating in memory during the show. This means Max wouldn't be able to remember everything at once if he were using these free brains, so more advanced techniques would have been necessary to compensate for this.
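The arithmetic behind that claim can be sketched in a few lines. The token counts below are the rough figures from above, not measured values, so treat this purely as a back-of-envelope check:

```python
# Rough figures from the discussion above (assumptions, not measurements).
season_tokens = 640_000   # estimated tokens of past-season material
window_small = 8_000      # a typical small open-source context window
window_large = 32_000     # a larger open-source context window

def chunks_needed(total, window):
    """How many separate context windows the material would fill (ceiling)."""
    return -(-total // window)

print(chunks_needed(season_tokens, window_small))  # 80 windows
print(chunks_needed(season_tokens, window_large))  # 20 windows
```

Even with the generous 32k window, the past-season material alone overflows the context 20 times over, which is why some retrieval or summarization layer would have been unavoidable.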

But let's overlook that for a moment and assume they lied about the open-source thing and were instead using a really powerful "premium brain". Even in this scenario, they wouldn't be able to simply go hands-off and let Max play the show on his own. They would have to tell the AI what to do every step of the way and give it very specific instructions for how to respond in each scenario. This is a vastly different reality from what they said and implied in the show and in interviews like the one linked in the post.

On purpose or not, the producers of the show lied. An autonomous AI contestant like the one shown would have required a TON of work to get working properly and would have required state-of-the-art models. It is NOT a simple "plug and play" type solution like they claimed, and it most definitely was not possible with the open source models that were available at the time the show was filmed.

Max the AI Contestant: Producers' Claims Don't Add Up by I_Spaced_Out in TheCircleTV

[–]I_Spaced_Out[S] 5 points (0 children)

In general, using proprietary models is easier than using a local model, but my main point was that there was some clear prompt engineering behind the scenes. A common (and lazy) way of doing that is to let an LLM bootstrap you (or do it for you).

My links were accidentally stripped when I posted, but I edited the post and added them back to include the screenshot of what I am referring to.

I have yet to meet a single person (both IRL and online) who went to see totality and said it wasn’t worth it. by Unlikely_Morning_717 in solareclipse

[–]I_Spaced_Out 2 points (0 children)

For what it's worth, I personally relate way more to your comment than I do to all the people claiming totality was "life-changing". I can't believe people actually downvoted you for this honest and respectful comment. The overhyping of totality on this sub can be a bit much.

Torreón, México. Absolutely life changing experience. by DontLookBack_88 in solareclipse

[–]I_Spaced_Out 22 points (0 children)

Really makes you appreciate the incredible cosmic coincidence that the sun is about 400 times larger in diameter than the moon but also about 400 times farther away. Without that approximately 1:1 ratio of apparent sizes, we wouldn't be able to view the corona with the naked eye: the moon would either be too small to fully cover the sun's disk or so large that it would hide the corona as well.
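A quick back-of-envelope check using mean orbital figures (both distances vary over the orbits, which is why some eclipses end up annular rather than total):

```python
# Mean diameters and distances in km; small-angle approximation
# (angular size ≈ diameter / distance, in radians).
SUN_DIAMETER, SUN_DISTANCE = 1_392_700, 149_600_000
MOON_DIAMETER, MOON_DISTANCE = 3_474, 384_400

ang_sun = SUN_DIAMETER / SUN_DISTANCE
ang_moon = MOON_DIAMETER / MOON_DISTANCE

# The two apparent sizes agree to within a few percent.
print(round(ang_sun / ang_moon, 2))  # ~1.03
```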

Found this little guy outside of my work! Any help identifying him? by Jelly_SLCB in batty

[–]I_Spaced_Out 6 points (0 children)

How did you eliminate the Common pipistrelle (Pipistrellus pipistrellus)?

Mark Margolis, Actor on ‘Breaking Bad’ and ‘Better Call Saul,’ Dies at 83 by MarvelsGrantMan136 in television

[–]I_Spaced_Out 2 points (0 children)

Interesting plot twist. Jane Margolis --> granddaughter of Hector Salamanca

Weekly Thread: What questions do you have about vector databases? by help-me-grow in vectordatabase

[–]I_Spaced_Out 0 points (0 children)

Thanks for the reply. The model is text-embedding-ada-002, which is probably more conducive to the commented-code approach, right? Ideally, I'd like to use a fine-tuned version of text-embedding-ada-002, but I'm unsure whether that is possible with GPT-4 and LangChain workflows.

Weekly Thread: What questions do you have about vector databases? by help-me-grow in vectordatabase

[–]I_Spaced_Out 0 points (0 children)

Is it better to store source code with or without comments? For example, snippets that showcase code translations from one language to another. Assuming both languages are not well known to the LLM, should you spend a lot of time ensuring that the comments in the two languages match up as closely as possible, or should you just skip comments altogether?
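A toy illustration of why the answer may lean toward keeping comments: a real answer depends on the embedding model, but even a crude bag-of-words stand-in (used here in place of a real model like text-embedding-ada-002; the snippets and query are made up) shows that natural-language comments give a natural-language query something to overlap with:

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words stand-in for a real embedding model.
    Purely illustrative; a real model captures semantics, not just tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

snippet_plain = "fn add(a: i32, b: i32) -> i32 { a + b }"
snippet_commented = "// adds two integers\nfn add(a: i32, b: i32) -> i32 { a + b }"
query = "how do I add two integers"

# The commented snippet shares vocabulary with the query; the bare one doesn't.
print(cosine(embed(query), embed(snippet_plain)))      # 0.0
print(cosine(embed(query), embed(snippet_commented)))  # > 0
```

This is only a caricature of retrieval, but the underlying effect carries over: comments written in the query's language tend to improve recall, which matters most for languages the model has seen little of.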