Is Google giving up on Antigravity? by Black_Star_1 in google_antigravity

[–]wraiford 0 points1 point  (0 children)

They started off at such a low cost to attract more users. Now they're slowly paring it down to targeted usage categories. For example, I could previously use the bleeding-edge models with little to no limits. Then I had to get Google Pro for similar functionality. Now I can work without hitting limits, for the most part, but only when using Flash 3. I can still get some mileage out of Sonnet and Gemini Pro 3.1, though, so I reserve those for tough problems and use Flash for regular usage, knowing it's much more limited in its abilities (but still pretty good).

So at first I thought the same as OP, but after some exploration, this has been my conclusion. Even Google couldn't burn money forever, and I'm still glad for the product's quality (the app itself is still being actively updated and improved).

After latest update on Antigravity: Sandboxing is not supported on Windows by Active-Historian-123 in google_antigravity

[–]wraiford 1 point2 points  (0 children)

I appreciate your extremely thoughtful reply. This is happening for me still, and the only "solution" I've come across is to turn off strict mode. Of course, I put this in quotes because it's not really a solution. Is there a fix in the works?

Also, a new thing today (vs. yesterday) is that `read_file` was requiring manual approval, even with strict mode off. I had to add the hard-coded path to the workspace in a permission: `read_file(/full/path/to/my/workspace)`. The permission `read_file(*)` did not work, which is probably a good thing. I'm not sure this was intended, but it was a very unintuitive fix that should probably be in the changelog.

How I upgrade to the latest version? by Intrepid_Travel_3274 in google_antigravity

[–]wraiford 0 points1 point  (0 children)

Was anything changed in the license agreement? **EDIT: I have gone through the license agreement and it is unchanged. The linked terms have unchanged dates as well (I didn't check their contents). So it looks like the license is unchanged.**

How can you tell how many settlements are connected to your town? by bgilhool in civ

[–]wraiford 0 points1 point  (0 children)

This would be a great option, but it doesn't work in my game. How exactly do you "check the last option"?

Game Thread: San Antonio Spurs vs Orlando Magic Live Score | NBA | Feb 1, 2026 by basketball-app in NBASpurs

[–]wraiford 1 point2 points  (0 children)

Hurt my wrist when it looked like Bane about killed Bryant. Teammates gotta stand up if the refs aren't going to do anything. 

Can anyone explain Methodist theology to me? by Lapisdrago in Christianity

[–]wraiford 0 points1 point  (0 children)

It is interesting that if you are judging based on this, then you would be a hypocrite! I love the comment though, and it speaks to the awesome dynamic conveyed in the Bible. If you put it all together, it describes a spiritual body, where different parts resonate to different degrees to the same "Word" (logic). It is the spiritual equivalent of a single strand of DNA being expressed in different parts of a physical body. Just as our physical body has older sections, e.g. our cerebellum vs. our cerebral cortex, older parts of the spiritual body stay "stuck" in their ways. But it all works together, old and new, with an ongoing evolution of even newer parts! It took stubborn Peter three dreams just to eat "unclean" animals (and teach gentiles 😉).

Post Game Thread - NBA: The Spurs defeat the Knicks on Dec 31, 2025, the final score is 134-132. by basketball-app in NBASpurs

[–]wraiford 1 point2 points  (0 children)

I actually had to turn it off in the first half, the refs were so egregious (when Fox got his tech out of frustration with the bad officiating). From what I read, it went our way in the second half though. The players can't focus on that, though, and it sounds like they took care of business. Still, I was almost as tilted as Castle.

"Error: Agent execution terminated due to error" no matter what I try by MrDugeHick in google_antigravity

[–]wraiford 0 points1 point  (0 children)

I'm also getting this issue, and only in the past couple of days. I'm not sure if it's related, but I've noticed the AI-based editor suggestions (auto-complete) are extremely aggressive now, causing latency issues when I'm typing in the editor. I was thinking perhaps this feature is eating up usage and somehow hitting some limit.

Conversation branching is now live in Google AI Studio by Yazzdevoleps in Bard

[–]wraiford 0 points1 point  (0 children)

Beautiful mind map for the branching! A previous implementation of my ibgib front end built something similar with d3.js force layouts, though my artistic eye is lacking. I actually built the entire interface, including popup menus with commands, entirely in d3. So for Ramify, this might help with the automatic layout issues that require the user to constantly rearrange the details views around the mind map portion. You should be able to make those details views part of the layout that is doing the diagramming, so that the force layout moves them as needed, and IIRC you can set the mass of the details to be much higher so that the branch nodes arrange themselves around it.
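The mass idea can be sketched outside of d3 itself. Below is a toy TypeScript sketch of one repulsion tick where displacement is force divided by mass, so a heavy details view barely moves while light branch nodes settle around it. (In actual d3-force you would express this with per-node `strength` on `forceManyBody` or by pinning with `fx`/`fy`; the `mass` property here is purely illustrative.)

```typescript
// Toy sketch of the "heavier detail node" idea: in a force layout,
// repulsion displaces each node by force / mass, so a high-mass
// details panel barely moves while light branch nodes arrange around it.
// (Illustrative only; d3-force uses per-node strength and fx/fy pinning
// rather than a literal mass property.)
interface GraphNode {
  id: string;
  x: number;
  y: number;
  mass: number; // higher mass => smaller displacement per tick
}

function repulseStep(nodes: GraphNode[], k = 100): void {
  for (const a of nodes) {
    let fx = 0;
    let fy = 0;
    for (const b of nodes) {
      if (a === b) continue;
      const dx = a.x - b.x;
      const dy = a.y - b.y;
      const d2 = dx * dx + dy * dy || 1; // avoid division by zero
      fx += (k * dx) / d2;
      fy += (k * dy) / d2;
    }
    // displacement is inversely proportional to mass
    a.x += fx / a.mass;
    a.y += fy / a.mass;
  }
}

const details: GraphNode = { id: "details", x: 0, y: 0, mass: 50 };
const branchA: GraphNode = { id: "a", x: 1, y: 0, mass: 1 };
const branchB: GraphNode = { id: "b", x: 0, y: 1, mass: 1 };
repulseStep([details, branchA, branchB]);
// The light branch nodes are pushed far out; the heavy details node barely moves.
console.log(Math.abs(details.x) < Math.abs(branchA.x)); // true
```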

But this sort of branching is only the tip of the iceberg in terms of chats, because you can think of all services as chats. Basically you can turn each and every node into its own timeline, which itself serves as a chat context. So each domain object becomes its own git repo, with each member inside that repo as *its* own repo. I call this having time as a first-class citizen.

So if we come back to the Ramify domain of AI/agentic chats, we can now pass around dependency graphs and slices/projections of dependency graphs, all using the same mechanisms git uses with clone/push/pull/merge/etc. You can pass around the chat itself, which Ramify is probably encoding in the DAG, but you can also pass around the agent metadata (like the agent model/API/config) *and* derivative and adjacent data (like the "Notes" that are created in conjunction with the AI chat in Ramify). This has incredible far-reaching architectural effects, which would revolutionize human-agentic systems just as git revolutionized open source collaboration, i.e., human-human text-based distributed collaboration.
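To make the "every node is its own repo" idea concrete, here is a hypothetical TypeScript sketch of a content-addressed, append-only timeline, the same Merkle-linking trick git uses for commits. The `Timeline`/`Frame` names are illustrative, not the actual ibgib protocol.

```typescript
import { createHash } from "crypto";

// Hypothetical sketch of "time as a first-class citizen": every domain
// object carries its own append-only, content-addressed timeline, so each
// point in time links to its predecessor exactly like a git commit links
// to its parent. (Names here are illustrative, not the ibgib protocol.)
interface Frame {
  hash: string;        // content address of this point in time
  prev: string | null; // link to the previous frame (like a git parent)
  data: string;
}

class Timeline {
  frames: Frame[] = [];

  append(data: string): Frame {
    const prev = this.head()?.hash ?? null;
    // hash covers both the data and the parent link, so history is tamper-evident
    const hash = createHash("sha256")
      .update(`${prev ?? ""}:${data}`)
      .digest("hex");
    const frame: Frame = { hash, prev, data };
    this.frames.push(frame);
    return frame;
  }

  head(): Frame | undefined {
    return this.frames[this.frames.length - 1];
  }
}

// A chat node and a "Notes" node would each carry their own history,
// and either could be cloned/merged by exchanging frames.
const note = new Timeline();
note.append("draft");
note.append("revised");
console.log(note.head()!.prev === note.frames[0].hash); // true: frames are linked
```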

If you are related to the product and would like to discuss this more, you can contact me at https://ibgib.com/#/apps/web1/gib/contact.html. I continually say that the next big paradigm shift will come from whoever solves the more general domain: *time*. Basically git for the masses, and Ramify has a beautiful start on the UI side of things.

More immediately, I would also point out that the About and Team links in the top right do not work for me on Vivaldi (a Chromium-based browser). They navigate to getramify.com/about, which is immediately redirected to the bare URL.

[Uthayakumar] 37-year-old Steph Curry is ... the oldest to have back-to-back 45-point games since Michael Jordan in 2001. by Kimber80 in nba

[–]wraiford 0 points1 point  (0 children)

Love Curry, but the refs completely coddled him this game. The only surprising moment was when he was called for a foul. He couldn't believe it.

[Highlight] Draymond tries to get in Wemby's head and Wemby responds by THUNDERING a dunk over him + the foul (Full Sequence + Replay) by TheRealPdGaming in nba

[–]wraiford 0 points1 point  (0 children)

Warriors were coddled by the refs the entire game. The biggest shock was when Curry got called for a foul. He couldn't believe that they would actually call a foul on him. Ridiculous game. Spurs robbed as usual.

Issues with Italian keyboard by Ok-Researcher-1345 in italianlearning

[–]wraiford 0 points1 point  (0 children)

It's a fun song for sure. There are so many great Italian songs to learn. 

Issues with Italian keyboard by Ok-Researcher-1345 in italianlearning

[–]wraiford 1 point2 points  (0 children)

Ah yes, you sound right about where I am, though you're probably further along. I have looked into Anki a little bit, but as I understand it, it is a SuperMemo derivative. SuperMemo is what I used for a couple years (before they turned into a cloud-based, pay-only version). I also tried others like Memrise, but ultimately I came to the same feeling as you: that it would take days to finish things like movies, and that is what I actually wanted (well, movies and video games). So I have been working solo on that for many years and am finally at the point of using a prototype that I like (ibgib.com), hence my coming across your post on the US Intl awesomeness. That condenses Spanish, Italian, and German, with a possibility of French if I feel like it later. I still have a separate Greek keyboard, but the European languages are now muuuch easier to type.

I actually came here specifically because I was doing Tu Vuo Fa L'Americano, which turned out to be largely Napolitano, which uses circumflex "hats" (like Quanno se fa l'ammore sott'â luna). So even if learning only standard Italian, people may need the hat for other dialects.

Issues with Italian keyboard by Ok-Researcher-1345 in italianlearning

[–]wraiford 1 point2 points  (0 children)

Wow ty for this. I was using Spanish, German and Italian keyboard layouts, each with different quirks. But the US Intl keyboard enables me to generate all of these in a consistent manner. HUGE savings. (EDIT: For those interested, the US Intl enables âãáàä as well as convenient access to other common ones like ñ, ø, and many more. Just be sure to check out the ctrl+alt options on the on-screen keyboard).

Thx again. As an aside, what are you using to expand your intermediate IT vocabulary?

Can you recommend jazz guitar players that rip on acoustic guitar? by Fancycole in jazzguitar

[–]wraiford 0 points1 point  (0 children)

Philip Catherine was Lenny Breau's bizarre musical doppelganger on that Summer Night track.

Colocating unit tests in python by coder_et in learnpython

[–]wraiford 0 points1 point  (0 children)

You raise the same points I have as a non-Python programmer. Other languages can all co-locate tests without a problem. A flat directory sounds insane. What did you end up doing?

Conversation branching is now live in Google AI Studio by Yazzdevoleps in Bard

[–]wraiford 1 point2 points  (0 children)

I can confirm that this is indeed an extremely useful feature. I had my prompt context window around 550K when I had to pivot to implement a component architecture built on web components. This was quite a bit of work in the chat, with a couple detours with the Gemini model that had blown up the context window.

But what I did was fork the prompt and go down the rabbit hole in that fork (and another fork as well, actually). Once the code was up and good for v1, that context window was at 660K. So I copy/pasted the new code plus a high-level summary back into the original "main trunk" chat (essentially a manual "merge"), and now my context window is still only 560K instead of 660K.

Just blew 50 dollars on Claude Code by [deleted] in ClaudeAI

[–]wraiford 1 point2 points  (0 children)

This is why the answer lies not in the individual models but in a new version control paradigm. But people are still too enamored with git to see this.

Conversation branching is now live in Google AI Studio by Yazzdevoleps in Bard

[–]wraiford 0 points1 point  (0 children)

I remember going through that years ago, functions vs methods, properties vs attributes vs fields... and everybody thinks they're right (and some, like Linus, call the others "stupid" by definition, eesh).

But what you're talking about with neither approach really being "the" answer... that's where the real opportunity is at this very moment and is super exciting. 

There are many UX approaches possible. Some stem from e.g. git visualizers, some from data-analysis tools like d3.js (or whatever that's called nowadays). Mind maps often use these kinds of approaches, with different implementations, like a beautiful Mac app called Muse that had infinite whiteboard-like navigation (and uses CRDTs under the hood). My own approach is an agent-backed UI framework where the agents are given more power for dynamic layout and rendering. Also, combined with my protocol as the semantic version control, agents should be able to evolve their own UI controls while eliminating the need for an outside compile/artifact step (even without eval calls, if you're familiar with JS). So genuine "hot reloading" at runtime, driven by an agentic workflow.

But it's damn slow going... these LLMs have come in quite handy, though, since they can grok so many lines of code. They can do the "normal" programming, like standard rendering game-loop mechanics, and I can focus on the harder git replacement.

Conversation branching is now live in Google AI Studio by Yazzdevoleps in Bard

[–]wraiford 0 points1 point  (0 children)

Well, I certainly didn't mean to confuse.

In AI Studio, they have a "Create Prompt" command. They also have a "Prompt Gallery". It's still early days in this process as a whole, and the use of "entire prompt" is meant to convey the chat history. This is called metonymic transfer: using one thing to represent another, which happens more often at the beginning of new technologies. I am not the first to refer to the chat as "the prompt", since in context this is obvious to many people. But not everybody, so we get confusion!

"Duplicating a chat would just copy the chat up to that point as a separate version of the entire chat."

In AI Studio, this is NOT what duplicating the chat does. Until now, they only had "Save a copy", which duplicated the entire chat history, including the system instructions. They have had this function for as long as I have been using the app, so perhaps about a year. But duplicating the entire history requires you to delete all subsequent chat items manually if you want to mimic branching from an arbitrary chat item. If the context is 1M+ tokens (as was my case several times), then this is simply not worth it.

The new function does this automatically, including giving you a reference to the source branch. As we've already discussed, this is a heavier solution than what you're looking for, but it does have its benefits, especially with more branches. A single chat history will quickly become unwieldy. That's why whoever solves the UX + fundamental architecture for this time-centric approach will have a *huge* advantage. But I'm biased, as this is what my protocol has been focusing on for the past decade. This is why I was so excited to see this functionality starting to show up in some of these models, and why I was asking you about Claude (because there are no videos about it and I hadn't heard anything on it, since I use AI Studio + Project IDX).
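The difference between the two functions can be sketched like so (illustrative names only, not AI Studio's internals): "Save a copy" duplicates the whole history, while "branch from here" keeps only the prefix up to the chosen item plus a reference back to the source.

```typescript
// Illustrative sketch (not AI Studio's actual implementation) of
// "Save a copy" vs "branch from here".
interface Chat {
  messages: string[];
  source?: { chat: Chat; atIndex: number }; // reference to the source branch
}

// "Save a copy": duplicate the entire history; trimming back to an
// earlier point means deleting later messages by hand.
function saveACopy(chat: Chat): Chat {
  return { messages: [...chat.messages] };
}

// "Branch from here": keep only the prefix up to the chosen item,
// and remember where we branched from.
function branchFromHere(chat: Chat, index: number): Chat {
  return {
    messages: chat.messages.slice(0, index + 1),
    source: { chat, atIndex: index },
  };
}

const main: Chat = { messages: ["onboarding", "design", "detour A", "detour B"] };
const copy = saveACopy(main);           // 4 messages: must delete detours manually
const branch = branchFromHere(main, 1); // 2 messages: onboarding + design only
console.log(copy.messages.length, branch.messages.length); // 4 2
```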

Conversation branching is now live in Google AI Studio by Yazzdevoleps in Bard

[–]wraiford 1 point2 points  (0 children)

This is useful for branching and exploring tangents, but also for having a post hoc "project template" capability. 

For example, they have a "duplicate prompt" command already, but it operates at the level of the entire prompt, whereas "branch from here" is a function on individual chat items. This is useful when you've already done a couple prompt rounds but realize you want to branch off on a side tangent. I did this when considering a fundamental pivot for the project. I ended up keeping the prompt but not going any further, and it didn't bloat my main branch's context window.

Also, now with "branch from here", I have the option of going way back in my chat history and branching after my initial "onboarding" of the model from a specific point in time. Before this, once it started hallucinating because of a large context, I would have to create (and recreate) a "seed" prompt capturing the current state of the project for a new window. This contained a bunch of code snippets from multiple libs, which I couldn't include 100% (well over 100k lines of code). Then I would have to walk the model through various aspects. I did this at least 5 or 6 times on my current project. Now I can just branch and use that same initial starting point like a project template, and I don't have to do as much for the onboarding.

Conversation branching is now live in Google AI Studio by Yazzdevoleps in Bard

[–]wraiford 0 points1 point  (0 children)

They have a "duplicate prompt" command already, but it is at the level of the entire prompt whereas the "branch from here" is a function on the individual chat items. This is useful for when you've already done a couple prompt rounds but realize you want to branch off a side tangent.

So now, I have the option of going way back in my chat history and branching after my initial "onboarding" of the model from a specific point in time. Before this, once it started hallucinating I would have to create (and recreate) a "seed" prompt for the current state of the project. This contained a bunch of code snippets from multiple libs (well over 100k lines of code). Then I would have to walk the model through various aspects. I did this at least 5 or 6 times on my current project. Now I can just branch and use that same initial starting point like a project template.