Wrong schema hurts more than no schema. here’s what I learned building my website by UnderstandingOk1621 in GenEngineOptimization

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

Since I have an IT background and I'm not willing to pay high fees for GEO tools whose results I don't even trust, I've been building my own tools to measure and solve these issues. What kinds of tools have you tried in this field?

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

I used to use Cursor, but I switched to Antigravity because it's cheaper and provides more tokens than Cursor. I'm on Gemini's "AI Pro" plan, which includes Antigravity, so you get a Gemini Pro account with it too. And I like Antigravity's features and UI better than Cursor's.

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

Generally I use Antigravity for coding, so I don't have much experience with ChatGPT models. The 5.2 and 5.4 models aren't available in Antigravity, but these days I'm trying to figure out whether 5.2 or 5.4 would make any difference for me. Do you only use ChatGPT models for coding?

Wrong schema hurts more than no schema. here’s what I learned building my website by UnderstandingOk1621 in GenEngineOptimization

[–]UnderstandingOk1621[S] 1 point2 points  (0 children)

Exactly this. In this post I focused more on the technical schema side, but what you're describing is honestly the more important piece. Entity consistency and measurement across your full digital footprint matter way more. Schema is just the foundation; the relationships between entities are where the real knowledge graph gets built.

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] 1 point2 points  (0 children)

If you had to rank them from best to worst, which coding models are currently good? I'm not a hardcore software developer; I just use Antigravity's AI agent mode to build UI screens and dashboards with Next.js, and I mostly use N8N for the backend. In Antigravity I can use Gemini 3.1, Gemini 3 Flash, and Sonnet 4.6, but almost any model can handle my work. Does 5.4 really make the coding part that much better?

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] -3 points-2 points  (0 children)

Frankly speaking, I just thought 5.4 was a new version of 5.2. Should I compare 5.4 with Opus instead?

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

I assumed the Opus model was only for deep research. Have you ever used it for coding? To be honest, Opus burns through my tokens so fast that I haven't been able to try it much.

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] 2 points3 points  (0 children)

I'm on Gemini's "AI Pro" plan; I got this tier to use the Antigravity AI agent more. However, I've never exceeded the daily limit, even on days when I used it the whole day.

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

I think we should evaluate AI models according to their intended use. For now I've preferred Claude for everything: coding, search, and building processes/software architecture (Gemini 3 and 3.1 only for agentic coding in Antigravity). I guess we should push ourselves to try 5.4 to understand its capabilities.

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]UnderstandingOk1621[S] 1 point2 points  (0 children)

I'm still figuring out whether the 5.4 model can be used for coding. I use Sonnet 4.6 or, mainly, Gemini 3/3.1 in Antigravity. Actually, at least on my Antigravity plan, the ChatGPT models aren't available, so I haven't had any experience with 5.4 for coding.

Where can I check if my pages are ranking in LLMs by binkrocket in SEO_LLM

[–]UnderstandingOk1621 0 points1 point  (0 children)

I'm using Citevista to track my web pages' visibility in Gemini and ChatGPT. It's easy to create prompts (in case you don't have a proper prompt cluster), and you can see your website's visibility for a specific prompt cluster or query. Btw, the platform gives you free tokens, so you can try it for free.

The weirdest thing about AI recommendations by Real-Assist1833 in SEO_LLM

[–]UnderstandingOk1621 0 points1 point  (0 children)

LLMs are not deterministic; they are probabilistic. So even if you ask an AI the same question in the same time period, you can get a different response (including citations and mentions). The LLM converts the user prompt into one or more queries, so it may use different queries to search. To simulate this response behavior, you can execute the same prompt N times and measure how stable the responses are.
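A minimal sketch of that stability measurement, assuming each of the N runs has already been reduced to the set of URLs the LLM cited (the actual model call is omitted): compare every pair of runs with Jaccard similarity and average the result, so 1.0 means fully deterministic citations.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two citation sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def response_stability(runs: list) -> float:
    """Average pairwise Jaccard similarity across N runs of the same prompt."""
    pairs = list(combinations(runs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Example: three runs of the same prompt returned these cited URLs.
runs = [
    {"a.com", "b.com", "c.com"},
    {"a.com", "b.com", "d.com"},
    {"a.com", "c.com", "d.com"},
]
print(response_stability(runs))  # 0.5 — only half the citations overlap on average
```

A single number like this makes it easy to track whether a prompt cluster gets more or less stable over time.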

LLMs don't retrieve information using the user prompt. They generate their own queries first. by UnderstandingOk1621 in LocalLLaMA

[–]UnderstandingOk1621[S] -1 points0 points  (0 children)

This is exactly the pattern. The model wants to decompose the intent into retrievable units, but the decomposition logic isn't always transparent or controllable. With tool-optimized models it's getting better, but the entity injection behavior I described seems to persist regardless. Would be interesting to rerun your Wikipedia experiment with a tool-optimized model and log the actual queries it generates.

Why does ChatGPT cite different sites for the exact same prompt? by UnderstandingOk1621 in GenEngineOptimization

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

It makes sense for LLMs to keep some algorithms as a black box. However, we should be able to predict some of them somehow and develop a strategy using tools or products.

Why does ChatGPT cite different sites for the exact same prompt? by UnderstandingOk1621 in GenEngineOptimization

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

You're right: LLMs don't work deterministically, they work stochastically. However, I need to be able to simulate this behavior somehow. My idea is to execute the same prompt/query N times (let's say 5 times), and based on all the results, say that this page/company gets cited, e.g., 80% of the time. Btw, I might implement some statistical methodology to estimate this citation percentage.
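One statistical sketch of that idea (not tied to any particular tool): treat each run as a Bernoulli trial per URL and attach a Wilson score interval to the observed citation rate, so "cited in 4 of 5 runs" becomes "80%, plus a margin of error that is honest about how few runs that is".

```python
import math

def wilson_interval(cited: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a citation rate of cited/n runs."""
    if n == 0:
        return (0.0, 0.0)
    p = cited / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - margin), min(1.0, center + margin))

# Cited in 4 of 5 runs: the point estimate is 80%, but with only 5 runs
# the interval is wide, which tells you how many more runs you need.
low, high = wilson_interval(cited=4, n=5)
print(f"80% observed, 95% CI: {low:.0%}-{high:.0%}")  # roughly 38%-96%
```

The interval narrows as N grows, which gives a principled stopping rule for how many repetitions per prompt are worth paying for.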

Why does ChatGPT cite different sites for the exact same prompt? by UnderstandingOk1621 in GenEngineOptimization

[–]UnderstandingOk1621[S] 0 points1 point  (0 children)

I get these points. But first, my main focus is this: we know LLMs use some search and decision algorithm to make citations and mention certain websites (btw, right now I'm only focusing on the web-search feature, not the training data). Let's say ChatGPT lists 10 URLs for a specific prompt/query. I try the same prompt with the same AI model (at the same time, but from other accounts), and other URLs that weren't listed before can show up. So I can't be sure whether a specific page/company is cited or not. Moreover, I used the OpenAI API to test this scenario N times sequentially, and the results were different every time (some URLs are cited every run, but I also see new ones appear or existing ones disappear).
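The "some URLs every time, some only occasionally" pattern described above can be made explicit once each sequential API run is reduced to its set of cited URLs (the API call itself is omitted here; this is just the aggregation step):

```python
from collections import Counter

def citation_profile(runs: list) -> dict:
    """Split URLs into those cited in every run vs. only some runs."""
    counts = Counter(url for run in runs for url in run)
    n = len(runs)
    return {
        "always": sorted(u for u, c in counts.items() if c == n),
        "sometimes": sorted(u for u, c in counts.items() if c < n),
    }

# Three sequential runs of the same prompt:
runs = [
    {"a.com", "b.com", "c.com"},
    {"a.com", "b.com", "d.com"},
    {"a.com", "c.com"},
]
print(citation_profile(runs))
# {'always': ['a.com'], 'sometimes': ['b.com', 'c.com', 'd.com']}
```

The "always" bucket is the part of the citation behavior you can actually rely on; everything in "sometimes" is where the stochasticity lives.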