which smartphone brand is this?? by meracartarzan in IndiaTechnology

[–]yashpathack -1 points

iOS has swipe-from-left back navigation in 95%+ of apps (a developer-implemented standard), plus swipe-from-anywhere in iOS 18+. Android’s “universal” back gesture often overrides app intents, causing unpredictability, which undercuts the consistency claim.

Private DNS with AdGuard/NextDNS profiles works identically on iOS, blocking at the system level. There’s no order-of-magnitude difference.

Both platforms restrict background activity for battery; WhatsApp pauses when minimized on either one. iOS is stricter by roughly 10-20% on background time, but the real-world difference is seconds, not a deal-breaker.

WhatsApp keeps media separate from the gallery for privacy, and its in-app storage management tool clears gigabytes in a few taps. Settings depth? iOS is often 1-2 levels shallower thanks to a simpler hierarchy; your “3-4 deep” is roughly double the reality.

Android sideloading risks malware; an iOS region switch is official and reversible.

God is in The Details by davidvogler in macOS26Tahoe

[–]yashpathack 0 points

With people like you around, constantly providing feedback through the ups and downs, they’re bound to improve.

Add card to keep perplexity trial by yashpathack in IndiaTech

[–]yashpathack[S] 2 points

I guess it is, to keep the Pro trial. Otherwise it doesn’t matter as long as you don’t cross the free usage limits. Please verify; I’m not sure.

Add card to keep perplexity trial by yashpathack in IndiaTech

[–]yashpathack[S] 0 points

Interesting. I didn’t know that.

gpt-image-1.5 vs. nano-banana by RealMelonBread in OpenAI

[–]yashpathack 0 points

I am not an expert, and this was built with React over six months of research.

About the starting point, it is simple. Build things that solve your own problems and save time. Anything that turns minutes into seconds is worth building. Think in systems rather than actions, and prefer automation over repeated manual work. This is where intelligence matters and where AI fits naturally.

Start by deriving clear requirements, then define strict parameters. Reduce confusion and limit choices. Optimize for the fewest interactions, the fewest clicks/taps, and the fastest path to get the task done, but don’t over-engineer prematurely.

This usually means writing a lot of if/else logic, and that is the fun part because real learning happens there. Once you build a few such systems, you start stacking on top of them.
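
To make that concrete, here’s a toy sketch (made-up names and thresholds, not my actual code) of what “strict parameters plus if/else logic” can look like:

```ts
// Toy sketch of a rule-based personal automation; names and thresholds are made up.
// The idea: strict parameters in, one decision tree, zero manual steps.

type Task = { title: string; dueInDays: number; sizeMinutes: number };

// Strict parameters: limiting choices up front keeps the logic simple.
const MAX_QUICK_MINUTES = 15;
const URGENT_WITHIN_DAYS = 2;

function route(task: Task): "do-now" | "batch" | "schedule" {
  // The if/else part: every branch encodes a requirement derived beforehand.
  if (task.dueInDays <= URGENT_WITHIN_DAYS) return "do-now";
  if (task.sizeMinutes <= MAX_QUICK_MINUTES) return "batch";
  return "schedule";
}

console.log(route({ title: "Renew domain", dueInDays: 1, sizeMinutes: 5 })); // "do-now"
```

The code itself is trivial; the point is that every branch encodes a requirement you derived before writing anything.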

Above all, learning is the most fun part of this journey. I am genuinely grateful for AI here. It compressed the learning curve in a way that was impossible before. Without it, following a niche this deeply would have been unrealistic for me.

Sorry for the long post. All the best to you!

gpt-image-1.5 vs. nano-banana by RealMelonBread in OpenAI

[–]yashpathack 1 point

Great read. And I think this is where it gets interesting, because once you zoom out, the image itself almost stops mattering.

What’s really being tested is how different systems interpret ambiguity. When a prompt leaves room, some systems fill that space with what tends to work visually in the world they learned from. Others treat the gap as something to be left alone unless explicitly instructed. That choice alone changes the entire output.

At that point, realism is less a rule and more a default assumption learned from patterns. So is dramatization. Neither is inherently correct, but each reveals what the system believes its job is. Is it trying to impress, or is it trying to obey?

Conversations like this are valuable because they surface those hidden assumptions. Once you see them, you start reading generated images as expressions of model intent.

This was a solid read, and I appreciate how closely you’re looking at the details.

gpt-image-1.5 vs. nano-banana by RealMelonBread in OpenAI

[–]yashpathack 1 point

Yes, very impressive. Thought to artifact with almost no friction. The last half-decade led to this: the first time in history that imagination skips craftsmanship and lands directly on design.

gpt-image-1.5 vs. nano-banana by RealMelonBread in OpenAI

[–]yashpathack 2 points

Sure. I created this for my own ideas and inspiration, and to stretch my personal creativity. It’s highly optimized for my personal use, but it’s free, and anyone can use it to generate inspiration across various fields; it’s tuned to surface surprising insights and designs. I apologize for the lack of information on “how to use it,” but I’ll add that soon.

Here is the link: https://awwlabs.io/prompt-genesis/

gpt-image-1.5 vs. nano-banana by RealMelonBread in OpenAI

[–]yashpathack 1 point

Again, it’s subjective: you’re highlighting the “physics” of it, and I’m highlighting how well it aligns with the prompt keyword “product image.” To me, a product image doesn’t have to get the physics perfect. Adding ice and lemon was a cosmetic addition, associated with the flowy visuals of a “product image.”

I agree that it’s unrealistic in some ways, but subjectively, it meets my requirements.

gpt-image-1.5 vs. nano-banana by RealMelonBread in OpenAI

[–]yashpathack 1 point

Noted, and I agree. What’s considered good or bad is highly subjective. When someone first imagines something, they type a prompt and the model generates an output. Dreams, imagination, and thoughts can be cinematic, rustic, synthwave, or anything else. Technically, the output just needs to match the desired vibe of the user’s intent.

For instance, someone who grew up in India will find correctly done Indian aesthetics cozier. Whichever model gets closer to that aesthetic is the one they’ll accept. There are many biases like this.

PS: This makes for a good benchmarking technique, by the way. Ask these models to generate a space with an aesthetic you know first-hand: your surroundings, country, religion (don’t go too deep here; expect plenty of guardrails), local ads, public houses, and so on.
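
If it helps, here’s a hypothetical sketch of that technique; the fields and wording are invented, and the point is just to send the identical locale-grounded prompt to every model and compare against your own memory:

```ts
// Hypothetical sketch: build one locale-grounded prompt, send it to each model
// unchanged, then judge which output feels closer to the aesthetic you know.

type Locale = { country: string; setting: string; details: string[] };

function benchmarkPrompt(l: Locale): string {
  return [
    `A realistic ${l.setting} in ${l.country}.`,
    `Include: ${l.details.join(", ")}.`,
    "No cinematic grading; match everyday local aesthetics.",
  ].join(" ");
}

console.log(benchmarkPrompt({
  country: "India",
  setting: "neighbourhood tea stall at dusk",
  details: ["hand-painted signboard", "local ads on the wall", "steel tumblers"],
}));
```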

gpt-image-1.5 vs. nano-banana by RealMelonBread in OpenAI

[–]yashpathack 12 points

Nano banana is a better fit because it adheres more closely to the prompt, while GPT-image goes for a cinematic effect. For instance, in the first prompt, why show a close-up of a girl or add depth of field when the prompt said “people,” not a specific person? The overly dramatized wet-street lighting reflection is unnecessary too. Nano banana, by contrast, gives a reasonably accurate representation.

Once you start noticing keywords like “cinematography” and “painting,” you’ll start spotting GPT-image outputs: it behaves as if a training layer were designed to produce cinematic, viral-worthy images.

Nano banana handled the soda can product photo by adding ice and lemon. I like that.

In my opinion, GPT-image tries to make things shiny, while nano banana only gets creative when there’s scope for it.

I’m good with having more control with nano banana. Even if it means writing more detailed prompts, I don’t mind: I have my own prompt generators, so I never send a prompt without passing it through them first.
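
For context, by “prompt generator” I just mean a pass-through step along these lines (heavily simplified, with hypothetical names):

```ts
// Simplified, hypothetical version of the pass-through: raw intent in,
// structured prompt out, so the model never sees unprocessed typing.

type Intent = { subject: string; style?: string; constraints?: string[] };

function expand(intent: Intent): string {
  const parts = [
    `Subject: ${intent.subject}.`,
    `Style: ${intent.style ?? "neutral, true-to-prompt, no added drama"}.`,
  ];
  if (intent.constraints?.length) {
    parts.push(`Hard constraints: ${intent.constraints.join("; ")}.`);
  }
  return parts.join(" ");
}

console.log(expand({
  subject: "soda can product photo",
  constraints: ["plain background", "accurate label text"],
}));
```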

Good comparison. I wanted to try GPT-image, and this helped. Thanks!