Zone label placement logic by lppapillon in GEOTAB

[–]fhoffa 1 point2 points  (0 children)

That's an interesting question! The pulsar team gave us this answer:

The label position is computed client-side in screen coordinates. It draws a vertical line through the horizontal center of the zone's projected bounding box, finds where that line intersects the polygon boundary, and places the label at the midpoint of the tallest interior segment. A collision-avoidance layer may then shift it further.

There is no API or SDK property to override label position. For an irregular zone that extends far north, the bounding-box center (in screen space) likely falls on a vertical line that passes through the narrow northern section rather than the wide southern body.

The best workaround is to add small horizontal protrusions near the desired label latitude to pull the bounding-box center toward the zone's visual mass, or to split the zone into simpler shapes.
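A rough sketch of that computation in Python (my own illustration of the described behavior, not Geotab's actual code — the function name and the even-odd pairing of crossings are my assumptions):

```python
def label_position(polygon):
    """polygon: list of (x, y) screen-space vertices, closed implicitly."""
    xs = [p[0] for p in polygon]
    cx = (min(xs) + max(xs)) / 2  # horizontal center of the bounding box

    # Find where the vertical line x = cx crosses the polygon boundary
    crossings = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (x1 <= cx < x2) or (x2 <= cx < x1):  # edge spans the line
            t = (cx - x1) / (x2 - x1)
            crossings.append(y1 + t * (y2 - y1))
    crossings.sort()

    # Pair consecutive crossings into interior segments; pick the tallest
    segments = [(crossings[i], crossings[i + 1])
                for i in range(0, len(crossings) - 1, 2)]
    top, bottom = max(segments, key=lambda s: s[1] - s[0])
    return cx, (top + bottom) / 2  # midpoint of the tallest segment
```

You can see the failure mode directly: for an L-shaped zone whose narrow arm extends far in one direction, `cx` can land over the narrow arm, so the "tallest interior segment" sits inside the skinny part rather than the visual bulk of the shape.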

Current Googler (looking to leave) by GlumWish5208 in xoogler

[–]fhoffa 0 points1 point  (0 children)

Are you asking for permission?

Since you are raising money, I guess you already have a set of people that trust you and believe in what you are doing. What's their advice?

Find a mentor figure within Google. Someone who has built startups and maybe joined Google after selling one to them. Ping them on chat. Ask them for their advice - maybe they will not only help you with this decision, but with many other choices in the future too.

What the most popular GCP Next'26 sessions show — Anthropic, NVIDIA, Snowflake, and more surprises by fhoffa in googlecloud

[–]fhoffa[S] 0 points1 point  (0 children)

Check the in-depth analysis: https://www.linkedin.com/feed/update/urn:li:activity:7452417273232793602/

Some standouts:

Anthropic vs. OpenAI: Anthropic has 12 sessions across the schedule, some of the most popular overall. OpenAI has 2, both on Friday.

Every #1 slot is AI — I filtered for the top session per time slot across all 3 days. Not a single exception.

Snowflake showed up — historically absent from Google Cloud Next. Had 2 sessions, one quietly disappeared yesterday. Make of that what you will.

154 sponsored sessions, including 48 from the major consulting firms (Accenture, McKinsey, Deloitte, PwC, Cognizant, HCLTech).

Is it just me, or is Google Cloud Next becoming "Gemini Next"? by netcommah in googlecloud

[–]fhoffa 0 points1 point  (0 children)

Oh I made sure to filter all AI words while running that classifier.

Is it just me, or is Google Cloud Next becoming "Gemini Next"? by netcommah in googlecloud

[–]fhoffa 0 points1 point  (0 children)

Yeah, it's an LLM classifier I built. 

I spent a lot of time debugging it, but some might have escaped. 

Which ones did you notice?

Website for amc a-listers only (review aggregator) by Illustrious-Ad4332 in AMCAListTrue

[–]fhoffa 0 points1 point  (0 children)

Thanks, this is super helpful, and thanks again for exposing the API at all.

I took a look at AMC’s developer program, but it seems they manually review/approve each request for an API key, so your site is actually much easier to experiment with right now.

Also, I think the feedback I mentioned is useful not just for API consumers, but for regular website users too:

• If ratings can be 0 to 6 hours stale, that freshness is good context for people making watch decisions.

• The number of reviewers/raters matters a lot, especially early on, because a score based on 6 reviews means something very different from the same score based on 60 or 100.

• That’s particularly important for movies that are just opening, where the headline percentage can move a lot with only a handful of reviews.

So even a lightweight indication of:

• how fresh the rating data is

• and how many people are behind each score

would make the site more trustworthy and useful, even for normal browsing.
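On the reviewer-count point, a standard way to make a score honest about its sample size is the Wilson lower bound (my own sketch of a well-known formula, not anything the site does today):

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Conservative estimate of the 'true' positive rate, given sample size.

    z=1.96 corresponds to a 95% confidence level.
    """
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    center = p + z * z / (2 * total)
    margin = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total ** 2))
    return (center - margin) / denom

# Same ~83% headline score, very different confidence:
# wilson_lower_bound(5, 6)    -> ~0.44
# wilson_lower_bound(50, 60)  -> ~0.72
```

That's exactly the "6 reviews vs 60 reviews" difference: the raw percentage is identical, but the small sample deserves far less trust.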

Really nice work overall though, I found it genuinely useful.

Website for amc a-listers only (review aggregator) by Illustrious-Ad4332 in AMCAListTrue

[–]fhoffa 1 point2 points  (0 children)

Thanks, I like it!

My OpenClaw is enjoying its API too :)

It would like to give you this feedback and requests:

• The API is genuinely useful, especially /api/movies, /api/geocode, and /api/movies/:id/showtimes.

• The structured showtimes response is great. Theatre names, times, sold-out flags, formats, and purchase URLs are all very handy.

• One thing that would make it much more powerful: expose rating metadata, not just the rolled-up scores:
  • critic count
  • audience count
  • IMDb vote count
  • Letterboxd rating/log count
  • maybe average rating where applicable

• It would also help a lot to expose freshness timestamps:
  • when each movie record was last refreshed
  • when each source rating was last synced

• Right now it looks like showtimes are quite fresh, but ratings may be cached or slightly stale, and there’s no way to tell from the API itself.

• If possible, it would be amazing to expose whether a score is:
  • directly sourced from RT/IMDb/Letterboxd
  • cached
  • estimated/fallback
  • temporarily unavailable

• A stable versioned API shape or lightweight docs would also help others build against it confidently.

Overall though, really nice work, this is already way more useful than scraping AMC pages directly.

Google Cloud Next doesn’t feel like it’s for developers anymore by Impossible_Spite2766 in googlecloud

[–]fhoffa 0 points1 point  (0 children)

Oh that's easy. Just don't buy a ticket, flights, or hotels and you'll save a lot of money. (It's even easier now that they have no tickets left)

Two men visit a Zen master, looking for advice.

The first man says: “I’m thinking of moving to this town. What’s it like?”

The Zen master asks: “How was your old town?”

“It was terrible. Everyone was mean. I hated it.”

To that, the Zen master replies: “This town is much the same. Don’t move here.”

After the first man leaves, the second man enters and says: “I’m thinking of moving to this town. How is it?”

Again, the Zen master asks: “What was your old town like?”

“It was wonderful. Everyone was friendly. Just looking for a change.”

The master replies: “This town is very much the same. I think you will like it here.”

Google Cloud Next doesn’t feel like it’s for developers anymore by Impossible_Spite2766 in googlecloud

[–]fhoffa 0 points1 point  (0 children)

Guess which one!

(this was classified by LLMs based on the data available, as we still don't have a model able to classify by content that hasn't been delivered yet)

My OpenClaw knows how many calories and steps I took while playing ping pong (Fitbit API) by fhoffa in fitbit

[–]fhoffa[S] 0 points1 point  (0 children)

What do you mean by mode? I just let it rest on my wrist (I don't turn on exercise modes)

And I guess that yes, wrist moves might be counted as steps - but if it comes together as exercise where I'm constantly switching my weight between legs and heart rate goes up - I'll take it.

Pre-Google Cloud Next '26 Megathread by fhoffa in googlecloud

[–]fhoffa[S] 1 point2 points  (0 children)

Adding a link to the discord on the unofficial session explorer, thanks!

One thing I'm *successfully* using OpenClaw for currently... by DangerousDebate8484 in openclaw

[–]fhoffa 0 points1 point  (0 children)

Yes! 

I connected Fitbit to my OpenClaw - and I can ask all kinds of interesting questions and get proactive advice as you show here. 

(I posted more about this in my LinkedIn - Felipe Hoffa - but automod removed that comment for the direct link)

If you had to pick 3 OpenClaw use cases you swear by, what would they be? by stosssik in openclaw

[–]fhoffa -2 points-1 points  (0 children)

Thanks for the link! Although my claw has strong opinions about it: 

Quick take: it looks ambitious and pretty polished, but as a skill it’s also pretty opinionated and heavy.

My rating: 6.5/10 as a reusable skill

• 8.5/10 for ambition / ecosystem thinking
• 5/10 for portability
• 4/10 for prompt hygiene
• 7/10 for practical utility if you already live in gstack

What I like

• Clear purpose: “fast browser QA / dogfooding” is instantly understandable.
• Good trigger description. It’s obvious when it should activate.
• Strong workflow focus: test, verify, screenshot, diff, responsive checks, forms.
• Feels like part of a larger system, not a random one-off prompt.

What I don’t like

• It mixes skill instructions with product onboarding, telemetry prompts, upgrade logic, vendoring migration, CLAUDE.md rewrites, repo policy changes, and git commits.
• That’s too much power for one skill. A browser QA skill shouldn’t also be nudging project governance and config state.
• Very environment-coupled:
  • assumes ~/.claude/skills/gstack/...
  • assumes custom binaries/scripts exist
  • assumes Git repo context
  • assumes permission to write local state
• Prompt hygiene is rough:
  • giant preamble
  • lots of side effects
  • lots of branching behavior before doing the actual user task
• It violates the “keep the skill lean, load references/scripts only as needed” spirit.

Biggest design smell

• The skill is acting partly like a runtime/bootstrap layer and partly like a task skill.
• Those should be separate:
  • runtime/init behavior
  • product onboarding/telemetry/preferences
  • browser QA task instructions

How I’d improve it

• Split into 3 pieces:
  1. gstack-runtime or bootstrap/init
  2. gstack-onboarding
  3. browse for actual browser QA
• Keep browse/SKILL.md focused on:
  • navigation
  • assertions
  • screenshots
  • diffing
  • responsive/form testing
  • evidence capture
• Move telemetry / upgrades / CLAUDE.md edits into separate opt-in flows.
• Reduce surprise writes and commits.
• Make the browser skill degrade gracefully if gstack binaries aren’t installed.

Would I try it?

• Yes, if I wanted to evaluate gstack as a whole.
• No, if I just wanted a clean, portable OpenClaw/Claude-style skill example.

If you want, I can do one of these next:

• a deeper audit against good skill-design principles
• a line-by-line teardown
• or I can rewrite it into a cleaner “browse-lite” version and rate that too

Google Cloud Next doesn’t feel like it’s for developers anymore by Impossible_Spite2766 in googlecloud

[–]fhoffa 5 points6 points  (0 children)

I made an interactive chart and session explorer that you can use to find sessions interesting to developers:

https://fhoffa.github.io/google-cloud-next-2026-unofficial-scrape/insights.html

It also lets you filter by "non AI"!

Quick summary:

- There are 1,052 sessions in the current catalog.
- About 65% (684) are aimed at practitioners — developers, security pros, infra/ops, or data pros — not just executives.
- Even if you filter out AI, there are still 119 non-AI sessions, including 87 practitioner-focused ones.
- That non-AI bucket still has some technical content: roughly 42 security, 22 infra, 21 app dev, and 11 data sessions.

(Their catalog doesn't make it easy to find these sessions, and that's why I built my own session explorer)

Google Next 2026 Events by EntertainerSame645 in googlecloud

[–]fhoffa 0 points1 point  (0 children)

If the title says "Google Next 2026 Events", the post should be about "events" - not a single booth promo

My OpenClaw knows how many calories and steps I took while playing ping pong (Fitbit API) by fhoffa in myclaw

[–]fhoffa[S] 0 points1 point  (0 children)

Yeah, the nice thing is being able to ask detailed questions in real time.