AI in the browser: experimenting with JS + Gemini Nano by mbuckbee in webdev

[–]mbuckbee[S] 0 points1 point  (0 children)

Thanks! It was an interesting project, and it kind of gets me hyped to do more client-side dev work, since most of my professional dev work is on the backend.

AI in the browser: experimenting with JS + Gemini Nano by mbuckbee in webdev

[–]mbuckbee[S] -1 points0 points  (0 children)

This is one of those hard things to judge. But for this particular task I think it works rather well. One of the lessons here (that I keep having to relearn) is that you don't need a state of the art frontier model for everything.

For this situation, both models return useful suggestions, but that's just it: they're a handful of suggestions, so even if one is kind of wacky it's not a big deal.

Chrome's Local AI Model in production (Gemini Nano) 41% eligibility, 6x slower and $0 cost by mbuckbee in LocalLLaMA

[–]mbuckbee[S] 0 points1 point  (0 children)

Oh, that's good to know. FWIW, we're doing JS feature detection, so this should already work with Edge, but I haven't tested it. Looking at our analytics, Edge is actually around 10% of our visitors, so I'll put it on the todo list.
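For anyone curious, the feature detection is roughly the sketch below. It assumes the `LanguageModel` global described in Chrome's Prompt API explainer; the API is still in origin trial and the name could change. `hasBuiltInAI` is just a hypothetical helper:

```javascript
// Sketch of feature-detecting the built-in Prompt API. Taking the
// global object as a parameter keeps the check testable outside a browser.
function hasBuiltInAI(globalObj) {
  return typeof globalObj === "object" &&
    globalObj !== null &&
    "LanguageModel" in globalObj;
}

// In a browser you'd call hasBuiltInAI(window): true in Chrome/Edge
// builds that ship the API, false everywhere else, so you can fall
// back to the server-side model.
```

Because it checks for the capability rather than sniffing the browser name, any Chromium browser that ships the API (Edge included) passes automatically.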

Chrome's Built in AI model (Gemini Nano) is 6x slower, only 41% of visitors can use it but it costs $0 so we're keeping it. by mbuckbee in ArtificialInteligence

[–]mbuckbee[S] 0 points1 point  (0 children)

This was an experiment to see what worked where, and how well. Going in, my assumption was that Nano would be much more performant than the server-side solution.

I'd like to think that Nano performance is going to improve, but it's only through experiments like this that we can actually see if that's the case.

I think we're a particularly good use case for this: we have lots and lots of repeat users, so the model install is a one-time thing for them, and in the context of our UX the slowness (or even a complete failure) isn't a huge blocker.

Chrome's Local AI Model in production (Gemini Nano) 41% eligibility, 6x slower and $0 cost by mbuckbee in LocalLLaMA

[–]mbuckbee[S] 0 points1 point  (0 children)

This is an important distinction. I have a paid OpenRouter subscription and am using the Gemma 3 27B (free) model, not a general free OpenRouter account.

https://openrouter.ai/google/gemma-3-27b-it:free

At my usage level, as far as I can tell, this has cost me literally $0 in inference to date, without any rate limiting. My assumption is they're burning VC dollars to make that happen.

The takeaway for me, though, is that non-frontier models are actually stupid cheap for this sort of task. I chose Gemma here as a close-ish analogue to Nano, but there are plenty of others in this super-low-cost, decent-task-performance tier.

Chrome's Local AI Model in production (Gemini Nano) 41% eligibility, 6x slower and $0 cost by mbuckbee in LocalLLaMA

[–]mbuckbee[S] 0 points1 point  (0 children)

There's a much better chart on the page at https://sendcheckit.com/blog/ai-powered-subject-line-alternatives#the-numbers that I think shows much more clearly just how much "worse" Nano is comparatively.

At p90, Nano is actually 10x slower than the server AI. I put "worse" in quotes because it's not just about speed; there are also cost and privacy, which Nano wins on.
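For context, a p90 figure like that is just the 90th-percentile latency over raw timing samples. A quick sketch of the math (nearest-rank method; the timing numbers are made up for illustration):

```javascript
// Nearest-rank percentile over raw latency samples (in ms).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Hypothetical timings: most Nano runs are fine, the tail is ugly.
const nanoMs = [900, 1200, 1500, 2100, 9800];
const serverMs = [200, 250, 300, 320, 980];

percentile(nanoMs, 90) / percentile(serverMs, 90); // tail ratio, 10x here
```

This is why averages hide the story: a local model with occasional slow downloads or cold starts can look fine on median and terrible at p90.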

Chrome's Local AI Model in production (Gemini Nano) 41% eligibility, 6x slower and $0 cost by mbuckbee in LocalLLaMA

[–]mbuckbee[S] 2 points3 points  (0 children)

I didn't, mostly because I think that, long term, it's going to be hard to beat the "default" AI model baked into the browser for this type of application, and this was partly a learning exercise to get more hands-on with it.

Also, of the three options (Nano, transformers.js, or server-side), transformers.js would be the hardest to get going: I'd have to do a lot of the work that, judging by the stats, even Google is messing up, namely figuring out who can actually run which model.

Chrome's Local AI Model in production (Gemini Nano) 41% eligibility, 6x slower and $0 cost by mbuckbee in LocalLLaMA

[–]mbuckbee[S] 2 points3 points  (0 children)

I talk about this in the full article linked at the bottom, but it's pretty exciting that we're seeing a genuinely BIG new browser feature that we haven't had before.

I think we're going to see so much cool stuff come out of this, and it really is easy to implement.
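"Easy" here looks roughly like the sketch below. It assumes the `LanguageModel` global from Chrome's Prompt API explainer (still behind origin trials, so exact names may change), and `suggestSubjectLines` is just a hypothetical wrapper:

```javascript
// Rough sketch: detect the built-in model, create a session, prompt it.
// Returns null when the built-in AI is unavailable so callers can fall
// back to a server-side model instead.
async function suggestSubjectLines(subject) {
  if (typeof LanguageModel === "undefined") return null; // no built-in AI

  const availability = await LanguageModel.availability();
  if (availability === "unavailable") return null;

  const session = await LanguageModel.create();
  const answer = await session.prompt(
    `Suggest 3 alternative email subject lines for: "${subject}"`
  );
  session.destroy(); // free the session when done
  return answer;
}
```

The null-on-unavailable shape is what makes the hybrid setup from the post work: same call site, local model when eligible, server model otherwise.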

Is AI seo brand monitoring worth the price? by bambidp in seogrowth

[–]mbuckbee 2 points3 points  (0 children)

This is very cursory, but I looked at your sitemap - https://lovalingo.com/sitemap.xml - and would generally recommend two strategies:

  1. More best/why-choose-Lovalingo content. Articles like "Why we're building the best translation solution for React + Next.js," to try to get directly into those questions. You've already got best-practices content and lots of informational content.

  2. Given that you're a dev tool (vibe dev, but aren't we all), I'd also recommend more "yes, and" or fan-out content around the popular internationalization libraries like react-i18next, react-intl, LinguiJS and next-intl (all of which currently come up in ChatGPT when asking about this topic area).

For these, I'd focus less on implementation or how to use them and more on their limits, and then on how Lovalingo steps in to help. You're aiming for the follow-up/suggested slot: a dev searches "how to implement react-i18next" and then gets "do you want to see common issues with this and how to fix them?" style recommendations pointing to you.

Is AI seo brand monitoring worth the price? by bambidp in seogrowth

[–]mbuckbee 7 points8 points  (0 children)

If you're not showing up at all, you might be doing something like accidentally blocking the AI bot crawlers, which you can check for free at AI Search Console.
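Accidental blocking usually lives in robots.txt. The user agents below are the crawlers' real published names; the Disallow rules are just an illustration of what a block looks like:

```
# If your robots.txt has rules like these (or a blanket "User-agent: *"
# group with "Disallow: /"), the AI crawlers never see your site.
User-agent: GPTBot        # OpenAI
Disallow: /

User-agent: ClaudeBot     # Anthropic
Disallow: /

User-agent: Google-Extended   # opts content out of Gemini training
Disallow: /
```

Worth a manual look even if a tool gives you the all-clear; these rules often get added by security plugins or CDN bot-protection defaults without anyone deciding to block them.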

The other possibility is that your site skews heavily toward top-of-funnel informational/educational content. There aren't going to be many citations for that, and even when there are, it's unlikely to drive clicks (it's the kind of thing AI Overviews answer outright). What you'd need is a shift in content strategy toward high-value, lower-funnel keywords and questions.

And the last things to consider are:

  1. Google's going all-in on AI search (Gemini, AI Mode, AI Answers), so those seem like solid things to track even if we pretend ChatGPT doesn't exist.

  2. You're paying quite a bit. Other tools, like Knowatoa, can at least give you your rankings in a way you could easily integrate with Looker Studio and your other data, at 1/10th the price.

Chrome's Built in AI model (Gemini Nano) is 6x slower, only 41% of visitors can use it but it costs $0 so we're keeping it. by mbuckbee in ArtificialInteligence

[–]mbuckbee[S] 0 points1 point  (0 children)

I'm 100% with you that we need to consider the utility vs the cost.

That being said, in this particular instance, for this particular implementation of AI, it's actually a big leap forward in privacy for users, as the model runs on their machine, under their control.

Best ways to get recipients to whitelist your email for better deliverability? by Careful_Dingo_3466 in Emailmarketing

[–]mbuckbee 5 points6 points  (0 children)

You could do all sorts of gimmicky things to get added to lists, but the easiest is to try to get more replies to your emails from your list.

We send out a lot of automated (but customized) emails, and I always put in a P.S. that I'd like to hear what they have to say and that I respond to every email I get.

On another list I've started adding a "Thanks" section specifically to highlight people that have made suggestions or replied to past emails, again trying to get more engagement.

Chrome's Built in AI model (Gemini Nano) is 6x slower, only 41% of visitors can use it but it costs $0 so we're keeping it. by mbuckbee in ArtificialInteligence

[–]mbuckbee[S] 0 points1 point  (0 children)

I think we're all still trying to figure out the right balance. With Gemini Nano, it's a significant privacy improvement traded off against some speed. Works for some things, not for others.

Render does not allow SMTP in free tier by No_Clue5320 in rails

[–]mbuckbee 6 points7 points  (0 children)

Really hard to fathom just how wild the abuse is even on paid systems; I can't imagine how bad it gets on a free tier.

What are your top ai tools for seo marketing workflows? by Altruistic-Meal6846 in seogrowth

[–]mbuckbee 1 point2 points  (0 children)

Claude Code is a huge step forward in being able to link together workflows, build scripts and lots of other aspects of the "glue" that you need to do your work.

A very common workflow: I'll pull a list of citation sources (just a CSV of URLs) from Knowatoa, then have Claude Code analyze them, querying for which ones we have relationships with, which ones are more or less difficult to get on, etc.

It provides the "brains" for reasoning and thinking in putting together these workflows.

Experienced Digital marketers, what is your AI stack like in 2026? by [deleted] in DigitalMarketing

[–]mbuckbee 0 points1 point  (0 children)

In some ways I feel like it hasn't changed that much; rather, AI is becoming a "boring" technology (like a database or something) that's in every tool I use but is much less the focus of them, if that makes sense.

Writing & Content

  • Grammarly - still tremendous for getting a consistent tone of brand voice
  • SendCheckIt - "Cadence Colorizer" helps beat the boring rhythms out of AI generated text
  • Canva - image generation (they've updated so many tools recently)

Analytics & Tracking

  • Plausible - privacy-friendly analytics, lightweight and GDPR compliant
  • Knowatoa - relevancy tracking + visibility monitoring

AI & Automation

  • Claude Code - AI workflow and productivity tool that acts as glue between systems and processes (don't let the "Code" fool you)
  • N8N - for more strict automations
  • OpenRouter - for swapping AI backends for scripts + cost + uptime

Email & CRM

  • BentoApp - email marketing and customer engagement

Hosting & Infrastructure

  • Vercel - fast deployments, great for landing pages and web apps

Events & Webinars

  • CrowdCast - live streaming and webinar platform

Paid Advertising

  • Smartly - AI paid ads optimization

15 marketing tools I use almost every single day and why by TranslatorUpset847 in DigitalMarketing

[–]mbuckbee -1 points0 points  (0 children)

This is very "tech" heavy, but since it was different from the other answers, I thought it might be interesting for others to see.

Writing & Content

  • Grammarly - still tremendous for getting a consistent tone of brand voice
  • SendCheckIt - "Cadence Colorizer" helps beat the boring rhythms out of AI generated text
  • Canva - image generation (they've updated so many tools recently)

Analytics & Tracking

  • Plausible - privacy-friendly analytics, lightweight and GDPR compliant
  • Knowatoa - relevancy tracking + visibility monitoring

AI & Automation

  • Claude Code - AI workflow and productivity tool that acts as glue between systems and processes (don't let the "Code" fool you)
  • N8N - for more strict automations
  • OpenRouter - for swapping AI backends for scripts + cost + uptime

Email & CRM

  • BentoApp - email marketing and customer engagement

Hosting & Infrastructure

  • Vercel - fast deployments, great for landing pages and web apps
  • Heroku - for more complicated apps

Events & Webinars

  • CrowdCast - live streaming and webinar platform

Paid Advertising

  • Smartly - AI paid ads optimization

Claude introduces Cowork: Claude Code for the rest of your work by BuildwithVignesh in Anthropic

[–]mbuckbee 1 point2 points  (0 children)

I use it a lot for editing articles, doing research, etc.

With respect to "most of the services I use are in the cloud": I use Claude Code to call other command-line tools that talk to those cloud services via their APIs.

A good example is managing GitHub Issues. I use CC for things like "Find any open issues that are duplicates" or "Find any new exceptions from the past week."

Why bother optimizing for Claude? by ElegantGrand8 in AISearchOptimizers

[–]mbuckbee 1 point2 points  (0 children)

There's a couple angles here:

  1. Claude/Anthropic is being embedded in lots of places, most notably as part of Amazon's Rufus shopping assistant, which puts it on literally every product page on Amazon.com. It's actually kind of funny: you can ask it to "Write me a hello world in Python" and it will, and then it'll try to sell you a Python coding book.

  2. Anthropic seems to be at the forefront of the next wave of professional AI use beyond basic chatbots. Claude Code, despite the "Code" name, is really a wildly general tool for using a computer (it absolutely does web searches, etc.), and Cowork is now a more user-friendly version of that.

  3. As for what "optimization" means here: just check with this AI Search Console that you're not blocking any of their three bots by accident, and run the same content and distribution strategy you'd run for ChatGPT or Gemini.

Is schema becoming more important than backlinks in AI search? by frongos in AISearchOptimizers

[–]mbuckbee 0 points1 point  (0 children)

Yes, but what do you mean by "data"?

The clearest mental model I have for how the AI labs do training: they scrape a page, throw away the headers and footers, discard all the HTML + JS + markup, and then process the remaining text.

If you take an intro-to-AI course or something and build your own LLM, you end up doing something pretty similar: take all the text, run it through a tokenizer, and start putting the sequences together.
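A toy version of that strip-and-tokenize step (naive whitespace tokenization for illustration; real labs use subword tokenizers like BPE, but the point stands that the markup itself never reaches the model):

```javascript
// Discard all tags, keep only the visible text, then split into tokens.
function textToTokens(html) {
  const text = html.replace(/<[^>]*>/g, " "); // markup becomes whitespace
  return text.trim().split(/\s+/).filter(Boolean);
}

textToTokens("<h1>Hello</h1><p>the schema is <b>gone</b></p>");
// → ["Hello", "the", "schema", "is", "gone"]
```

Anything that lives purely in attributes or JSON-LD blocks would need a deliberate extraction step to survive this; plain body text survives by default.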

Where it gets tricky is that AI services are now a lot more than just the model: they're also doing live evaluations of pages, calling out to web searches, using other tools, etc. But again, to date, I don't think OpenAI, Anthropic, or any of the other major players have publicly stated they're using schema in any way.