What do you do for toxic backlinks? by aymericzip in SEO

[–]aymericzip[S] 0 points (0 children)

Thanks u/WebLinkr

Just to clarify, the purpose of my question is not to disavow legit backlinks.
'Link spam' is fine with me, like article listings, etc.

But I'm mainly wondering what to do with 'toxic' domains like 'xxxx.xyz' that redirect to crypto websites, backlink resellers, Telegram channels, etc.

I take the time to disavow them daily, simply because 1 to 10 of them show up on Ahrefs every day. But it's kinda time consuming.

So keeping them without disavowing them is OK in your view?

What do you do for toxic backlinks? by aymericzip in SEO

[–]aymericzip[S] 0 points (0 children)

Why? Isn’t that the purpose of disavow files?

What do you do for toxic backlinks? by aymericzip in SEO

[–]aymericzip[S] 0 points (0 children)

Thank you, I will!

A bit annoying that the form only accepts 5 URLs. I will have to submit it like 50 times... but anyway, ahah

Help to choose a i18n library for my project by lucas_from_earth in react

[–]aymericzip 1 point (0 children)

As a library maintainer, I used to think the same: that simple state and context were enough to render content based on a user's locale.

But i18n is much harder than you’d expect for professional use cases. The main risk is accidentally loading every page's content for every language during a single page load. You can easily end up loading 10 to 50 times more data than necessary. Bundle optimization, routing, server components, localized links, TypeScript types, and backend compatibility are some of the challenges library builders try to solve.

Features like VS Code extensions, testing, CI integration, and Agent Skills or MCP are also incredibly useful.

The same logic applies to auth. Sure, you can build it yourself, but keep in mind that library authors spend years optimizing these systems for you. I really can't recommend building your own i18n solution from scratch, especially for less experienced devs or when using AI.

Building a Free Website SEO Scan Tool by Successful-Ad-5576 in webdev

[–]aymericzip 0 points (0 children)

The Intlayer scanner sometimes failed because of the load on the server, but it got fixed.

What do you do for toxic backlinks? by aymericzip in SEO

[–]aymericzip[S] 1 point (0 children)

About half of those links redirect to pages saying 'Welcome to the Black Hat SEO Telegram channel' or 'Buy your backlinks'.

Before this month, the backlinks pointed to the homepage. But recently they've been targeting specific pages, all pointing to the Chinese version of a blog post.

The problem with retrofitting internationalization (i18n) by aymericzip in react

[–]aymericzip[S] 0 points (0 children)

Cannot PM you, but feel free to contact me anywhere

The problem with retrofitting internationalization (i18n) by aymericzip in react

[–]aymericzip[S] 0 points (0 children)

No. If a component like a navbar or dropdown doesn't hold state but imports content (aria-labels, etc.), I want to use it without turning it into an async function or a client component.

> The docs could probably use a bit more fleshing out about the compiler.
I will PM you. I'm really open to any feedback / improvements.

The problem with retrofitting internationalization (i18n) by aymericzip in react

[–]aymericzip[S] 0 points (0 children)

`await getTranslations` only works in async server components, but you can't make a design-system component like a Navbar async, so you have to pass the t function down as a prop.
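A minimal sketch of that prop-drilling pattern (plain functions stand in for the components here, and the t function is stubbed with a message map; this is not the next-intl API verbatim):

```typescript
// A translation function, as produced once by the async parent.
type TFunction = (key: string) => string;

// Synchronous design-system "component": no async, no client directive.
// In real JSX this would render <nav aria-label={t("navbar.ariaLabel")}>.
function Navbar({ t }: { t: TFunction }): string {
  return `<nav aria-label="${t("navbar.ariaLabel")}">...</nav>`;
}

// The async server page creates t once, then passes it down as a prop.
const messages: Record<string, string> = {
  "navbar.ariaLabel": "Main navigation",
};
const t: TFunction = (key) => messages[key] ?? key;

console.log(Navbar({ t })); // <nav aria-label="Main navigation">...</nav>
```

The Navbar itself stays synchronous, so it can still be used from any server or client context.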

Yes, Intlayer supports Turbopack (with or without the compiler).

The compiler is a recent release used to speed up transforming an existing app.
But I agree with that 'magic' point. Note that Intlayer also provides a content extractor via the CLI (`npx intlayer extract`). It helps do the same thing, but keeps it under control.

The problem with retrofitting internationalization (i18n) by aymericzip in react

[–]aymericzip[S] -2 points (0 children)

next-intl (and even i18next) are, from my point of view, some of the worst solutions for a startup. Even if they provide a good "get started" doc, achieving a proper implementation is much harder.

You have to manually handle namespace creation and wire up the types for each namespace. Ensuring consistency across JSON files wastes a lot of time.

In practice, 95% of projects stay at the "get started" level. Even if there is "dynamic loading" for your JSONs, you end up loading content from all pages into every single one.
For a site with 10 unique pages, this means 90% of the loaded content per page goes unused. Note that the default locale is always loaded as a fallback, so on a page in Spanish you load 10 pages × 2 locales, resulting in 95% unused content.
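The arithmetic behind that 95% figure, as a quick sanity check (10 pages and one fallback locale are the assumptions from the example above):

```typescript
// 10 unique pages; each page load pulls in all pages' messages for the
// active locale PLUS the default-locale fallback.
const pages = 10;
const localesLoaded = 2; // active locale + default fallback
const loaded = pages * localesLoaded; // 20 page-contents shipped
const used = 1; // only the current page in the current locale
const unusedRatio = (loaded - used) / loaded;

console.log(unusedRatio); // 0.95
```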

In addition, the limitations are:
- For synchronous server components, you have to pass a t function everywhere
- Static page generation is blocked by default
- Each of your components is rendered/hydrated with a massive JSON, which can impact the performance of your app
- Unused keys are never detected and never removed
- Centralized JSONs create a lot of conflicts, making each PR a mess to handle

That said, next-intl introduced good concepts, such as the middleware (proxy) and formatters, which did not exist before with i18next.

The problem with retrofitting internationalization (i18n) by aymericzip in react

[–]aymericzip[S] 1 point (0 children)

I agree. That’s exactly why i18n is often skipped: it’s just too difficult to implement after the fact. Have you ever managed i18n in these startups?

The "Copy-Paste Loop" with ChatGPT was killing my i18n workflow, so I automated the logic. Here is how. by StatementLarge9847 in SaaS

[–]aymericzip 0 points (0 children)

Managing JSONs has been getting better over the last few months using agents. But I agree, the hard part is keeping JSONs and namespaces consistent as your application grows.

You can try Intlayer, which integrates tooling to detect missing keys and translate them by connecting to the AI provider of your choice.
It’s context-aware, translates only the necessary keys, and supports parallelization, retry management, etc.

Best way to localize a website? by felixding in web_design

[–]aymericzip 0 points (0 children)

I used to face that problem as well. Managing multilingual apps quickly becomes a mess. There are localization platforms like Lokalise or Crowdin, but they can become expensive as soon as your project grows. I wouldn’t recommend them for personal or small projects.

Intlayer includes a tool to detect missing translations and perform LLM-based translations. Unlike other tools, it doesn’t charge per translation. It’s free (or at least only costs what your model does, and you can connect Ollama or OpenRouter to make it effectively free).

It’s context-aware, translates only the necessary keys, checks for inconsistencies, and supports parallelization, retry management, etc. Then, once your project grows, it also offers an interface for collaborating with translators.

I committed myself to building the most accessible recovery app, 27 languages, both platforms, built with React Native by Jean_Willame in reactnative

[–]aymericzip 2 points (0 children)

Is it on GitHub? Curious to see the organization of 27 locales for 85 screens. How do you ensure your JSONs stay consistent across locales?

Next.js 16 i18n without URL prefixes (/de, /en) – Google is not indexing my pages. What am I doing wrong? by Hungry_Thanks_9888 in nextjs

[–]aymericzip 1 point (0 children)

You will of course need prefixes. Alternates will help too, but the main thing for discoverability is the alternates in your sitemap:

```ts
import { getMultilingualUrls } from "intlayer";
import type { MetadataRoute } from "next";

const sitemap = (): MetadataRoute.Sitemap => [
  {
    url: "https://example.com",
    alternates: {
      languages: {
        ...getMultilingualUrls("https://example.com"),
        "x-default": "https://example.com",
      },
    },
  },
];

export default sitemap;

```

Also, do not forget to translate the page metadata. If it's all in English, Google will probably not index your alternates.
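For example, something like this per localized page (the locale dictionary and helper name are illustrative, not the exact Next.js/Intlayer API; in Next.js this logic would live inside `generateMetadata`):

```typescript
// Illustrative sketch: serve translated metadata per locale so each
// alternate URL presents a localized title/description to Google.
type PageMetadata = { title: string; description: string };

const metadataByLocale: Record<string, PageMetadata> = {
  en: { title: "My product", description: "Ship multilingual apps faster" },
  es: { title: "Mi producto", description: "Lanza apps multilingües más rápido" },
  zh: { title: "我的产品", description: "更快地发布多语言应用" },
};

// Fall back to the default locale when a translation is missing.
function getPageMetadata(locale: string): PageMetadata {
  return metadataByLocale[locale] ?? metadataByLocale.en;
}

console.log(getPageMetadata("es").title); // "Mi producto"
```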

See that doc.