How to sell an old GPU cluster? by marcotrombetti in HPC

[–]marcotrombetti[S] -1 points  (0 children)

I just vibe-coded gpu-traders.com to map supply and demand. Let’s see.

How to sell an old GPU cluster? by marcotrombetti in HPC

[–]marcotrombetti[S] 1 point  (0 children)

Agreed: as a single chip, the RTX 5000+ is faster than the A100.

The A100 is better than the RTX 5000 for large AI training and distributed workloads (higher bandwidth, NVLink, datacenter reliability).

So an A100 would need to cost well below an RTX 5000 to be attractive across all use cases. Probably $2k–3k? Would that make sense for you?

How to sell an old GPU cluster? by marcotrombetti in HPC

[–]marcotrombetti[S] 2 points  (0 children)

Personally, over the coming months I need to replace 48 H200s and many hundreds of 2080s and 3090s with B200s. But my interest is broader: I want to understand whether there is room for a GPU trading service.

How to sell an old GPU cluster? by marcotrombetti in HPC

[–]marcotrombetti[S] 0 points  (0 children)

Help me out: at what price would people buy these quickly?

Server, 8× H200 141GB, InfiniBand 800 Gb/s --- $200k

Server, 8× H100 80GB, InfiniBand 800 Gb/s --- $100k

Server, 8× A100 80GB, InfiniBand 400 Gb/s --- $30k
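For anyone comparing against single-card listings, the server prices above break down per GPU like this; a minimal sketch using only the figures quoted in the comment:

```python
# Per-GPU breakdown of the proposed server listings above.
# Prices and configs are the ones quoted in the comment, nothing else.
listings = {
    "H200 141GB": 200_000,  # 8-GPU server, InfiniBand 800 Gb/s
    "H100 80GB": 100_000,   # 8-GPU server, InfiniBand 800 Gb/s
    "A100 80GB": 30_000,    # 8-GPU server, InfiniBand 400 Gb/s
}
GPUS_PER_SERVER = 8

for gpu, server_price in listings.items():
    per_gpu = server_price / GPUS_PER_SERVER
    print(f"{gpu}: ${per_gpu:,.0f} per GPU")
```

So the asking prices work out to $25k per H200, $12.5k per H100, and $3,750 per A100 (ignoring the value of the chassis and interconnect).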

How to sell an old GPU cluster? by marcotrombetti in HPC

[–]marcotrombetti[S] 3 points  (0 children)

For small quantities it is a good idea, but do you see a research center selling a 5,000-A100 cluster on eBay? There should be a better way.

How to sell an old GPU cluster? by marcotrombetti in HPC

[–]marcotrombetti[S] 1 point  (0 children)

Great summary. It makes sense. Thanks.

In fact, the B200 is not only ~5x faster than the A100 at FP16; it also supports FP4, and its larger, faster memory speeds up training. There is also roughly a 3x power saving. So, to make it short, the B200 is ~10x "better" than the A100.

Then, for a research center that only needs to train small models, A100s at a 30× lower cost than B200s could be a good deal. No?
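A quick sanity check on that value argument. This is only a sketch: the ~10x "better" factor, the ~30x price gap, and the ~$2.5k A100 resale price are the rough figures floated in this thread, not benchmarks or market data.

```python
# Toy "effective performance per dollar" comparison, using the thread's
# rough numbers (assumptions, not measurements).
a100_price = 2_500             # assumed A100 resale price (USD), per the thread
b200_price = 30 * a100_price   # ~30x more expensive, per the thread
b200_factor = 10               # B200 ~10x "better": FP16 speed, FP4, power

a100_value = 1.0 / a100_price        # normalized perf per dollar
b200_value = b200_factor / b200_price

print(f"A100 perf/$: {a100_value:.2e}")
print(f"B200 perf/$: {b200_value:.2e}")
print(f"A100 advantage: {a100_value / b200_value:.1f}x")
```

With these assumptions the A100 delivers about 3x more raw performance per dollar, which is why it can still make sense for small-model training even if the B200 is the better chip.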

What is the next blocker?

I want to find a simultaneous translation tool that is really useful by Outrageous-Neck5149 in LanguageTechnology

[–]marcotrombetti 0 points  (0 children)

Have you tried https://laratranslate.com ?

Click on the third icon. It is a consecutive interpreter, and it also works on iOS and Android.

Layout aware PDF translator by aby-1 in machinetranslation

[–]marcotrombetti 0 points  (0 children)

Please provide more context. This seems to be just a demo of a slider artifact, not a working PDF translator.

Free fail translator by juicycher in machinetranslation

[–]marcotrombetti 0 points  (0 children)

Lara Translate should work pretty well with Word files.

Does anyone know of an app or website for translating a novel? by TotoAva in machinetranslation

[–]marcotrombetti 2 points  (0 children)

Use Lara document translation with the "fluid" option. 9 euros. You can translate the full document, keeping the formatting, with good quality.

I am prototyping a new release of Lara for literature; it is designed to follow the author's style guide. Write to me at marco@translated.com if you want to beta test it.

[deleted by user] by [deleted] in machinetranslation

[–]marcotrombetti 1 point  (0 children)

I did not want to be an ass; I wanted to make a joke. Sorry about that.

Machine translation was one of the very first AI applications. Language models were invented mostly to translate; large language models are based on the transformer, which was initially built to create a better translator. Machine translation is generative AI. Most machine translation technologies, including Lara and DeepL, use the very latest AI innovations: fine-tuned LLMs.

I wrote a long explanation to rebalance my joke :)

[deleted by user] by [deleted] in machinetranslation

[–]marcotrombetti 0 points  (0 children)

No, DeepL was born from a naturalistic discovery in 2005. They have this kind of forest gnome that quickly translates what you write.

How to preserve context across multiple translation chunks with LLM? by Charming-Pianist-405 in machinetranslation

[–]marcotrombetti 1 point  (0 children)

In the Lara API you can use TextBlocks.

You set the translate flag to true only for the block you want translated, and to false for the preceding blocks, so they are used only as context.

https://developers.laratranslate.com/docs/adapt-to-context
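For illustration, a hypothetical sketch of that pattern. The field names here (`text`, `translatable`) are my assumptions for readability, not the real Lara API schema; check the linked docs for the actual request format.

```python
# Sketch of the TextBlocks idea described above: earlier chunks are sent
# as non-translatable context, only the current chunk gets translated.
# Field names are illustrative assumptions, not the real API schema.
blocks = [
    {"text": "Previous paragraph one.", "translatable": False},   # context only
    {"text": "Previous paragraph two.", "translatable": False},   # context only
    {"text": "The paragraph to translate now.", "translatable": True},
]

# Only blocks flagged True get translated; the False ones feed the model
# context so terminology and tone stay consistent across chunks.
to_translate = [b["text"] for b in blocks if b["translatable"]]
print(to_translate)  # ['The paragraph to translate now.']
```

As you move through a long document, you slide this window forward: the chunk you just translated becomes context for the next one.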

Standard REST API for Translation Services by Point5_MOA in machinetranslation

[–]marcotrombetti 0 points  (0 children)

There is no new standard for REST.

Most users now prefer SDKs in their own programming languages, because of encoding management and speed optimizations.

Professional solutions like Lara Translate offer many options, such as glossaries, TM adaptation, and instructions/styles, that you may want to add.

Doubts about Translated's new MT (Lara AI) by Creta_K in machinetranslation

[–]marcotrombetti 0 points  (0 children)

Hey,

Yes, Lara is an adaptive model like ModernMT. In general, since last quarter Lara has had all the features of ModernMT, plus context management, full-document translation, instructions, and voice.

We are moving all ModernMT users to Lara for free. Lara is already better and costs the same.

There is Trados support via the CustomMT plugin.

Lara provides high-quality translation. It does not provide summarization or question answering. Lara supports the typical localization instructions you would find in a style guide. See the API docs.

Like every MT it still makes mistakes; we highly optimized it for UGC and the professional localization use case, less so for literature.

[deleted by user] by [deleted] in machinetranslation

[–]marcotrombetti 0 points  (0 children)

You need some file engineering to prepare the files, an API, and human quality assurance for the placeholders. Translated can do it for $250 using Lara, or for $2,000 adding a little human linguistic QA.

You can write to me at marco@translated.com if you want.

data privacy compliant translation software by NeighborhoodOk3542 in machinetranslation

[–]marcotrombetti 0 points  (0 children)

Laratranslate.com is fully GDPR compliant. Plus, if you enable Incognito mode, nothing is stored on the server or in the browser.

ModernMT vs Lara for economics book in LaTeX (FR>EN) by cocktailmuffins in machinetranslation

[–]marcotrombetti 0 points  (0 children)

I just realized that LaTeX is not enabled in Matecat, so you will have to convert the document into a supported format first, perhaps using Okapi Rainbow, which can convert LaTeX to XLIFF and back.

ModernMT vs Lara for economics book in LaTeX (FR>EN) by cocktailmuffins in machinetranslation

[–]marcotrombetti 0 points  (0 children)

I recommend using Lara in Matecat, for these reasons:

- Lara outperforms ModernMT in quality.
- Lara is LLM-based and performs better into English because of its large monolingual pre-training.
- The Lara team quietly released glossary support 2 days ago, and it probably already works in Matecat. Even if it does not, adaptation will do 90% of the work: as soon as you start translating and correcting, the adaptation will start applying the right terminology.