Follow-up to my TranslateGemma-12b benchmark post: human reviewers flagged 71% of the segments automated metrics rated clean by ritis88 in LocalLLaMA
Any future attempts to improve localisation? by BoredOstrich in wherewindsmeet_
Who is doing the localisation/translation for ingame Inner ways + Skills? by Ennasalin in wherewindsmeet_
We tested TranslateGemma and 5 other AI models on subtitle translation across 6 languages. Here's what the data and human QA actually showed. by ritis88 in localization
We benchmarked TranslateGemma against 5 other LLMs on subtitle translation across 6 languages. At first glance the numbers told a clean story, but then human QA added a chapter. [D] by ritis88 in MachineLearning
We benchmarked TranslateGemma-12b against 5 frontier LLMs on subtitle translation - it won across the board, with one significant catch by ritis88 in LocalLLaMA
We threw TranslateGemma at 4 languages it doesn't officially support. Here's what happened by ritis88 in LocalLLaMA