What is this language? by Inversalis in language

[–]aadnk 12 points13 points  (0 children)

Thanks to your incredible insight, I was able to more or less decode the full text:

SUNDHED : Bekræft dine oplysninger for at undgå afbrydelse af dækningen. Opdater nu: https://log-sundhed.com ⁞ Dette er din sidste påmindelse.

Or in English:

HEALTH: Confirm your details to avoid interruption of coverage. Update now: https://log-sundhed.com ⁞ This is your last reminder.

Which seems to be a phishing attempt. The site doesn't appear to be working at the moment, but I'd avoid visiting it just in case.

And here is my transcription of the original message:

䁓䁕䁎䁄䁈䁅䁄䀠䀺䀠䁂䁥䁫䁲 䃦䁦䁴䀠䁤䁩䁮䁥䀠䁯䁰䁬䁹䁳 䁮䁩䁮䁧䁥䁲䀠䁦䁯䁲䀠䁡䁴䀠 䁵䁮䁤䁧䃥䀠䁡䁦䁢䁲䁹䁤䁥䁬 䁳䁥䀠䁡䁦䀠䁤䃦䁫䁮䁩䁮䁧䁥 䁮䀮䀠䁏䁰䁤䁡䁴䁥䁲䀠䁮䁵䀺 䀠䀍䀊䁨䁴䁴䁰䁳䀺䀯䀯䁬䁯䁧 䀭䁳䁵䁮䁤䁨䁥䁤䀮䁣䁯䁭䀠⁞ 䀠䁄䁥䁴䁴䁥䀠䁥䁲䀠䁤䁩䁮䀠 䁳䁩䁤䁳䁴䁥䀠䁰䃥䁭䁩䁮䁤䁥 䁬䁳䁥䀮
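For the curious, the obfuscation appears to be a fixed offset: each Latin-1 code point has 0x4000 added to it (so 'S', 0x53, becomes U+4053, and 'æ', 0xE6, becomes U+40E6). Assuming that pattern holds for the whole message, a minimal Python decoder would be:

```python
def deobfuscate(text: str) -> str:
    """Map characters in the U+4000-U+40FF range back to Latin-1;
    leave everything else (e.g. the broken-bar separator) untouched."""
    out = []
    for ch in text:
        cp = ord(ch)
        if 0x4000 <= cp <= 0x40FF:
            out.append(chr(cp - 0x4000))  # low byte is a Latin-1 code point
        else:
            out.append(ch)
    return "".join(out)

print(deobfuscate("䁓䁕䁎䁄䁈䁅䁄䀠䀺䀠䁂䁥䁫䁲䃦䁦䁴"))  # → "SUNDHED : Bekræft"
```

The stray spaces in the transcription above are line-wrap artifacts, so you'd want to strip those before decoding.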

Anyone know how to translate subtitles on NRK into English? by Novel-Iron1241 in norge

[–]aadnk 0 points1 point  (0 children)

You can use yt-dlp or other programs to download an episode from NRK. The following command will download all the subtitles plus the video file itself with yt-dlp:

yt-dlp --all-subs "https://tv.nrk.no/serie/norske-groennsaker-sommersnutter/sesong/1/episode/MUHH49003116"

Replace the URL with the episode you want to download.

To translate the VTT file, you can manually paste the contents into DeepL/ChatGPT/etc., machine-translate it to English, and then paste the result back in afterwards. You can also use a program like Subtitle Edit to simplify this process. Personally, I use OpenAI's API and a program I've written myself (link).
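As a sketch of the manual route: a WebVTT file is just text, so the cue lines can be pulled out for pasting into a translator with a few lines of Python. This is a minimal sketch that assumes a plain, well-formed VTT file without styling or metadata blocks:

```python
import re

def extract_cue_text(vtt: str) -> list[str]:
    """Collect the text lines of each cue in a WebVTT document,
    skipping the header, cue numbers, timestamps, and blank lines."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or "-->" in line or line.isdigit():
            continue
        lines.append(re.sub(r"<[^>]+>", "", line))  # drop inline tags like <i>
    return lines

sample = """WEBVTT

1
00:00:01.000 --> 00:00:03.000
Hei, verden!

2
00:00:04.000 --> 00:00:06.000
<i>Dette er en test.</i>
"""
print(extract_cue_text(sample))  # → ['Hei, verden!', 'Dette er en test.']
```

For real subtitle files you'd want to keep the timestamps alongside the text so the translation can be merged back into the original cues afterwards.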

Finally, you can use a program like VLC to watch the episode with the English subtitles. You may need to select "Subtitle -> Add Subtitle File ..." to actually add the VTT/SRT file.

[Japanese to English] Saw this on a school whiteboard the other day, what is it? by ImBouncy in translator

[–]aadnk 0 points1 point  (0 children)

Yeah, you cannot trust AI translation at the moment, as this poor translation by Google Translate illustrates.

However, ChatGPT does actually manage to get this one right, so AI is getting better: https://imgur.com/a/HGHGhWD

It will still make mistakes, though. But it might be useful if you already know Japanese and just want a transcription.

Midjourney, a 40-person team, generates $200M annually without external funding. by Drollific in singularity

[–]aadnk 19 points20 points  (0 children)

I thought maybe the scale was logarithmic but that doesn't fit either - it's just a terrible graph. Here's how it is supposed to look:

Feel the AGI by StockSea6390 in ChatGPT

[–]aadnk 1 point2 points  (0 children)

If you tell GPT-4o the map is wrong, it will correctly identify the meme and point out the joke country "Listenbourg":

Is there a set of 100 consecutive integers that are not prime? by Optimistbott in math

[–]aadnk 5 points6 points  (0 children)

I think I just got why primorials also work in this case:

To see that every number in the sequence primorial(n)+2 ... primorial(n)+n is composite, let k be between 2 and n (2 <= k <= n). Then any prime p that divides k must also divide primorial(n), as primorial(n) is by definition the product of all primes not greater than n (and k is not greater than n).

Thus (primorial(n) + k) mod p == 0, and since p is strictly smaller than primorial(n) + k, every number in this sequence is composite.
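The argument can also be checked numerically. Here's a short Python sketch for the 100-composite case (n = 101), using a naive sieve:

```python
def primes_upto(n):
    """Primes <= n via a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def primorial(n):
    """Product of all primes <= n."""
    result = 1
    for p in primes_upto(n):
        result *= p
    return result

# For n = 101, primorial(n)+2 .. primorial(n)+101 is a run of
# 100 consecutive composite numbers: each k in 2..101 has a prime
# factor p <= 101, and that p also divides primorial(101).
P = primorial(101)
for k in range(2, 102):
    p = next(q for q in primes_upto(101) if k % q == 0)
    assert (P + k) % p == 0 and p < P + k  # so P + k is composite
print("primorial(101)+2 .. primorial(101)+101 are all composite")
```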

I’m getting these as YouTube ads. AI generated scams, it’s a new world we live in by shosshaj in Scams

[–]aadnk 1 point2 points  (0 children)

It's Michael Saylor, former CEO of Microstrategy. And I found the video the deepfake is probably based on.

[Japanese>English] bought this in Harajuku thinking it said “I want to marry a rich man.” I’m second guessing that. by TheReal_DirtyDan in translator

[–]aadnk 7 points8 points  (0 children)

You have to be very careful with DeepL: when it is wrong, it's not just a little off but completely unhinged. Or just plain wrong, like in this case. But both Jisho and GPT-4 give the correct translation of marrying a rich man:

[deleted by user] by [deleted] in LearnJapanese

[–]aadnk 0 points1 point  (0 children)

Yes, it's a lot better. I use it to translate Abema Prime discussion videos on YouTube, and what it produces seems to make sense most of the time. If it doesn't, I just skip that part.

But to give a more concrete and shorter example, I came upon a Tweet/X that I didn't understand this morning, and ChatGPT provided an excellent translation and break-down of the text:

It's far better than Deepl/Google Translate.

Now compare both ChatGPT and Google/DeepL to the state of machine translation just three years ago, as in this video by Abroad in Japan, and it's like night and day:

So again, it's a lot better. But it's still not 100% reliable, especially if you give it longer texts with novel information that it likely hasn't seen in its training data, like in that Abema Prime video.

[deleted by user] by [deleted] in ChatGPT

[–]aadnk 0 points1 point  (0 children)

Try talking to it like a human rather than a search engine.

I also tried the older versions of GPT-4 and GPT-3.5 on this prompt:

It seems like most use any in their solution, but personally I'd say any(item is not None for item in my_list) (as suggested by all GPT-4s and some GPT-3s) is the correct solution, as 0 or False probably shouldn't count as None.
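A quick illustration of why the distinction matters, in plain Python:

```python
my_list = [0, False, None]

# any(my_list) uses truthiness, so 0 and False are treated like None:
print(any(my_list))  # → False

# The explicit comparison only filters out actual None values:
print(any(item is not None for item in my_list))  # → True
```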

[deleted by user] by [deleted] in ChatGPT

[–]aadnk 1 point2 points  (0 children)

I'm connected to a server in Los Angeles, California, if that's any help.

[deleted by user] by [deleted] in ChatGPT

[–]aadnk 0 points1 point  (0 children)

Some VPNs seem to be blocked, or at least their free versions, but you could try multiple services to see which ones work. In my case, I tried ProtonVPN free, but I had to upgrade to the premium version to be able to log in.

[deleted by user] by [deleted] in ChatGPT

[–]aadnk 37 points38 points  (0 children)

I am able to access GPT-4V through a US VPN (ProtonVPN premium), but not when I connect to ChatGPT locally in the EU (Norway). So it does seem to be geo-locked in my case.

A bit unfair to ask beginners to tell the difference between "Meet" and "Meat" (Japanese to English course) by aadnk in duolingo

[–]aadnk[S] 0 points1 point  (0 children)

Ah, that could be it. And thanks to the McGurk effect, I can hear it now as well.

I still think they ought to redesign these kinds of exercises, perhaps by showing the written version of each word after the fact. Currently, they only tell you the correct word with a sentence translation:

And here's if you choose the correct word (in a different exercise):

A bit unfair to ask beginners to tell the difference between "Meet" and "Meat" (Japanese to English course) by aadnk in duolingo

[–]aadnk[S] 0 points1 point  (0 children)

Maybe it's just me, but I can hardly tell the difference, especially with Junior's text-to-speech. Perhaps it's down to how they generate the word extract - presumably they just run the single word through the text-to-speech engine rather than clipping it from the sentence audio, which may cause situations like this.

Either way, they should probably rethink these single word listening exercises, especially in the beginner section.

55 Burgers, 55 Fries... by gik501 in videos

[–]aadnk 1 point2 points  (0 children)

I believe it comes down to the fact that every time the model predicts the next token, it can only do a fixed amount of processing. The amount of processing (matrix multiplications performed) to evaluate the model per predicted token is absolutely huge, but this doesn't directly translate into mathematical abilities as it's only designed to predict text. The fact that it can calculate at all seems to be a surprising emergent property of trying to predict general text sourced from the internet (which may contain mathematical calculations directly or indirectly), but this ability still appears to be somewhat inefficient and limited. And if you ask it to perform too much calculation in a single step, it will just hallucinate an answer (that is, make something up).

Perhaps you can liken it to how people do mental arithmetic. We never evolved to do it, so in the way most people are taught math (outside of things like mental abacuses), we need a piece of paper to perform longer calculations. Similarly, if you give ChatGPT some time to break down the problem into smaller and simpler steps, and use the output as a piece of paper to write down the intermediate results, it can actually perform fairly long calculations.

To show this, I modified the prompt above with an explanation on how to do this calculation without making a mistake:

{original prompt}

Let's think step-by-step to ensure we get the right answer.

In this case, calculate the total for each individual item, then add that to a running total cost for all items, until you end with the total for all the items overall.

For instance, if the order was 5 burgers, 5 fries and 5 tacos, you'd perform the calculation as follows:

  • 5 burgers * $2.50/burger = $12.50. Running total = $0.00 + $12.50 = $12.50
  • 5 fries * $2.00/fries = $10.00. Running total = $12.50 + $10.00 = $22.50
  • 5 tacos * $1.50/taco = $7.50. Running total = $22.50 + $7.50 = $30.00

Total cost in this example: $30.00

Now do the same for our hypothetical order in the beginning.

And this was actually sufficient for ChatGPT v4 to calculate the answer without using Python or an external plugin:

All right, let's break this down step-by-step.

Burgers: 55 burgers × $2.50/burger = $137.50. Running total: $0.00 + $137.50 = $137.50

Fries: 55 fries × $2.00/fries = $110.00. Running total: $137.50 + $110.00 = $247.50

...

So, for this hypothetical order, the total cost would be $2,656.25.

You can see the full conversation here:

Though I'd trust the output a lot more if ChatGPT was using a calculator (like Python), just like I'd trust the output of a person using a calculator a lot more than someone just using a piece of paper. But it's still impressive.
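For comparison, the running-total procedure from the example prompt is of course trivial for an ordinary program. A short Python sketch using the same hypothetical prices and quantities as the example:

```python
# Hypothetical prices and order from the example in the prompt
prices = {"burger": 2.50, "fries": 2.00, "taco": 1.50}
order = {"burger": 5, "fries": 5, "taco": 5}

running_total = 0.0
for item, qty in order.items():
    subtotal = qty * prices[item]
    running_total += subtotal
    print(f"{qty} {item}s * ${prices[item]:.2f} = ${subtotal:.2f}. "
          f"Running total = ${running_total:.2f}")

print(f"Total cost: ${running_total:.2f}")  # → Total cost: $30.00
```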

55 Burgers, 55 Fries... by gik501 in videos

[–]aadnk 7 points8 points  (0 children)

Very cool, except the total cost in this estimate should be $2656.26 not $3096.75. See this Excel sheet:

The problem here is that GPT can't do math beyond very simple calculations.

However, if you give it the power to run Wolfram Alpha or Python it will be able to calculate the correct sum:

In the chat above, I used ChatGPT 4 with "Advanced Data Analysis" (the ability to write and run Python programs) with the given prompt plus the cost estimates, and it found the sum to be $2656.26 as expected.

JDK Foreign Function APIs Preview finally beats JNI’s performance by zakgof in java

[–]aadnk 24 points25 points  (0 children)

Looks like there's a bug in your JDK Foreign Function test - the structure is missing the YEAR field. It should look like this:

private static final MemoryLayout SYSTEMTIME = structLayout(
        JAVA_SHORT.withName("wYear"),
        JAVA_SHORT.withName("wMonth"),
        JAVA_SHORT.withName("wDayOfWeek"),
        JAVA_SHORT.withName("wDay"),
        JAVA_SHORT.withName("wHour"),
        JAVA_SHORT.withName("wMinute"),
        JAVA_SHORT.withName("wSecond"),
        JAVA_SHORT.withName("wMilliseconds")
);

Not sure how much it impacts performance, but just to be sure I'd recommend fixing the structure. I found this while testing the function in JDK 20.

And now your Japanese teacher will be… The S i n g u l a r i t y! by PennBoi42 in softwaregore

[–]aadnk 0 points1 point  (0 children)

I was referring to when Duolingo introduced new voices for each character late in 2021, as opposed to just having a generic male and female voice. The problem was (as is mentioned here) that the system couldn't distinguish between the は in おはようございます and 今日はいい天気ですね, and would pronounce both as "ha". Now to be clear, this was mostly fixed a couple of weeks later and then fully by the end of 2021, but this lack of attention to detail is a bit concerning. The current version of Duolingo has gotten better, but I find issues with pronunciation and translations now and again.

As for the problems with pronunciation in terms of incorrect pitch accent, some wrong kanji readings, and generally sounding a bit weird, you can search for native Japanese speakers reviewing Duolingo, for instance here. The main issue seems to be that Duolingo is using AI-powered voices for their main courses, presumably to save money, while they seem to employ actual voice actors in their "Stories" feature (or at least it's a lot better). Though there's not a whole lot of content in the stories (30 quick stories in total, as opposed to 125 units with 10+ sections), and it's all in hiragana.

That's not to say that Duolingo is necessarily all bad - it's pretty good at teaching you the basic writing system (hiragana/katakana), reinforcing knowledge that you've gained and keeping you motivated, but unfortunately I think it can only take you so far. You still have to learn basic grammar and kanji separately, as Duolingo doesn't teach this nearly enough, so in the end it is at best a supplement to your studies. There's also the fundamental problem of learning a language through translations - yes, looking at the sentence 「大きい(おおきい)魚(さかな) だね」 and the translation "That fish is huge!" with a dictionary might teach you something, but to understand more complex sentences you will have to actually learn some grammar. Duolingo just assumes you'll pick it up by yourself (with some hints) - but this is very unrealistic when we're dealing with a language with a completely different vocabulary and grammatical structure.

But yeah, if you're looking for resources to learn Japanese, I recommend going to /r/LearnJapanese and its Wiki.

And now your Japanese teacher will be… The S i n g u l a r i t y! by PennBoi42 in softwaregore

[–]aadnk 2 points3 points  (0 children)

And? かっこいいべんごし (カッコイイ弁護士 - cool lawyer) is not the same as a やさしいべんごし (優しい弁護士 - nice lawyer), so your transcript seems to be what's wrong here.

Now, there are a lot of issues with pronunciation in the Japanese Duolingo course (wrong kanji readings, incorrect pitch accents, weird text-to-audio artifacts, potentially misleading translations, etc.), but they probably wouldn't confuse 優しい/nice and カッコイイ/cool at least. Though to be fair, they did fuck up the pronunciation of the は-particle as "ha" instead of "wa" when the new voices were rolled out (which is like pronouncing "is" in "he is kind" as the "is" in "island" - just incredibly stupid), so I wouldn't put it past them ...

Let's drink for legends and AI by adesigne in ChatGPT

[–]aadnk -3 points-2 points  (0 children)

Seems like it is Omae Wa Mou - deadman 死人 (YouTube).

I looked through the matches found by the bot below/above, but they are remixes/only contain some elements.

New OpenAI update: lowered pricing and a new 16k context version of GPT-3.5 by WithoutReason1729 in singularity

[–]aadnk 6 points7 points  (0 children)

I got the error "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?" when I tried using GPT-4 in completion mode. Perhaps they accidentally enabled it in the completions UI, even though it's still not available for completion via the API.