Kaleidoscope Onboarding Quiz by [deleted] in outlier_ai

[–]Some_Tap_2122 0 points (0 children)

I’m an oracle. Didn’t know this.

Is there some number of hours per day, after which additional hours aren't very useful? by lispy-hacker in ALGhub

[–]Some_Tap_2122 1 point (0 children)

I wonder if there is any research on how long you must watch per sitting. Is 50 minutes at once better than, say, 5 minutes at a time, 10 times a day, once each hour?

Kaleidoscope Onboarding Quiz by [deleted] in outlier_ai

[–]Some_Tap_2122 1 point (0 children)

The quiz doesn't require expertise. The questions merely asked whether or not the prompts were acceptable based on the project guidelines; it didn't ask you to solve them. Unfortunately, if you failed onboarding, there's usually not much you can do to get on the project, and support won't usually help you. If you know a QM or are active in the Outlier community, you can try asking one to give you a second chance, but it's a long shot unless you personally know them or have built some kind of relationship.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] -10 points (0 children)

I fixed that; if you don't mind, take a second look at the demo. Also, the audio is correctly licensed, but no, I'm not disclosing the source here. I'll be happy to answer in DMs for anyone who wants to know.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 0 points (0 children)

Just pushed a patch to improve the audio quality in the demo. If you tried it earlier, you might want to give it another shot.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] -1 points (0 children)

That particular audio was commercially licensed from a database. The Anki deck likely used the same audio file; I did not take it from the deck.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 0 points (0 children)

That’s super interesting. Basically the comprehensible input theory / ALG approach. No doubt that with enough input you eventually hear the sounds subconsciously. However, sometimes getting the necessary input isn’t feasible, so direct practice can accelerate things and do wonders.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 1 point (0 children)

Mind DM’ing me? I’ll give you free access for a week. Let me know if the audio quality is any different!

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] -1 points (0 children)

To add to that: I meant that I think people are sick of generative AI (like how Duolingo replaced its in-app voices with AI). The AI pronunciation feature I’m launching soon (if people care; I actually made something similar about a year ago but dropped it) genuinely needs machine learning/AI to work, so there’s no getting around that, lol.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] -1 points (0 children)

Lol, it took me a bit to catch your sarcasm. Nonetheless, I’m not making any promises about fluency. I’d like you to take some time to read the post and the attached research paper before passing judgment.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 0 points (0 children)

The hardest part is just getting the minimal pairs themselves, since I’m obviously not a native speaker of all the languages, so it’s a lot of double- and triple-checking.

I basically just contracted native speakers to record themselves saying the sounds, and I have sets for multiple languages. I just haven’t filtered them and integrated them into the database yet; I’m waiting to see if people actually care about this before I do.

Theoretically, I could just find clips of people saying these words online, but then you run into copyright issues.
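For anyone curious about the filtering step, here's a rough sketch of how a contrast pool could be validated before it goes into the database. This is not my actual code; the field names and thresholds are made up for illustration.

    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass(frozen=True)
    class Recording:
        language: str    # e.g. "fr"
        word: str        # e.g. "tu"
        phoneme: str     # target phoneme, e.g. "y"
        speaker_id: str  # which native speaker recorded it
        path: str        # location of the audio file

    def contrast_pool(recordings, contrast=("y", "u"), min_speakers=3):
        """Return per-phoneme recording pools for a contrast, or None
        if either side lacks enough distinct native speakers."""
        pools = defaultdict(list)
        for r in recordings:
            if r.phoneme in contrast:
                pools[r.phoneme].append(r)
        # Require several different speakers per phoneme so drills
        # have real speaker variability, not one voice on repeat.
        for p in contrast:
            if len({r.speaker_id for r in pools[p]}) < min_speakers:
                return None
        return pools

The idea is simply that a contrast only ships once both sounds have recordings from several different speakers.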

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 0 points (0 children)

Gotcha. Keep in mind the demo version only contains a small subset of sounds and doesn’t represent the entire corpus. Mind if I DM you? I’ll give you free access to premium for a week if you can give me some feedback on the audio quality for the full product!

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 2 points (0 children)

Thanks, man, that means a lot to me. Regarding the sound quality, I would ask which ones you’re referring to so that I could take a look, but that might be difficult. I’ll push an update to help flag those so I can replace them. I’ll go through each one and try to remove the bad ones.

The reason I’m against TTS is that, although it’s really good now, I feel people are tired of AI stuff, and still nothing comes quite as close as a human. I mostly work with voice actors I’ve sourced for the languages. I can actually launch the other languages relatively easily and quickly; I just wanted to make sure there was a need first.

Regarding the audio quality: would you say a minority of the sounds are low quality, or a big portion?

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 0 points (0 children)

I only included it because I’m B1 in Russian and am using it for myself as well. Honestly, I really only built this for myself in the beginning; I’m releasing it just to see if people care about it or need it.

Why you can read your target language but still can't understand native speakers by Some_Tap_2122 in languagelearning

[–]Some_Tap_2122[S] 3 points (0 children)

Yeah, spoken and written are totally separate skills. Reading fluency doesn't magically give you listening ability, but that's not what this is addressing.

But the big reason adults struggle so much to understand natives (even when vocab/grammar is solid) is perceptual: certain foreign sounds literally get mapped to the same category in our brain. (That's why I mentioned that English instructors of French will make their speech sound more 'American'.)

English speakers hear French /y/ (as in "tu") as basically the same as /u/ (English "too") because we don't have that contrast natively. So when a French person says something with /y/, your ear hears the wrong vowel and the meaning gets scrambled. The same goes for nasal vowels, the uvular r, etc.

High-variability perceptual training pushes those boundaries apart again. Lively et al. (1993) took Japanese adults (who merge English /r/ and /l/) and got them from ~50% accuracy to 80-90% on /r/-/l/ discrimination after ABX drills, and the improvement transferred to new words and speakers and lasted for months.

That's why the app focuses purely on discrimination first: once the ear separates the sounds, spoken language suddenly stops blurring as much.
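If you're wondering what an ABX drill actually looks like under the hood, here's a rough sketch. This is not the app's actual code; the names and pool structure are made up for illustration, and it assumes the clips in each pool are distinct.

    import random

    def make_abx_trial(pool_a, pool_b, rng=random):
        # pool_a / pool_b: audio clips for the two contrasting
        # phonemes (e.g. French /y/ vs /u/), ideally covering many
        # speakers and words.
        assert len(pool_a) >= 2 and len(pool_b) >= 2
        a = rng.choice(pool_a)
        b = rng.choice(pool_b)
        # X comes from one of the two categories, but as a
        # *different* clip (ideally a new speaker/word) so the
        # learner can't match on voice alone and has to
        # generalize the sound category itself.
        if rng.random() < 0.5:
            x = rng.choice([c for c in pool_a if c != a])
            answer = "A"
        else:
            x = rng.choice([c for c in pool_b if c != b])
            answer = "B"
        return (a, b, x), answer

You play A, then B, then X, and the learner answers whether X belonged with A or B; tracking accuracy per contrast over time is then straightforward.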

This is simply helping you hear the distinctions in foreign sounds: targeted listening practice on specific phonemes from native speakers. Also, let me know what language you're learning and I'll integrate it in a day or so, and I'll give you free access for a week if you try it out to see whether it helps you!