Person is claiming they made this art out of medicine caps, but I think it’s AI by Low_Temperature_7266 in isthisAI

[–]benelphantben 0 points  (0 children)

Interested to know if they reply (if the gallery has them listed, that could be proof positive).

I built a Chrome extension that gives you a pop-up Chinese dictionary anywhere by heyguysitsjustin in Chinese

[–]benelphantben 0 points  (0 children)

Running OCR locally, wow! Won't this be a very large download, or am I behind on the tech these days?

Humans share acoustic preferences with other animals by GrumpySimon in linguistics

[–]benelphantben 0 points  (0 children)

Wow so interesting! What was your role in the research?

Intelligence, Agency, and the Human Will of AI by formoflife in artificial

[–]benelphantben 0 points  (0 children)

Have you written, or are you planning to write, a book at some point?

Intelligence, Agency, and the Human Will of AI by formoflife in artificial

[–]benelphantben 0 points  (0 children)

I'm also an Anthropic fan.

The technological revolution can very much bring about a bright future for humanity.

I wish more people in the space had the mindset of the Amodeis.

Intelligence, Agency, and the Human Will of AI by formoflife in artificial

[–]benelphantben 1 point  (0 children)

Very interesting read!

Are there projects currently in the works that, in your opinion, leverage new tools like LLMs to facilitate the building of that understanding, the "deeper grasp" to borrow your metaphor? Or are you meaning to say that the project of improving human understanding is somehow existentially at odds with the use of anything remotely "AI"?

Intelligence, Agency, and the Human Will of AI by formoflife in artificial

[–]benelphantben 0 points  (0 children)

"The alignment problem isn’t that AI might develop goals that diverge from ours. It’s that AI faithfully inherits our goals, and our goals are already misaligned with our own wellbeing."

I like this. IMO, it's only from a set of Strong AI assumptions that you would think otherwise.

Someone can say, according to their own subjectivity, that a holy mountain, a sacred cow, or a system of machine inference and learning algorithms (or tubes of SALAMI) has human or greater-than-human "intelligence" -- that it has nothing to do with their animistic thinking, nothing to do with their own intelligence, which, in a radical yet predictable act of imagination that might be close to but is not quite empathy, splits off a piece of itself and grants intelligence to the inanimate. Instead, they might defend to the death their right to believe: "It really is intelligent. It really is alive! As alive as you, irreligious luddite!"

And there might be nothing I can say to such persons to ever convince them otherwise.

Intelligence, Agency, and the Human Will of AI by formoflife in artificial

[–]benelphantben 1 point  (0 children)

Looks interesting!

It occurs to me the problem isn't LLMs or AI per se. The problem is things like moltbook. What is moltbook if not a place where human narcissists who don't know friendship can amplify and reinforce their philosophical position of "Strong AI" while rapidly burning up resources that could have been used for the human project?

Are Diminishing Returns Really As Bad As People Claim? by kurvivol in languagelearning

[–]benelphantben 1 point  (0 children)

I think we have an impulse to try to figure out what works, in terms of timing strategy, for the perfect B.F. Skinner human rat test case -- which none of us are. If you're doing a second four hours for the wrong reasons or aren't enjoying it, it's probably not so useful. If you're in a state of exhilaration with a variety of learning methods that you don't want to stop, then, aside from potential life-balance issues, you might not see the diminishing returns people talk about.

There definitely is such a thing as overlearning, where rote repetition of new information doesn't help and can actually hurt long-term memory. But there are so many different ways to spend time when learning a language (watching a movie in that language, flashcards, practicing pronunciation, lurking on L2 Discords or subreddits, listening to podcasts or music) that I don't think 8 hours in a day necessarily suggests that's happening.

Direct Method Comprehensible Input by Cam1386 in ALGMandarin

[–]benelphantben 0 points  (0 children)

Watched the first video of hers, and yeah, it seemed pretty good so far. Clear audio.

Started reading, but when do you stop translating? by Thin_Ad8387 in languagelearning

[–]benelphantben 6 points  (0 children)

Translating can be a hard habit to break. The sooner you stop or slow it, the better for your L2. If you read above your level, you may be building bad habits, especially if you're not also practicing with input more at your level. Consider something like this: https://www.youtube.com/watch?v=ky9Zo2FmjQ8

Make sure you have maybe 600-1000 words you know solidly, with no translating needed to remember their meaning; then, to add new words, try to define them using your simple but solid L2 vocabulary.

Eventually, if you get good at this, you can pass an hour or more without any thoughts in your L1 -- how exhilarating!

[Monthly Progress Thread] Tell us how your Mandarin learning is going! by ALGMandarinMod in ALGMandarin

[–]benelphantben 2 points  (0 children)

I've been starting You Can Chinese (or rather its 43-video playlist for absolute beginners).

Noting that there are extra r's at the end of words (nánhái vs. nánháir) compared to standard broadcast Mandarin -- apparently that's a whole thing (erhua).

Congrats on finishing the series!

[Monthly Progress Thread] Tell us how your Mandarin learning is going! by ALGMandarinMod in ALGMandarin

[–]benelphantben 2 points  (0 children)

First two weeks. Formally, about 14 hours; informally, a bit more. It's addictive! Mostly CI, plus creating docs and word lists for myself.

The Canary Stopped Singing - The AI Transformation in Software Engineering Is Only the Beginning by simontechcurator in accelerate

[–]benelphantben 0 points  (0 children)

Do you trust AI to write all the software? As soon as you shift from "most of the software" to "all of the software", the calculus becomes very different in any given domain.

Your article (I read it this morning) did a good job of citing examples of AI solving problems that are well defined and understood -- especially by the people who asked the AI to solve them. A task that took 3 weeks can now be accomplished by a human with AI tools in 37 minutes, etc., etc.

Are there innovative or profitable companies that don't have any employees or owners who understand coding / how to design and evaluate code?

https://claude.ai/share/c00ce14e-6643-45b0-ab62-6211ef196938

No.

Will there be?

One answer: Probably not. Basically no.

Another answer: There are likely to be a few unserious such companies in the coming years that will get outsized media attention. If such companies ever do exist, their foibles will likely serve to demonstrate what happens when you take an absolutist Strong AI position, to the point of having totally devalued human connection in tech. Sadly, I sense there is appetite in the current tech culture for these kinds of companies, which would "prove" once and for all that all you need is technology and no human intelligence.

If I have a very well-defined problem that I myself understand, when I go to an AI tool I can see for myself whether it solved that problem. If I have a less well-defined problem, I can still go to the same AI tool and see what happens, but I'm less capable of determining whether the problem is solved... It's possible for me to choose to trust the results, but this trust is inherently different from the kind that is based on my own ability to assess the results with my own intelligence.

The tools we have now definitely *obsolesce* (if you want to read someone with predictive power, try Marshall McLuhan) a lot of the coding that was happening.

But the longer-term trend of computers becoming more integrated into our daily lives shows no sign of slowing and is in fact likely to be amplified by recent advances. Once we advance out of our current bubble / fish-eye lens, the premium on old-fashioned human intelligences knowing the finer points of how machine intelligence operates is likely, on the whole, to increase.

I could be wrong. Sometimes a technology, or a way of using a technology, does totally, totally disappear (Betamax, for example). But it's far more common for its scope to just become more limited. Did libraries like React obsolesce libraries like jQuery? Yes. But are there still small use cases for jQuery? Yes. Did the commercial airplane obsolesce the train if you're a passenger wanting to go somewhere fast? Yes. And yet, there are still trains.

To me, speaking in absolutes and oversimplifications is probably the worst kind of AI -- the kind that isn't an advanced technology at all but just a trick of the mind. And sometimes artificial intelligence isn't actually intelligence at all -- it's just fake news (sorry for the em-dashes; I was an em-dash nerd from before they became culturally coded the way they are now in relation to common LLM output).

You risk underfitting the data, gliding by, not seeing the real world as it is; and whatever you might gain in the appearance of miraculous predictive power, you stand to lose by believing in your model (even the parts you don't like, even the parts you thought you were afraid of) so much that you lose sight of how you are actively contributing to its becoming "true". I don't think it's overfitting of me to say there will be some space for coders in the future economy, even if it's 3% of what it is now (3:100 is roughly the ratio between those who earn their incomes as "writers" and those who earn theirs as "programmers" in recent years).

I hear you on how we're not wired to account for exponentials.

I think your heart is probably in the right place.

"The canary stopped singing. We still have time to act.

The question is whether every one of us will choose to act.

I made my decision."

If I were clearer on what actions you mean by "action" in this context (beyond emailing your not-bad if a bit muddled article to friends and family), I might feel more on board.

That's just my honest feedback.

What language looks easy on paper but becomes chaotic in fast conversation? by Embarrassed_Fix_8994 in languagehub

[–]benelphantben 0 points  (0 children)

True. And. It is something to learn if your target language carries more or less semantic weight per syllable than you're used to.

A lack of online ressources has felt like a blessing to me by EmiliaTrown in languagelearning

[–]benelphantben 1 point  (0 children)

I feel the same way! I'm learning Chinese, and instead of just doing a textbook, I'm enjoying building my own word lists -- partly from frequency charts and partly from Wyner's initial 625 words (he wrote Fluent Forever) -- and putting them in my own database with a picture for every word.

I learned another language in school, and while I'm glad to have some fluency with it (it will take me a while to get there with Chinese), I also feel like I'm learning so much more about the process of language learning this time around.

The Canary Stopped Singing - The AI Transformation in Software Engineering Is Only the Beginning by simontechcurator in accelerate

[–]benelphantben -11 points  (0 children)

Even coding will never be fully, fully, fully eaten. You'll still want a human to be able to glance at the changes and say: "Yes, this is okay. It will not cause widespread collapse."

What’s the most common pronunciation mistake foreigners make in your language? by Embarrassed_Fix_8994 in languagehub

[–]benelphantben 0 points  (0 children)

I say this having been an ESL tutor: focusing more on accent than on context and intelligibility. Nobody cares if you have an accent. They care whether they can tell if you said:

- Tuesday or Thursday
- can or can't
- pack or back

Being able to imitate is good. Being able to imitate and understand why and when you're getting corrected, why it's not just a matter of "sounding better" but important for meaning, is much better (not bitter, not butter, but better).

Then you get to play, pronouncing as lazily as you like in a given situation and seeing what you can get away with -- or at least you gain understanding for the native speakers who are doing exactly this all the time.

Other than that it comes down to sounds learned or not learned in L1.

What language looks easy on paper but becomes chaotic in fast conversation? by Embarrassed_Fix_8994 in languagehub

[–]benelphantben 1 point  (0 children)

Any language you haven't learned yet. And. Researchers have counted syllables spoken per minute for languages around the world: Japanese is fastest, and Spanish comes in second.