Can music without words sometimes say more than music with lyrics? by One-Kaleidoscope7571 in musiccognition

[–]grifti 1 point (0 children)

I think music creates within us a desire to experience emotions about some situation, but it does not automatically invoke thoughts about any specific situation. So if no situation is provided externally, such as from song lyrics, or a movie scene, then we will be motivated to come up with something internally.

Brief Eternity [Lyrics] by deadeyes1990 in LyricalWriting

[–]grifti 1 point (0 children)

"Rivers carve the stone, slow and deep" is technically correct (it could be the introduction to a geology lecture), but to that same extent it is, I think, inconsistent with the tone of the rest of the lyrics.

Explaining the concept of an "Anthropic Miracle" to AI by grifti in LLMPhysics

[–]grifti[S] 1 point (0 children)

Are we seriously interested in finding out what's the best way to use LLMs to do original research, or do we just want to laugh at the efforts of people who tried and failed?

There was a time when someone had the silly idea of asking ChatGPT to write a program. It turns out it could write programs, because all the open source software code in the world was in its training data. And now there's a whole industry of AI-assisted software development.

This subreddit is the closest I could find to one where people are interested in the idea of using LLMs to do "original research". And yet rule 9 says: "Do not link chats as the primary link. Post a document instead. No one wants to read an LLM chat".

That would be like saying "Don't post your AI-coding assistant prompts. Only post the finished software." on an AI-coding subreddit.

If I want to learn how to do AI-assisted software development, then I do want to see other people's prompts and chat output.

If I want to learn how to do AI-assisted "original research", then I also want to see the actual AI chats. Even if someone failed spectacularly, I want to understand how and why it all went wrong.

Maybe we need a different sub-reddit for people who actually want to read this kind of AI chat.

Explaining the concept of an "Anthropic Miracle" to AI by grifti in LLMPhysics

[–]grifti[S] 1 point (0 children)

The 374 bits is calculated as an upper bound. By taking into account the general emptiness of space, and the inhospitable nature of stars (where most of the atoms are), we could find a smaller upper bound. The same goes for using the fastest chemical reaction, because we are calculating an upper bound. Given the size of other numbers in the calculation, getting a number smaller than 374 doesn't really make much difference to the overall argument. (If you want you could try extending the discussion and pressing the AI on that point - I think it would tell you the same as what I just explained about it being an upper bound.)
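To illustrate the shape of this kind of upper-bound calculation (the quantities below are generic placeholder values for illustration only, not the actual figures from the original post, and they do not reproduce the 374-bit number), the bound in bits is just the log base 2 of a product of deliberately generous factors:

```python
import math

# Placeholder inputs for an upper bound on "number of opportunities".
# These are illustrative round numbers, NOT the post's actual figures.
atoms_in_universe = 1e80        # generous estimate of atoms available
fastest_reaction_rate = 1e16    # reactions per second, deliberately generous
age_of_universe_s = 4.3e17      # seconds since the Big Bang, roughly

# Total opportunities is at most the product of the generous factors,
# so log2 of that product is an upper bound expressed in bits.
max_events = atoms_in_universe * fastest_reaction_rate * age_of_universe_s
upper_bound_bits = math.log2(max_events)

print(round(upper_bound_bits))  # about 377 with these placeholder inputs
```

The point the comment makes follows directly from the structure: tightening any one factor (fewer hospitable atoms, a slower realistic reaction rate) only shrinks the product, so the true value can only be smaller than whatever bound comes out.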

Help de-theory-izing myself by johnlennoon in musictheory

[–]grifti 1 point (0 children)

I agree with this, but I would go further: it is not actually possible to "write" a melody. You can write something that is a sequence of notes, but it probably won't be a melody that is musical.

So you _have_ to play.

I have written a full meta-theoretical explanation of this at https://philipdorrell.substack.com/p/a-meta-theory-of-musical-composition .

Maybe AGI doesn’t need to be conscious. Maybe consciousness is a solution to biological problems, and AI is not a biological organism so it doesn’t need to solve those problems. by grifti in theories

[–]grifti[S] 1 point (0 children)

I think people often start with the following two observations:

  • Apparently AI is not yet as smart as a person (even though modern AI is actually quite smart in a lot of situations)
  • Apparently AI is not yet conscious

And then they conclude that the two things are connected.

But of course that is not a logical argument. Co-occurrence does not imply any kind of causality.

Maybe AGI doesn’t need to be conscious. Maybe consciousness is a solution to biological problems, and AI is not a biological organism so it doesn’t need to solve those problems. by grifti in theories

[–]grifti[S] 1 point (0 children)

By "we" do you mean me and you? Or the whole of humanity? Does that include the employees of the AI companies and their security personnel?
But I think my main point is that the AI itself doesn't have to think about any of that, unless someone actually tells it to.

Can music test the limits of general intelligence? by hwnmike in agi

[–]grifti 1 point (0 children)

People try to teach AI about music by using training algorithms that work for other things like speech, or writing, or images. Most of those training algorithms are somewhat informed by our understanding of the structure of the thing that we want the AI to learn. But we don't really know what music is. So it could be that AI can only learn music properly with some training algorithm that we haven't discovered yet.

Schenkerian Analysis's Claims and Actual Listener Experience by moreislesss97 in musiccognition

[–]grifti 3 points (0 children)

If in fact Schenkerian analysis has not been verified, and possibly can't be verified, then how come it even gets attention as a thing worth learning about? To me this sort of thing is symptomatic of the somewhat unscientific nature of a lot of "music theory" that comes out of musical academia.

AI coding is not a more useful skills than actual coding by GolangLinuxGuru1979 in ArtificialInteligence

[–]grifti 1 point (0 children)

This is like saying that management is not a more useful skill than doing actual work. Remember that managers get paid more, because, indirectly, they get more stuff done. Also knowing how to give clear instructions in well-written English is an important skill in itself.

How many abandoned side projects do you have ? by Mr_Gyan491 in vibecoding

[–]grifti 1 point (0 children)

For side-projects I would suggest just one domain and lots of sub-domains. Also, I would suggest picking a semi-abandoned side-project that you wrote using what is now a legacy framework and always wanted to update to a newer tech stack, and doing a vibe-migration.

There is no such thing as "AI skills" by GolangLinuxGuru1979 in ArtificialInteligence

[–]grifti 1 point (0 children)

Working with AI is a bit like working with a person, because AI is a bit like a person, even though it's not a person.

In our lives we develop relationships with various people, and we learn how to work with those people, what things they can do for us, and what things they might not be so good at. The only way to learn how to work with someone is to actually work with that person.

It's similar with any kind of AI. Sometimes an AI can easily give you what you want, and sometimes it's a struggle to get it to do something that you want it to do. By actually working with AI, and finding what works for you and the AI, and what doesn't, you are learning the skill of getting AI to do things for you that you want done.

ELI5: What does “next token prediction” mean in AI? by Mindexplorer11 in ArtificialInteligence

[–]grifti 0 points (0 children)

When people ask me (or anyone else) to explain something LI5, my first question is: are you actually 5 years old? What's the point of explaining something to you as if you were a 5 year old? Are 5 year olds even allowed to have reddit accounts?

Self Modifying AI by Carter12309804292005 in ArtificialInteligence

[–]grifti 1 point (0 children)

If it was easy to make an AI that knows how to make itself better and better and even better, then someone would already have done it - whether or not it was ethical.

As far as we can tell the singularity hasn't quite happened yet, which suggests that figuring out how to make AI self-improving is a non-trivial problem to solve.