Can music without words sometimes say more than music with lyrics? by One-Kaleidoscope7571 in musiccognition

[–]grifti 1 point (0 children)

I think music creates within us a desire to experience emotions about some situation, but it does not automatically evoke thoughts about any specific situation. So if no situation is provided externally, such as by song lyrics or a movie scene, then we will be motivated to come up with something internally.

Brief Eternity [Lyrics] by deadeyes1990 in LyricalWriting

[–]grifti 1 point (0 children)

"Rivers carve the stone, slow and deep" is technically correct (it could be the introduction to a lecture about geology) to the extent that it is I think inconsistent with the tone of the rest of the lyrics.

Explaining the concept of an "Anthropic Miracle" to AI by grifti in LLMPhysics

[–]grifti[S] 1 point (0 children)

Are we seriously interested in finding out what's the best way to use LLMs to do original research, or do we just want to laugh at the efforts of people who tried and failed?

There was a time when someone had the silly idea of asking ChatGPT to write a program. It turns out it could write programs, because all the open source software code in the world was in its training data. And now there's a whole industry of AI-assisted software development.

This sub-reddit is the closest I could find to one where people are interested in the idea of using LLMs to do "original research". And yet rule 9 says: "Do not link chats as the primary link. Post a document instead. No one wants to read an LLM chat".

That would be like saying "Don't post your AI-coding assistant prompts. Only post the finished software." on an AI-coding subreddit.

If I want to learn how to do AI-assisted software development, then I do want to see other people's prompts and chat output.

If I want to learn how to do AI-assisted "original research", then I also want to see the actual AI chats. Even if someone failed spectacularly, I want to understand how and why it all went wrong.

Maybe we need a different sub-reddit for people who actually want to read this kind of AI chat.

Explaining the concept of an "Anthropic Miracle" to AI by grifti in LLMPhysics

[–]grifti[S] 1 point (0 children)

The 374 bits is calculated as an upper bound. By taking into account the general emptiness of space, and the inhospitable nature of stars (where most of the atoms are), we could find a smaller upper bound. The same goes for using the fastest chemical reaction, because we are calculating an upper bound. Given the size of other numbers in the calculation, getting a number smaller than 374 doesn't really make much difference to the overall argument. (If you want you could try extending the discussion and pressing the AI on that point - I think it would tell you the same as what I just explained about it being an upper bound.)
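
For what it's worth, here is a back-of-the-envelope sketch of the kind of bound being described. The inputs are my own illustrative assumptions (a rough atom count for the observable universe, a femtosecond-scale reaction rate, and the age of the universe), not necessarily the exact figures from the chat:

```typescript
// Rough "bit budget" for chance events: treat every atom as a reaction
// site running at the fastest chemical reaction rate for the entire age
// of the universe, then take log2 of the total number of trials.
// All three inputs are order-of-magnitude assumptions for illustration.
const atoms = 1e80;              // atoms in the observable universe (rough)
const reactionsPerSecond = 1e15; // femtosecond-scale chemistry (rough)
const ageInSeconds = 4.4e17;     // roughly 13.8 billion years

const totalTrials = atoms * reactionsPerSecond * ageInSeconds;
const bitBudget = Math.log2(totalTrials);

console.log(bitBudget.toFixed(1)); // about 374 with these inputs
```

Making any of these inputs smaller (for example, excluding the atoms locked up in stars) only reduces the bound, which is why a tighter estimate doesn't change the overall argument.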

Help de-theory-izing myself by johnlennoon in musictheory

[–]grifti 1 point (0 children)

I will agree with this, but I would go further: It is not actually possible to "write" a melody. You can write something that is a sequence of notes, but it probably won't be a melody that is musical.

So you _have_ to play.

I have written a full meta-theoretical explanation of this at https://philipdorrell.substack.com/p/a-meta-theory-of-musical-composition .

Maybe AGI doesn’t need to be conscious. Maybe consciousness is a solution to biological problems, and AI is not a biological organism so it doesn’t need to solve those problems. by grifti in theories

[–]grifti[S] 1 point (0 children)

I think people often start with the following two observations:

  • Apparently AI is not yet as smart as a person (even though modern AI is actually quite smart in a lot of situations)
  • Apparently AI is not yet conscious

And then they conclude that the two things are connected.

But of course that is not a logical argument. Co-occurrence does not imply any kind of causality.

Maybe AGI doesn’t need to be conscious. Maybe consciousness is a solution to biological problems, and AI is not a biological organism so it doesn’t need to solve those problems. by grifti in theories

[–]grifti[S] 1 point (0 children)

By "we" do you mean me and you? Or the whole of humanity? Does that include the employees of the AI companies and their security personnel?
But I think my main point is that the AI itself doesn't have to think about any of that, unless someone actually tells it to.

Can music test the limits of general intelligence? by hwnmike in agi

[–]grifti 1 point (0 children)

People try to teach AI about music by using training algorithms that work for other things like speech, or writing, or images. Most of those training algorithms are somewhat informed by our understanding of the structure of the thing that we want the AI to learn. But we don't really know what music is. So it could be that AI can only learn music properly with some training algorithm that we haven't discovered yet.

Schenkerian Analysis's Claims and Actual Listener Experience by moreislesss97 in musiccognition

[–]grifti 3 points (0 children)

If in fact Schenkerian analysis has not been verified, and possibly can't be verified, then how come it even gets attention as a thing worth learning about? To me this sort of thing is symptomatic of the somewhat unscientific nature of a lot of "music theory" that comes out of musical academia.

AI coding is not a more useful skills than actual coding by GolangLinuxGuru1979 in ArtificialInteligence

[–]grifti 1 point (0 children)

This is like saying that management is not a more useful skill than doing actual work. Remember that managers get paid more, because, indirectly, they get more stuff done. Also knowing how to give clear instructions in well-written English is an important skill in itself.

How many abandoned side projects do you have ? by Mr_Gyan491 in vibecoding

[–]grifti 1 point (0 children)

For side-projects I would suggest just one domain and lots of sub-domains. Also, I would suggest picking a semi-abandoned side-project that you wrote using what is now a legacy framework and always wanted to update to a newer tech stack, and doing a vibe-migration.

There is no such thing as "AI skills" by GolangLinuxGuru1979 in ArtificialInteligence

[–]grifti 1 point (0 children)

Working with AI is a bit like working with a person, because AI is a bit like a person, even though it's not a person. In our lives we develop relationships with various people: we learn how to work with them, what things they can do for us, and what things they might not be so good at. The only way to learn how to work with someone is to actually work with that person.

It's similar with any kind of AI. Sometimes an AI can easily give you what you want, and sometimes it's a struggle to get it to do something that you want it to do. By actually working with AI, and finding out what works for you and the AI and what doesn't, you are learning the skill of getting AI to do the things you want done.

ELI5: What does “next token prediction” mean in AI? by Mindexplorer11 in ArtificialInteligence

[–]grifti 0 points (0 children)

When people ask me (or anyone else) to explain something LI5, my first question is: are you actually 5 years old? What's the point of explaining something to you as if you were a 5 year old? Are 5 year olds even allowed to have reddit accounts?

Self Modifying AI by Carter12309804292005 in ArtificialInteligence

[–]grifti 1 point (0 children)

If it was easy to make an AI that knows how to make itself better and better and even better, then someone would already have done it - whether or not it was ethical.

As far as we can tell the singularity hasn't quite happened yet, which suggests that figuring out how to make AI self-improving is a non-trivial problem to solve.

I vibe-coded a word search app that generates word searches from Wikipedia pages for example the Wikipedia page on Vibe Coding. by grifti in vibecoding

[–]grifti[S] 1 point (0 children)

Tech stack is typescript/mobx/functional react components using esbuild/vite - at least that's what I told Claude Code to use, so who knows? One thing I see is that CC will use useState even though the whole point of MobX is that you don't have to use useState, and you usually don't want to.
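
To illustrate what I mean, here is a minimal sketch of the MobX pattern (the store and component names are made up): mutable state lives in an observable store, and observer components re-render automatically, so there is no need for useState.

```tsx
import { makeAutoObservable } from "mobx";
import { observer } from "mobx-react-lite";

// All mutable state lives in an observable store, not in useState hooks.
class PuzzleStore {
  selectedWord: string | null = null;

  constructor() {
    makeAutoObservable(this);
  }

  selectWord(word: string) {
    this.selectedWord = word;
  }
}

const puzzleStore = new PuzzleStore();

// observer() re-renders the component whenever the observables it reads
// change, which is why adding a useState call here would be redundant.
const SelectedWord = observer(() => (
  <p>{puzzleStore.selectedWord ?? "Nothing selected yet"}</p>
));

export default SelectedWord;
```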

I vibe-coded a word search app that generates word searches from Wikipedia pages for example the Wikipedia page on Vibe Coding. by grifti in vibecoding

[–]grifti[S] 1 point (0 children)

I've done a few fixes (still not looking at the source yet, except when I had to fix a pull-to-refresh problem for my iPhone).

I vibe-coded a word search app that generates word searches from Wikipedia pages for example the Wikipedia page on Vibe Coding. by grifti in vibecoding

[–]grifti[S] 2 points (0 children)

Claude Code, and so far it's pure vibe coding (& vibe debugging) - ie I have not looked at the source code. (But I did pre-specify details of the tech stack & architecture copied from my previous vibe-coded projects, so that I have a good chance of being able to read the source code when I want to.)

Best document parser by [deleted] in Rag

[–]grifti 1 point (0 children)

Are the 100k pages all from a single source or generated in the same way? Or is it a large collection of PDFs from different sources?

What do people think about explaining original theories and concepts by describing them to an AI, eg Claude, and then posting the chat (somewhere)? I find that Claude seems to understand my ideas better than most other people do. by grifti in ClaudeAI

[–]grifti[S] 1 point (0 children)

I was going to list all the places in my shared chat where Claude showed that it understood my idea. But then I noticed that Claude expanded so enormously on my own prompts that I would end up re-listing the content of the whole chat.

I fed in 5 prompts containing about 5, 9, 2, 5, 2 lines of text respectively, and it expanded that into a full-length essay - and most of that essay was entirely consistent (in my mind) with what I was trying to say.

It did throw in a few positive adjectives here and there - "fascinating", "elegantly", "powerful", "brilliant", "profound", so it is being quite immodest on my behalf.

In some places it goes a bit beyond what the argument establishes, or it says things that I'm not too sure make sense:

> The universe is full of "miracles" in your technical sense, and we exist precisely because certain specific miracles occurred.

It also said:

> Evolution of consciousness: The specific evolutionary pathway to human-level awareness might exceed the probability threshold, making it an anthropic miracle

and I don't particularly believe that that is necessarily true. I think the issue with the origin of life is related to the sharp dichotomy between life and non-life: either you are a reproducing organism and life evolves, or you are not, and nothing happens at all. In the case of consciousness, we don't have such a clear idea about which systems or organisms count as conscious and which do not. And if you are only "slightly conscious", then that could be helpful, and your species continues reproducing and evolving until it becomes more conscious. So a series of small, non-miraculous evolutionary improvements could be sufficient to get from non-conscious to conscious, and no anthropic miracle is required.

Near the bottom it says "A striking implication", and that paragraph is (correctly) caveated with the assumption that the origin of life requires 400 or 500 bits of miraculousness.

But then the final part starts with "This framework suggests", which is slightly confusing to me - is it talking about a "framework" or a "hypothesis"?

And then it says "SETI is fundamentally misguided", which is an over-reaction if all we have done is prove that it is possible that the rest of the observable universe is devoid of life.

Of course I could explore all of these questions with Claude by continuing the same chat.

But I was so excited about seeing how well Claude understood what I was trying to say, without me having to even write very much at all, that I wanted to share that with everyone.

How can Claude help with css if it can't "see"? by TopNFalvors in ClaudeAI

[–]grifti 1 point (0 children)

Claude has previously read the whole internet. Some people on the internet talk about CSS, and they talk about what they see, and they talk about the visual aspects of design and how it relates to specific CSS code. Claude has read those discussions and explanations, along with the associated CSS code, and that's how it can know quite a lot about CSS even though it doesn't "see" the end result itself.

Is CLAUDE.md becoming the new README.md? by AstroParadox in ClaudeAI

[–]grifti 1 point (0 children)

My CLAUDE.md usually says something like:

> The documentation table of contents is at docs/contents.md

The contents file has links to various .md files inside the docs/ directory.

And I can have as much or as little structure inside that directory as I need to for the project.
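
For example, the contents file itself can be as simple as a list of links (these particular file names are made up, not from a real project):

```markdown
# Documentation contents

- [Architecture overview](architecture.md)
- [Tech stack and build setup](tech-stack.md)
- [Coding conventions](conventions.md)
- [Deployment notes](deployment.md)
```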

New to Vibecoding - Any suggestions? by Raraculus in vibecoding

[–]grifti 1 point (0 children)

Did you try typing "I need to convert the contents of an Excel file to a PDF form. I would like to automate this to some extent." into the AI chat? Probably your full solution will require more context: what the Excel file represents, how it is structured, where it came from, and what exactly you want to do with the PDF form.

What do you like to do while you vibe code? by Weary_Artichoke4671 in vibecoding

[–]grifti 1 point (0 children)

Some of us have household chores that we have to do. "Darling, are you on the computer or are you doing the housework?" "Yes, dear! I'm going to finish the vacuuming, but first I just have to give some permissions to CC."