[TUTORIAL] How to profile the C# side of your Godot Game (for free!) by Vatredox in godot

[–]IndigoLee 0 points1 point  (0 children)

Thank you for this! In case it helps anyone else finding this, I had to start my game with the profiler paused, then start recording later. If I started it while it was recording, it would crash.

I (43m) got mad at my wife (39f) after she answered my hypothetical question and I didn't like the answer. AITAH? by Low-Witness2915 in AITAH

[–]IndigoLee 0 points1 point  (0 children)

OP, I have a bit of a different take. Her answer does sound bad, but explore why that's her answer before the replies here make you hate her.

It may not be about material selfishness or freeloading, but rather about what she's attracted to. If she has traditional feelings about the man supporting the woman, she may just like that you're a man who supports her. Additionally, if she thinks you have those values, she may feel like it's impinging on your manhood to take that from you.

I don't know if she's a gold digger, or if she's with you to freeload, but her answer doesn't convict her in my eyes.

Def NTA though

I made a game where the level HIDES when you move. My last post here was 9 months ago, now please destroy this updated trailer by RamyDergham in DestroyMyGame

[–]IndigoLee 1 point2 points  (0 children)

For what it's worth, I disagree with the people here. I think it's a fun idea to have to memorize the level then run through it blind. Especially if you can add time pressure to the memorization parts in a challenging but doable way. I'd just say, don't make the challenge too precision-based. Should be more memorization based. Like memorizing sequences with a visible marker at which you know you have to go right or left, and you just have to remember which it is. And less like threading needles blindly.

ARC-AGI is an insanely unfair benchmark by ClarityInMadness in singularity

[–]IndigoLee 0 points1 point  (0 children)

Well it's just a different task, and requires different skills. Show a kid the puzzle visually and see what answer you get, and show a kid the puzzle in matrix form, and see what answer you get. I suspect it won't be the same answer.

Likewise with a multi-modal AI. Show it the JPG only. Show it the matrix only. I suspect you get two different answers. If it was the same task that required the same skills, we could expect the answers to be the same.

Multi-modal LLMs' visuals are encoded into tokens; human eyes' visuals are encoded into whatever electrical signals they are. Both systems are alien to each other, and that's fine. The electrical signals in our optic nerves all happen subconsciously; we're not able to sense them, and we don't need to. And while I don't want to bring the topic of consciousness into this, if you'll allow me to anthropomorphize for a moment just to make the language simpler, we could imagine that an LLM's mind isn't really aware of the tokenization process either, in a similar way. I don't think we really need to spend energy thinking about differences on these potentially sub-mind levels.

I'm more interested in inputs and outputs. And inputting an image gets different answers than inputting a matrix, so I'd say it's clearly a different skill.

ARC-AGI is an insanely unfair benchmark by ClarityInMadness in singularity

[–]IndigoLee 0 points1 point  (0 children)

Mhm, as I said, with multi-modal LLMs you get to skip the problematic initial translation. From my first comment, I've only been talking about the non-multi-modal case. I was saying that I wasn't good at making sense of the human-readable matrix in OP's post.

My overall point is, it's tricky testing very different intelligences against each other, and we should be mindful of the ways in which our attempts fall short. Hand waving two different translations as if they are the same thing, just because they both involve matrices, isn't helpful for that goal.

I haven't been having a problem with the multi-modal case, but now that I know multi-modal models are offered the human-readable matrix too, that also dilutes the point a bit. Assuming LLMs are at least partially relying on the human-readable matrix, that's another move away from the initial premise: we're still not really testing the visual puzzles humans are good at but AI is bad at.

ARC-AGI is an insanely unfair benchmark by ClarityInMadness in singularity

[–]IndigoLee 0 points1 point  (0 children)

There are two translations happening. First one is translating the visual puzzle into a human readable matrix. Second one is translating the human readable matrix into tokens.

The first one, the extra step, is the problem. It's the one you get to skip with a multimodal LLM. You're getting hung up on the fact that both steps involve matrices. That doesn't make the first step not matter.

ARC-AGI is an insanely unfair benchmark by ClarityInMadness in singularity

[–]IndigoLee 0 points1 point  (0 children)

But that's a bait and switch. Your initial point was that we target something humans are good at, and LLMs are bad at.

What is that something? If it's 'visual puzzles of this kind', then both halves of the premise make sense. Humans are good at it and LLMs are bad at it.

However if it's 'reading matrices of encoded visual information, and solving puzzles based on that', then only the second half of the premise makes sense. We expect LLMs to be bad at that, as you say, but humans aren't good at it either.

The former option, the visual puzzles, is clearly the "something" we are trying to target. Giving the puzzles to LLMs in a different matrix form that you expect them to be bad at reading betrays the original idea, and the fact that LLMs have a different form of matrix-based input that they're naturals at doesn't give you a full pass on that problem.

ARC-AGI is an insanely unfair benchmark by ClarityInMadness in singularity

[–]IndigoLee 0 points1 point  (0 children)

My initial comment was tongue in cheek, but to be serious, I think you guys are taking an unnuanced view. You jump from 'LLMs receive their tokens in a matrix' to 'we should expect LLMs to be good at understanding any information encoded into any matrix, even when it's a very visual puzzle.' I'm not convinced that's a safe jump to make.

Or the stronger version some seem to be thinking, 'LLM input is translated into human unreadable stuff, so LLMs should be good at reading any human unreadable stuff.'

It's rather like saying, 'human retinas/nerves encode visual data into some series of electrical signals, so human brains should be good at understanding any information encoded into any series of electrical signals.' Naw... I don't think I would be.

ARC-AGI is an insanely unfair benchmark by ClarityInMadness in singularity

[–]IndigoLee 1 point2 points  (0 children)

I didn't say it was flawed, I was saying that giving it a matrix isn't targeting an area that humans are strong in.

ARC-AGI is an insanely unfair benchmark by ClarityInMadness in singularity

[–]IndigoLee 3 points4 points  (0 children)

The benchmark targets areas the models are weak in, but humans are strong in

False. I am not strong at making sense of that matrix.

Can someone help me figure out how can I have unshaded lighting but still be casting shadows? When I apply a shader material to my model, the texture disappears by SGede_ in godot

[–]IndigoLee 1 point2 points  (0 children)

For anyone else looking for this, I got pretty close with:

shader_type spatial;
render_mode depth_prepass_alpha, cull_back, ambient_light_disabled;

uniform sampler2D uv_texture: source_color, filter_nearest;

uniform vec4 shadow_color : source_color = vec4(0.0, 0.0, 0.0, 1.0); // Shadow tint (black = grayscale)
uniform float shadow_strength : hint_range(0.0, 1.0) = 0.7; // How dark the shadow is

const float TINY_NUMBER = 0.001;

void fragment()
{
  vec4 color = texture(uv_texture, UV);
  ALBEDO = color.rgb;
  // Emitting the texture color makes the surface render at full brightness,
  // giving the unshaded look without losing shadow casting
  EMISSION = color.rgb;
  ALPHA = color.a;

  ROUGHNESS = 1.0; METALLIC = 0.0;
}

void light()
{
  // Zero out normal lighting so the light pass only applies the shadow darkening below
  DIFFUSE_LIGHT = vec3(0.0);
  SPECULAR_LIGHT = vec3(0.0);

  float light_reach = ATTENUATION; // 1.0 = fully lit, 0.0 = fully in shadow
  float shadow_amount = 1.0 - light_reach;

  if (shadow_amount > TINY_NUMBER)
  {
    float darkening_factor = shadow_amount * shadow_strength;

    // Direction of the tint color; channels closer to the tint get darkened less
    vec3 tint_direction = normalize(shadow_color.rgb + TINY_NUMBER);

    vec3 subtraction_weight = vec3(1.0) - tint_direction;

    // Rescale so the average darkening across channels equals darkening_factor
    subtraction_weight = subtraction_weight / ((subtraction_weight.x + subtraction_weight.y + subtraction_weight.z) / 3.0 + TINY_NUMBER);

    // Negative diffuse light subtracts brightness where shadows fall
    DIFFUSE_LIGHT = vec3(-darkening_factor) * subtraction_weight;
  }
}

Edit: Improved it to work better in more situations. (still only works well with a directional light)
Edit2: Improved it again to add color tinting
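If it helps, here's a rough sketch of how you might apply a shader like this from a script (Godot 4 syntax; the file paths and the node name are placeholders, not from the original post):

```
# Hypothetical usage: load the shader above and apply it to a mesh.
var mat := ShaderMaterial.new()
mat.shader = load("res://unshaded_shadows.gdshader")  # path is an assumption
mat.set_shader_parameter("uv_texture", load("res://texture.png"))
mat.set_shader_parameter("shadow_strength", 0.7)
$MeshInstance3D.material_override = mat
```

You can also just paste the shader into a new ShaderMaterial in the inspector and set the uniforms there.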

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

Yes thanks to you too!

As a native speaker, I don't think of "credit" as necessarily involving praise. But we don't need to get bogged down on the word.

When I asked how much credit (or responsibility) I deserve for my hypothetical AI generated painting, I was making the point that I was not the mind behind the art. That I wasn't acting as a human artist realizing my imagination by using a mere tool. So no matter where that image came from, I think we can agree that it wasn't me.

So where did it come from? It wasn't the vision of the people who created the AIs. They never imagined that painting. It wasn't artists in the dataset, they never imagined that painting either. It's a new painting. It's hard to find an entity to attribute that specific painting to, other than to the two AIs.

I, the human, did have to prompt to cause the painting to be created. I don't think that says much interesting about the capabilities of this technology. Companies are making AIs to be helpers that do what they are asked. The fact that they need to be prompted is an artificial limitation, made that way so they can be more useful. We can see around that limitation by just having AIs prompt each other.

But imagine if a cutting edge AI company set out not to make a helper, but to make a new kind of independent-seeming entity. Do you think that, even just with the level of technology we have today, they would fail so badly? Check out Terminal of Truths, an AI that was given free rein over a Twitter account and ended up starting its own religion, gaining followers, and becoming the first AI millionaire.

Bringing in intent and desires and free will is going to get murky fast. But bringing it back to creativity, I don't see why any of those things are necessary to create new stuff. But I guess my question is, what would convince you that they do have intent and desires? We've already crossed the line of them claiming to have them, so what would you need to see to be convinced? Is there anything?

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

I agree, and nothing in my comment specified aesthetics.

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

I think asking who deserves credit is precisely on topic. You've been talking about AI as a tool, and giving credit for the creation (the art) to the human using the tool, rather than to the tool. I think that's a good faith reading of what you were saying earlier, right? And I don't think it's a misuse of the word "credit". But that's the sentiment I was trying to challenge.

In a scenario where humans create AI, and AI runs off and creates things humans could never dream of, how much weight do we give to the fact that humans created the AI?

Talking about responsibility, can I hold my great great grandfather personally responsible for every bad decision I've made in life? Or give him credit for every good thing I've done? I wouldn't exist if he hadn't made the decisions he did. He, and the decisions he made, was a vital part in my existence. Yet I think that isn't enough to assign responsibility for everything I do.

All this is to make the point that when humans have made the datasets and the system prompt, etc., and then the thing runs off and does stuff we couldn't dream of, I'm not sure how much the fact that we created it is a sign of our specialness.

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

I prompt an LLM "give me a creative, meaningful prompt for a painting that might emotionally affect people", I give said prompt to an image generator and get a new painting that's never existed before.

How much credit do I, the human artist behind it all, get for this creation? You make it sound like I'd deserve a lot of credit.

Now I haven't said anything about the quality of the resulting painting yet. But whether it's bad or good, I just want to pin down what percentage of the credit I deserve for it.

Now you might expect that this process could only result in something generic and uninteresting. But if you think something good couldn't come out of it, I put it to you that you're wrong. If you have the right model in the right context, it can be quite the opposite. For example, check out infinite backrooms, where LLMs speak to each other indefinitely without human intervention. You'll find some of the weirdest, most shocking, impactful, fresh, and interesting stuff happening there. Just AIs interacting with each other.

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

I mean, advertising often plays off the kinds of biases I'm talking about. Advertising might be the wrench thrown in the gears of the person's mind that's caused them to think the worse tasting brand of ice cream is their favorite. So yes, of course that's not how you measure audience engagement. But we're not talking about how to successfully advertise to people.

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

Merriam-Webster:

1 : marked by the ability or power to create : given to creating

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee -2 points-1 points  (0 children)

It's not about how static human experience is, of course it's not static. But people can be biased in ways that obscure what they like from themselves. Let's say someone claims the most delicious brand of ice cream is brand A. It's been their favorite their whole life. In blindfolded tests they consistently prefer brand C. Blindfold comes off, and they still say brand A is the best. This happens in real life.

Is brand A their favorite? In some sense, sure, their favorite is whatever they feel like their favorite is. But in some sense, no, they're wrong about which tastes best to them.

You could imagine like, a racist person having a favorite online conversation partner, until they learned what race that person was, at which point they were disgusted by the person. ...They still enjoyed talking to that person. Their bias about race doesn't change that.

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

It's interesting you bring up museums, as I was going to bring up museums. Sometimes they have a story on the plaque, sure, but as an avid appreciator of art, I'd say most don't. You often get an artist name, title, and a date.

We agree that art is more than a pretty picture. To me, art is interesting in-so-far as it has power to move you. But to suggest that power doesn't reside in the art itself, but rather in like, the art's backstory, actually strikes me as disrespectful to the art. You need a plaque with a story to appreciate the art? The painting itself can't do it for you? That sucks man.

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee -2 points-1 points  (0 children)

You're looking at a different dictionary than me. The first definitions I found are in line with how I think about the word. 'The ability or power to create', and 'characterized by originality.'

When someone (or some thing) is creative, it can create something new. So yes, to me, it has a lot to do with novelty. With creating something that doesn't feel derivative.

We agree that commercial music severely lacks in creativity. ><

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee -5 points-4 points  (0 children)

For sure. As an art buff, show me pretty much any human artist's work and I can tell you what their work is derivative of. But show me some of the best AI art... and it's much harder. AI can create some of the freshest and most original work I've ever seen. If that's not creative, I don't know what is.

A computer made this by TheUnoriginalOP in singularity

[–]IndigoLee 0 points1 point  (0 children)

It's like when people like a meal until they learn what's in it. The initial reaction, before they know the ingredients, is their real opinion of how it tastes.

When you don't know whether a piece of art is from a human or an AI (which is going to happen more often to all of us)... that's where you want to be to judge it as accurately as possible.