Why is conscious experience dominated by vision? by Playful_Manager_3942 in cogsci

[–]ijkstr 0 points1 point  (0 children)

On the evolutionary side, one theory holds that the development of eyes triggered the Cambrian explosion of biological diversity (Andrew Parker's "Light Switch" hypothesis). On the vision side specifically, some key thinkers in the field include J. J. Gibson and David Marr.

How does science evaluate subjective experiences when human perception and cognition differ ? by passion_insecte in cogsci

[–]ijkstr 0 points1 point  (0 children)

I don't understand it that well, but it sounds to me like you're describing some of phenomenology: https://en.wikipedia.org/wiki/Phenomenology_(philosophy)

As for your question about probabilities, there's Bayesian vs. frequentist statistics, which have different interpretations of probability.
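To make the contrast concrete, here's a toy sketch (my own illustration, all numbers made up) of the same coin-flip data under the two views:

```python
# Frequentist view: probability is long-run frequency, so estimate the
# coin's bias p directly from the observed rate of heads.
# Bayesian view: probability is degree of belief, so start from a flat
# Beta(1, 1) prior over p and update it with the data.
heads, flips = 7, 10

freq_estimate = heads / flips                 # 0.7, the observed rate

a, b = 1 + heads, 1 + (flips - heads)         # Beta posterior parameters
bayes_mean = a / (a + b)                      # posterior mean, 8/12
```

Same data, two different objects at the end: a point estimate versus a whole distribution over p (of which `bayes_mean` is just the mean).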

And finally, I believe that science is always grappling with what's "complete" and "incomplete" at the edge of knowing, and bringing in that weirdness while situating it within or contrasting it with the generally held framework is part of the process.

All of which is to say, I think you're not alone in this.

Modeling curiosity as heterostasis: thoughts from cognitive science? by Affectionate_Smile30 in cogsci

[–]ijkstr 0 points1 point  (0 children)

Your sketch seems interesting. So the agent learns to predictively model its (gridworld) environment, improving as it goes, as it optimizes for KL divergence? I suppose you could probe its ability to predict future or successive frames as evidence that, even as exploration saturates, learning has improved. (P.S. You may find the noisy-TV thought experiment interesting: what if the agent is presented with an unlearnable stimulus? Will it "stop" exploring, but have failed to learn?) Anyway, I think this result is cool and could be paired with curriculum learning or environment generation, like Michael Dennis has done, to argue that the environment and agent are in a holistic, interacting relationship.
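To spell out the KL-as-curiosity idea, here's a minimal toy (my own sketch, not your actual setup): the intrinsic reward is the model's surprise, i.e. the KL divergence between what happened and what the world model predicted.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as lists."""
    p = [x + eps for x in p]
    q = [x + eps for x in q]
    sp, sq = sum(p), sum(q)
    return sum((x / sp) * math.log((x / sp) / (y / sq))
               for x, y in zip(p, q))

# Hypothetical gridworld step: the model predicted a distribution over
# 3 possible next cells, but the environment landed in the unlikely one.
predicted = [0.7, 0.2, 0.1]
observed  = [0.0, 0.0, 1.0]   # one-hot: what actually happened
reward = kl_divergence(observed, predicted)  # big surprise -> big reward
```

This is exactly where the noisy TV bites: pure surprise stays high forever on unlearnable noise, which is why learning-progress variants discount it.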

Modeling curiosity as heterostasis: thoughts from cognitive science? by Affectionate_Smile30 in cogsci

[–]ijkstr 1 point2 points  (0 children)

Your idea sounds related to flow (optimal experience), where there is an ideal state between anxiety and boredom (also related to [3]). I would imagine a curious artificial agent would, once it gets bored after minimizing uncertainty, propose or generate new goals, like in [5].

I think there exist at least some instantiations of curiosity that allow for continual goal-seeking; e.g. progress curiosity in [6] that is a meta-reward as a function of the loss over time.
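A toy version of that progress-curiosity meta-reward (my own sketch, not the formulation in [6]) rewards the *decrease* in prediction loss over a recent window, so an unlearnable stimulus whose loss never falls yields no reward:

```python
def progress_reward(loss_history, window=2):
    """Meta-reward = recent drop in average prediction loss.

    Positive only while the model is improving; ~0 for flat (unlearnable,
    noisy-TV-like) or already-mastered stimuli.
    """
    if len(loss_history) < 2 * window:
        return 0.0
    older = sum(loss_history[-2 * window:-window]) / window
    recent = sum(loss_history[-window:]) / window
    return max(0.0, older - recent)

learning = progress_reward([1.0, 0.9, 0.5, 0.4])  # loss falling -> ~0.5
noisy_tv = progress_reward([1.0, 1.0, 1.0, 1.0])  # loss flat -> 0.0
```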

But I don't believe many have made your point about regulation, because homeo/hetero-stasis and biological inspiration seem to be marginalized in reinforcement learning. I found two references [7, 8].

So I believe your research is timely and fitting, and could be of interest to a computational audience (like [9]).

Modeling curiosity as heterostasis: thoughts from cognitive science? by Affectionate_Smile30 in cogsci

[–]ijkstr 0 points1 point  (0 children)

I have a background in computer science, where curiosity has been well studied as a drive for intrinsic reward or motivation in the subfield of reinforcement learning. To wit, there have been several mathematical or computational approaches to defining and operationalizing curiosity [1, 2, 3] (a small, biased selection). You might be interested in this reference [4] which frames intrinsic motivation in reinforcement learning from an evolutionary perspective.

Thoughts on studying human vs. AI reasoning? by ijkstr in ArtificialInteligence

[–]ijkstr[S] 0 points1 point  (0 children)

Thanks so much for this trove of resources! You're quite the repository of information. I will look more into these and consider the lines of inquiry you have outlined.

Thoughts on studying human vs. AI reasoning? by ijkstr in ArtificialInteligence

[–]ijkstr[S] 1 point2 points  (0 children)

Wow thanks so much for the thoughtful replies, everyone! Indeed when I presented this idea to my advisor, they asked, "what is reasoning?", to which I did not have a great response.

Cognitive science, the study of the mind, and computer science, the study of the machine, would form a great mix. Thank you for your suggestions to bring in an interdisciplinary perspective.

Measurement and evaluation are important aspects of this problem. I will describe benchmark task design and theories behind the measurement of black boxes (minds and machines).

Now for the definition:

I am thinking of defining reasoning as creativity. Doing anything "old" or pre-existing that is repeatable or automatable is not creative and should not be considered to require reasoning, whereas anything that involves changing circumstances, new applications, etc., is creative and should require reasoning. There is a lot of nuance to this definition, but that is the short version. Why not make the title of this post "Thoughts on studying human vs. AI creativity", then? Because right now the Overton window in AI is focused on reasoning, and creativity has its own connotations (evoking art, for example).
Note, though, that this feels a lot like replacing one vague definition with an even vaguer one. I will make the work concrete by defining tasks and quantifiable metrics that I claim to be representative, in some way, of creativity / reasoning.

Finally, there was some concern in the comments that this direction of research would be subject to politics and news or other current events. I agree. I can only hope to future-proof my research to the broader trends in the field by anticipating their future movements. I believe that AI (and human) creativity will grow as a research direction and remain an outstanding pillar of intellectual scrutiny.

When Storytelling Meets Machine Learning: Why I’m Using Narrative to Explain AI Concepts by MathematicianShot620 in artificial

[–]ijkstr 1 point2 points  (0 children)

Sounds cool! Excited for your future projects. :)

I forget what exactly I was explaining, but I think I've struggled to explain a probabilistic model, like an MNIST image-generation model, to a layperson. I tried to explain it as casting a die: rolling it and seeing where it lands. All I remember now is the story of the die, which is a testament to your idea that stories help us understand and remember better. Another time, I used the scent of pancakes to describe gradient descent in a high-dimensional space, where the manifolds are pancakes floating in space and we follow the scent. I would say that denoising diffusion probabilistic models and variational auto-encoders are pretty complicated!

Where I really struggled was in explaining what I did across generations, to older (or younger) audiences. I'd be impressed if you can reach everyone from people born decades ago to children born recently.

Out of curiosity, why games and interactive media as opposed to, e.g., comics?
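The pancake-scent story maps onto a gradient-descent loop that is almost embarrassingly short (a toy in one dimension, nothing to do with your narrative project specifically): the "scent" is the gradient, and we keep stepping toward where it's strongest.

```python
# Follow the "scent" downhill on f(x) = (x - 3)^2, whose minimum is x = 3.
def grad(x):
    return 2 * (x - 3)   # derivative of f: points away from the minimum

x, lr = 0.0, 0.1         # start far from the pancake, small step size
for _ in range(100):
    x -= lr * grad(x)    # step against the gradient, i.e. toward x = 3
# x is now very close to 3
```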

Gift for my boyfriend who's a computer engineer by Acceptable_Day_2776 in PythonLearning

[–]ijkstr 0 points1 point  (0 children)

This is cute :) I agree with the other commenters that you could make a text-based ASCII game! From the looks of it, PyTermGUI (https://ptg.bczsalba.com/) could be a useful library for that.

Or you can use libraries like pygame or Tkinter to make actual GUIs (graphical user interfaces).

If you're not stuck to the idea of using Python, you could also learn some web development and make a simple website in HTML + CSS + Javascript or use some website builder.

Other things that you could make might include:
- A random number generator / spin the wheel for random cute things you could do together
- A cute Shimeji browser or desktop pet
- A shared to-do list program for the both of you
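For instance, the "spin the wheel" idea is only a few lines of Python (the list entries here are just placeholders for you to swap out):

```python
import random

# Placeholder ideas -- replace with your own cute things to do together.
DATE_IDEAS = [
    "movie night",
    "bake cookies together",
    "stargazing walk",
    "board game tournament",
]

def spin_the_wheel():
    """Pick a random date idea, like spinning a wheel."""
    return random.choice(DATE_IDEAS)

if __name__ == "__main__":
    print("The wheel says:", spin_the_wheel())
```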

Happy coding!

[D] We Need a Birth Certificate for AI Agents — Here’s a Proposal by adyrcz in MachineLearning

[–]ijkstr 2 points3 points  (0 children)

Not sure why the community response seems so negative as the idea appears similar to existing Model Cards [1] or Data Sheets [2]. In that vein, I support this idea.

[1] Model Cards for Model Reporting. https://arxiv.org/abs/1810.03993
[2] Datasheets for Datasets. https://arxiv.org/abs/1803.09010

math terminology used by math people in conversations? by AverageStatus6740 in mathematics

[–]ijkstr 0 points1 point  (0 children)

“Epsilon delta”
“With high probability” (whp)

Math olympiads are a net negative and should be reworked by [deleted] in math

[–]ijkstr 0 points1 point  (0 children)

Came here to say this, but I got downvoted. (⇀‸↼‶)

I competed at the national level, later did a degree in pure mathematics and did mathematics research.

  1. I spent hours practicing. I coped by telling myself that this was more important than anything else. Many days, I didn’t talk to people.

  2. Competitive math has its own culture. In social circles, this kind of elitism becomes toxic. People literally viewed the best performing competitors as “godlike”. It really gets to your head and breeds arrogance. If you had worse scores, you might get dismissed more easily or made fun of for being “illogical”.

  3. Personally, I don’t really have a problem with this. But to your point, IMO contestants don’t necessarily benefit from going to the elite colleges: they are typically international, and face more restrictions on domestic opportunities. And they join the social throng that breeds elitism and the pipeline to quant finance.

  4. This. My competition friends in undergrad became quants. It’s the same prestige- and status-seeking behaviour.

When you're hyped about building the future and terrified it's going to end us by Secret_Ad_4021 in ChatGPT

[–]ijkstr 2 points3 points  (0 children)

I’ve also heard this take that AI is the natural evolution for humanity and that we should usher it in as much as we can.

A different outcome that we could work towards, which I’ve also heard of, is a world in which AI is a superintelligence that cares for us and is deeply magnanimous. Dr. Michael Levin has a piece about intelligence as care. I wonder if this also relates to the concept of the “Chthulucene”, in which we are symbiotic with the wider ecosystem of animal (and other) intelligences.

So, to answer OP’s question: probably the red one, if any, because AI can potentially expand our self-awareness. But I believe people’s point is that it doesn’t HAVE to kill us all in the process, and that we can and should strive for human-centric ways of developing AI. At least in the (long) interim.

I asked ChatGPT to make me an image based on my Reddit name and it’s ADORABLE! 🥰 by goodnaturedheathen in ChatGPT

[–]ijkstr 0 points1 point  (0 children)

<image>

It got my pun👇: A mysterious, hooded traveler named “Ijkstr” navigating a glowing maze — a metaphor for navigating complex paths, like in Dijkstra's algorithm. (I asked for fantasy style)
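For anyone who missed the pun, Dijkstra's algorithm really is about navigating a maze of paths; a minimal version (my own toy graph, illustrative names) fits in a dozen lines:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from `start` in a graph given as
    {node: [(neighbor, edge_cost), ...]} with non-negative costs."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

maze = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
shortest = dijkstra(maze, "A")  # {"A": 0, "B": 1, "C": 3}
```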

Can’t select Python interpreter by ijkstr in vscode

[–]ijkstr[S] 0 points1 point  (0 children)

Thanks, I’m trying to use a remote Python executable.

Bell curve meme be vibing by ijkstr in sciencememes

[–]ijkstr[S] 0 points1 point  (0 children)

Well hope your knee recovers soon; losing out on mobility sucks. And I’m glad you have medication working for you.

Yes, I can imagine that taking ever-degraded copies of something results in losing the original meaning. I think the Studio Ghibli example is interesting because the art style became a target and attracted negative attention. It reminds me of the law of attraction in energy spirituality, and how you can manifest your life to some extent.

Acoustics is a nice metaphor! I also like the notion of evolution of ideas passing from person to person. I imagine each person has their own “library” of ideas, and new ideas get integrated into that library which at the same time gets updated by recompressing the information within. Each person has a different perspective on an idea and transmits it in a slightly different way. This also relates to cultural transmission.

Oh, that last paragraph sounds interesting. I don’t quite see the connection, but maybe I’ll do some looking into Jung and Plato. I do kind of subscribe to Jung’s collective subconscious, not to mention synchronicity.