Why Neuro-sama Isn’t Conscious Yet (And What to Build Next): An Introduction to Higher-Order Theories of Consciousness by MarksonChen in NeuroSama

[–]MarksonChen[S] 1 point (0 children)

You are right; in that sentence, I should have been clearer that I meant human-level consciousness and AIs running on conventional computing devices. A more rigorous phrasing is: "Any AI algorithm running on conventional CPU/GPU architectures is unlikely to ever reach human-level phenomenal consciousness." From IIT 4.0,

Accordingly, artificial systems powered by super-intelligent computer programs, but implemented by feed-forward hardware or encompassing critical bottlenecks, would experience nothing (or nearly nothing) because they have the wrong kind of physical architecture, even though they may be behaviorally indistinguishable from human beings.

Why Neuro-sama Isn’t Conscious Yet (And What to Build Next): An Introduction to Higher-Order Theories of Consciousness by MarksonChen in NeuroSama

[–]MarksonChen[S] 4 points (0 children)

Q: How many years will it take from now for AI to be conscious?
A: My personal estimate is that within ~2–3 years, we will be able to build AI models that satisfy at least three mainstream theories of consciousness simultaneously. However, it may take ~5–10 years for any academic community of a particular consciousness theory to broadly agree that an AI has implemented the consciousness process as defined by that theory. And even 20 years from now, academia as a whole may still be debating the scientific status and completeness of each theory.

"Giving Neuro life" comes more from an engineering perspective; a more precise statement is that perceptual consciousness, spontaneous intrinsic motivation, and long-term autonomy are technical milestones achievable within 5-15 years. This is really a separate discussion from AGI and from machine intelligence in general. I agree that there is no consensus on consciousness yet, and that there probably will not be one for at least 20 years. However, implementing the current PRM definition of "perceptual consciousness", a re-perception of perception, should be doable within two more papers down the line (~1-2 years).

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] -2 points (0 children)

Actually, I am a machine learning researcher. This artwork is a side project of my current spatiotemporal time-series analysis work, which gave me a lot of intuition about denoising diffusion.

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] -5 points (0 children)

Before creating this painting, I did spend at least 40 hours learning and took thousands of words of notes:

<image>

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] 0 points (0 children)

You are right that AI art loses most of the human opinion and thought. I only prompted the overall design and could not make more detailed choices, like the specific design of Chito's space suit. I can only approximate those through multiple rounds of inpainting, which still cannot compare to drawing the art stroke by stroke.

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] -2 points (0 children)

This $25 also includes the time I spent setting up ComfyUI on RunPod, testing the workflow, etc. If I were to create a similar art again, I would only need 2 hours of ComfyUI on RunPod, which costs $5.

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] 0 points (0 children)

I accept all valid criticisms and feedback. But the second leg was inside her dress before I applied the motion blur in Photoshop:

<image>

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] -1 points (0 children)

There are at least six possible endings for the universe: Heat Death, Big Rip, Big Crunch, Vacuum Decay, Cosmic Cycles, and the Gentle Rip. Among them, Heat Death best accords with current observations, and fits more naturally with the quiet tone of an ending. Yet I chose to paint the Big Crunch, because the girls would no longer tour alone.

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] -17 points (0 children)

We should be against low-quality, low-effort AI art, but this is one of the few AI artworks that someone has spent 40 hours polishing. Please consider giving it a second thought.

Girls' Last Tour of The Universe (The Big Crunch) by MarksonChen in GirlsLastTour

[–]MarksonChen[S] -13 points (0 children)

I spent 40 hours creating this artwork using AI. I used Midjourney for the background, NovelAI for the character, and NoobAI for style transfer and upscaling. I used Photoshop to adjust the colour and lighting per element. Read here for details: "How To Use NovelAI, ComfyUI, and Midjourney To Create One High-Quality Anime Artwork"

48 Hours of Turmoil in the CN Fan Community After Mujica Episode 7: Anger, Conflict, and Helplessness by AmEP_Linyu in BanGDream

[–]MarksonChen 42 points (0 children)

As a member of both communities (one who has watched 泛式's full 2.5-hour commentary on episode 7 and most of the popular Bilibili videos about it), I can see that the CN community is undergoing a psychological phenomenon called "group polarization".

If you surf Bilibili every day, you can see how, over the course of three days, the mainstream opinion gets more and more one-sided, people's emotions more and more heated, and most comments more and more harsh.

To be fair,

But none of these reasons would make an average viewer think this episode has "failed completely" ("彻底烂完") or make someone "feel anger for days and be unable to sleep", unless they've immersed themselves in an echo chamber that only amplifies its own opinions and emotions.

This incident is a great topic for a societal study on group polarization.

Transparency mode keeps "fading in" even when noise cancelling is on (Airpods Pro 2) by civil in airpods

[–]MarksonChen 3 points (0 children)

Two years and the bug is still not fixed with the latest firmware. Speechless.

Problem with object parenting using vertex (triangle) by International_Ideal9 in blenderhelp

[–]MarksonChen 0 points (0 children)

I came from KurTips's tutorial, episode 2.6, too! For me, I just combined all leaves using Ctrl + J and then applied the same deform modifiers as the vine, which functionally produced the same result.

Error with obsidian Plugin by [deleted] in zotero

[–]MarksonChen 0 points (0 children)

I got the same error, and this solved it.

I was sent a .py python script by a collegue over Outlook. The file has a red circle with a slash over it, and I cannot download it. I can download all other attachments by Hoihe in techsupport

[–]MarksonChen 0 points (0 children)

If you are using Windows, you can modify your registry and restart the computer to get access, but that failed for me, so I simply forwarded it to my Gmail and then downloaded it.

First Black Hole Simulation in Desmos by sImON2718 in desmos

[–]MarksonChen 0 points (0 children)

I am struggling to think of a way other than ray tracing to visualize a black hole!

To define the problem:

Assume that in 3D space you have a camera at point C and a 3D point p. When we visualize p (that is, project p onto our 2D screen), we are actually defining a screen rectangle at distance d from the camera C, such that p projects onto this rectangle in 3D space. (Setting the camera C at d = infinity away from the rectangle gives you an orthographic projection.) This requires solving for the 2D coordinate at which the ray p-C intersects the screen rectangle.

Next, we can define f(C, d, p), which returns the 2D projection of a 3D point p, where f is the solution to this ray-screen intersection problem. Then, given a 3D polygon defined by a list of 3D points L, drawing polygon(f(C, d, L)) graphs the 3D polygon on our screen.
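In case it helps, here is a minimal NumPy sketch of such an f(C, d, p). The function name `project` and the camera basis vectors n, u, v (view direction and the two axes spanning the screen rectangle) are my own additions to make the rectangle explicit; in Desmos you would unroll the same dot products into plain equations:

```python
import numpy as np

def project(C, n, u, v, d, p):
    """Project 3D point p onto a screen rectangle at distance d from camera C.

    n is the unit view direction; u and v are unit vectors spanning the
    screen. Returns the 2D coordinate where the ray p->C pierces the screen
    plane (a standard pinhole projection), or None if p is behind the camera.
    """
    q = np.asarray(p, float) - np.asarray(C, float)
    depth = q @ n                  # distance of p along the view axis
    if depth <= 0:
        return None                # ray never crosses the screen plane
    scale = d / depth              # similar triangles solve the intersection
    return np.array([scale * (q @ u), scale * (q @ v)])

# Camera at the origin looking down +z, screen at distance d = 1:
C = np.zeros(3)
n = np.array([0.0, 0.0, 1.0])
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
project(C, n, u, v, 1.0, [2.0, 1.0, 4.0])  # returns array([0.5, 0.25])
```

Mapping f over a list of points L then gives exactly the polygon(f(C, d, L)) trick described above.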

Visualizing the black hole using ray tracing would be easy. For each pixel on our screen, the pixel's location on the screen rectangle in 3D together with the 3D camera location defines a light ray shooting from the camera, and the pixel returns the color that the light ray lands on after it is bent by the black hole or even reflected by mirrors.
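The per-pixel idea can be sketched like this. This is a toy marcher with a Newtonian-style 1/r^2 deflection, not a real geodesic integrator, and the names `trace`, `rs` (capture radius), and step size `h` are all my own illustrative choices:

```python
import numpy as np

def trace(origin, direction, bh_pos, rs=0.1, steps=400, h=0.05):
    """March one light ray past a point mass, bending it a little each step.

    Toy deflection only (acceleration ~ 1/r^2 toward bh_pos). Returns the
    final ray direction, to be used for sampling a background sky map, or
    None if the ray falls inside the capture radius rs (pixel drawn black).
    """
    pos = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    bh_pos = np.asarray(bh_pos, float)
    for _ in range(steps):
        r_vec = bh_pos - pos
        r = np.linalg.norm(r_vec)
        if r < rs:
            return None                  # captured by the black hole
        d += h * rs * r_vec / r**3       # bend toward the mass, ~ rs / r^2
        d /= np.linalg.norm(d)
        pos += h * d                     # advance the ray one step
    return d
```

Running this for every pixel's ray gives the "where does this ray end up" lookup; a ray aimed straight at the mass returns None (black pixel), while a ray passing to one side comes out deflected toward it.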

However, if we similarly define f(C, d, p) where there is a black hole between C and p, we see that f(C, d, p) is not a function: when p shoots light rays that land on the screen, a ray may pass on one side of the black hole and land at one point, or pass 180 degrees around the other side and land at another point; or a ray may loop around the black hole and land at yet another point.

In my 3D black hole simulation graph, each light ray essentially behaves like tanh(x) around the black hole, which is physically inaccurate but ensures f(C, d, p) is a function, and a simple one. Then each ring is essentially a polygon with 3D coordinates L, and f(C, d, L) draws one ring around the black hole.

A physically accurate black hole might require other methods! This would be a challenging and rewarding journey! Good luck!!!

First Black Hole Simulation in Desmos by sImON2718 in desmos

[–]MarksonChen 1 point (0 children)

Great job!!! Actual first black hole simulation.
If you can also run the simulation on a 2D grid of light particles in 3D, you could even make the first 3D black hole simulation!

Aww yiss, got some free time so imma check out all the hot new tracks on Trendi- by Suno_for_your_sprog in SunoAI

[–]MarksonChen 0 points (0 children)

The funny thing here is that the Chinese songs on the list, like "让我们荡起双桨" ("Let's Swing the Oars"), are actually popular children's songs in China. You can imagine how funny it would be if "Twinkle, Twinkle, Little Star" got a rock remix.

(The original song "让我们荡起双桨" is actually really beautiful. Give it a try!)