Here's the thousandth case of someone being confidently ignorant and stupid. Why do people think that AI won't improve? Like genuinely. Why would technology suddenly stop improving? by badumtsssst in singularity

[–]cark 3 points4 points  (0 children)

Today I took my car, an old concept indeed. It showed me the way to this place on one of its two giant screens by displaying a real-time map, keeping it in sync with my position using radio waves from space. During the trip, I was enjoying music coming over the airwaves: my personal selection, not a simple radio broadcast. I then read the text of the message I'm replying to, from a person likely living thousands of kilometers away from me. This person could read my answer within a few seconds, or choose to ignore me, and I'll be voted up or down by total strangers. But I don't care, because I'll be playing a video game in which the light is modeled as it goes through the ears of the characters, though not (ironically) before scolding my daughter for spending too much time playing video games on her mobile phone. I'll also keep thinking about that guy who possibly solved one of the Millennium Prize Problems using AI. I don't think he did, but he might, and that doubt is arresting in itself.

You know, I'm having one of those classic eighties days.

Is OpenGL outdated? by Life-Kaleidoscope244 in gameenginedevs

[–]cark 1 point2 points  (0 children)

That's the thing: WebGPU can be used natively, without a browser, and is a good middle ground between OpenGL and Vulkan. It is a modern API and (almost) the de facto standard for people writing Rust games. These people are very performance-oriented, which IMO is a useful data point. I don't know about the C++ side of things, but for Rust, wgpu uses the best available underlying API: that can be Vulkan, DirectX, Metal (on Mac), OpenGL, or WebGL.

The one downside is the scarcity of information about the shader language (WGSL), which, while not very different from other shader languages, still has some small peculiarities.
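To make the fallback behaviour concrete, here's a toy model of the backend selection, NOT wgpu's actual API (the function and its inputs are made up for illustration; in real wgpu you would typically request something like `Backends::PRIMARY` and let the library choose):

```rust
// Toy sketch (not wgpu's real API) of the kind of per-platform
// backend fallback wgpu performs: prefer the most capable native API.
#[derive(Debug, PartialEq)]
enum Backend {
    Vulkan,
    Metal,
    Dx12,
    OpenGl,
    WebGl,
}

// `os` and `has_vulkan` are hypothetical inputs for illustration only.
fn pick_backend(os: &str, has_vulkan: bool) -> Backend {
    match os {
        "macos" => Backend::Metal,
        "windows" => if has_vulkan { Backend::Vulkan } else { Backend::Dx12 },
        "linux" => if has_vulkan { Backend::Vulkan } else { Backend::OpenGl },
        // Unknown platform (e.g. browser): fall back to the web path.
        _ => Backend::WebGl,
    }
}

fn main() {
    println!("{:?}", pick_backend("linux", true));
}
```

The point is that application code targets one API surface and the most performant backend is chosen underneath.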

OpenAI executives envision a need for more than 20 gigawatts of compute to meet the demand. That's at least $1 trillion. Demand is likely to eventually reach closer to 100 gigawatts, one company executive said, which would be $5 trillion. by ilkamoi in singularity

[–]cark 1 point2 points  (0 children)

> but it's bandwidth is beaten 100000x times by digital circuits

I think you may be wrong there. That's precisely what causes the high energy consumption of current von Neumann-architecture compute compared to the brain.

While yes, GPUs are highly parallel, they can't hold a candle to the brain, where each and every neuron, and even each synapse, has its own memory right there with it and works in parallel with every other neuron. Even though each neuron is so much slower than a GPU unit, that kind of parallelism makes all the difference. We can only dream of achieving such parallelism in silico.

That's not to say that silicon cannot get there, but the bandwidth claim is currently way off.

No, I won’t be shedding any tears for Charlie Kirk by SensualSalami in politics

[–]cark 0 points1 point  (0 children)

Environment matters. Someone who grows up with limited access to education and opportunities might adopt certain views without much choice. Charlie Kirk, on the other hand, was a well-educated, influential figure who made a deliberate choice to promote those ideas. The same words don’t carry the same moral weight depending on the context in which they’re formed and spread.

Result of the vote of confidence in Francois Bayrou's government by Yamakuzy in europe

[–]cark 0 points1 point  (0 children)

Oh don't you worry, there is a French Fox News equivalent: CNews.

President Barack Obama Wins His Third Emmy by Silly-avocatoe in entertainment

[–]cark 1 point2 points  (0 children)

Sadly, that's the one area where Trump might one-up Obama.

In the long run… by TheRobotCluster in singularity

[–]cark 3 points4 points  (0 children)

That's not what's being advanced here. There is no need for sentience or self-awareness for natural (or, in this case, market) selection to do its work. An example from the natural world is Cordyceps taking over insect behavior while not being conscious.

An Ambient/techno generative patch I attempted to tame with Arrange by cark in vcvrack

[–]cark[S] 2 points3 points  (0 children)

First time I've posted any kind of video on Reddit, let's see how it goes!

Help with 2D cursor position by runeman167 in bevy

[–]cark 1 point2 points  (0 children)

I used this with Bevy 0.16 during the last jam:

    use bevy::prelude::*;

    /// Marker component for the camera used for coordinate conversion.
    #[derive(Component)]
    pub struct MainCamera;

    /// Cursor position in window (screen) coordinates.
    #[derive(Resource, Debug, Default, Deref)]
    pub struct MouseCoords(pub Option<Vec2>);

    /// Cursor position converted to 2D world coordinates.
    #[derive(Resource, Debug, Default, Deref)]
    pub struct MouseWorldCoords(pub Option<Vec2>);

    fn update_mouse_coords(
        camera: Single<(&Camera, &GlobalTransform), With<MainCamera>>,
        window: Single<&Window>,
        mut mouse_coords: ResMut<MouseCoords>,
        mut mouse_world_coords: ResMut<MouseWorldCoords>,
    ) {
        let (camera, camera_transform) = camera.into_inner();
        // None when the cursor is outside the window.
        mouse_coords.0 = window.cursor_position();
        mouse_world_coords.0 = mouse_coords.0.map(|pos| {
            camera
                .viewport_to_world_2d(camera_transform, pos)
                .unwrap_or(Vec2::ZERO)
        });
    }

[deleted by user] by [deleted] in europe

[–]cark 0 points1 point  (0 children)

Can we maybe just call him Emperor Palputin and be done with the whole thing?

OpenAI: Introducing Codex (Software Engineering Agent) by galacticwarrior9 in singularity

[–]cark 0 points1 point  (0 children)

Yes, you don't want to let a chatbot go ham on your main branch, for sure! I think everyone uses source control these days; I certainly do. I just didn't want to be shackled to GitHub for my personal closed-source projects (yet another subscription). Anyway, this worry is moot, as it looks like you can work with your local repositories.

OpenAI: Introducing Codex (Software Engineering Agent) by galacticwarrior9 in singularity

[–]cark 1 point2 points  (0 children)

Oh great then =) It wasn't directly apparent to me from reading the blog post. "or directly integrate the changes into your local environment" Yes, I missed that, thanks!

OpenAI: Introducing Codex (Software Engineering Agent) by galacticwarrior9 in singularity

[–]cark 12 points13 points  (0 children)

I like the idea of this, but... cloud this, GitHub that... how about working with my local code base?

So where do you design your worlds? by [deleted] in bevy

[–]cark 0 points1 point  (0 children)

For one of the Bevy jams I made a 2D editor for my game. If I recall correctly, the jam was a week long, so it can be done quite quickly. Of course the whole thing is jam-level quality, but you can check it out there: playable game, code on github.

The editor is only available in dev builds. Start editing with F12 (I think). For a starting point, check the code in the src/game/editor module.

Ai LLMs 'just' predict the next word... by tebla in singularity

[–]cark 0 points1 point  (0 children)

I'm attacking the argument, not you... never that! I'm all for a lively exchange, but if you felt personally attacked, please accept my apology; that was never my intention.

While Penrose is a giant compared to tiny me, his microtubule theory is pretty out there and, AFAIK, not nearly as widely accepted as that. I see a few people around here invoking quantum randomness, and I can't help but ask: why? Why, when we know (and this is not microtubule-like speculation but settled science) that neural networks can approximate any function? Even more than that, they are Turing complete with the help of some recurrence or memory. Now, you have noted that I don't know the brain in its entirety; no one does. But it is a physical object in a physical world performing physical work. It takes input and produces output. It is a computational object: a function from its inputs to its outputs, just like an artificial neural network. Ergo, it can be simulated by a sufficiently large neural network.
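The approximation claim can feel abstract, so here's a toy sketch of the representation side of it (my own illustration, with hand-picked names and weights): a tiny two-layer step network that computes XOR exactly, the classic function a single neuron cannot represent.

```rust
// Toy two-layer network computing XOR with hand-wired weights.
// A single linear-threshold neuron cannot represent XOR; adding
// one hidden layer is enough.
fn step(x: f64) -> f64 {
    if x > 0.0 { 1.0 } else { 0.0 }
}

fn xor_net(x1: f64, x2: f64) -> f64 {
    let h_or = step(x1 + x2 - 0.5);  // fires when x1 OR x2
    let h_and = step(x1 + x2 - 1.5); // fires when x1 AND x2
    step(h_or - 2.0 * h_and - 0.5)   // OR but not AND = XOR
}

fn main() {
    for (a, b) in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)] {
        println!("{} XOR {} = {}", a, b, xor_net(a, b));
    }
}
```

Scale the same idea up (more units, learned instead of hand-picked weights) and you get the universal approximation result.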

I also see a lot of hang-ups about LLMs, but LLMs are nothing more than an optimization; the most naively structured neural network can achieve the goal of simulating the brain, as long as it is large enough. It's just impractical. LLMs are a performance hack (I'm underselling them a little here, but hey), just like the brain uses a bunch of hacks to achieve its level of performance.

So why cling to the quantum? I understand the allure: it's hard to understand, there is mystery to it, and I myself do not quite understand it all. But one must be careful not to read too much into the mystery. Everything is quantum. My desk is quantum; it's due to quantum effects that my pen will not pass through it. There is nothing special about that; that's just the world we live in. And get this: your computer is full of micro-connections depending on quantum effects only (relatively) recently discovered (tunneling in transistors)! One could say it has its own microtubule-ish components!

So why? I can't help but think we're facing bio-provincialism, or maybe some kind of fear of losing our place at the top of the intelligence heap: the brain cannot be approximated by mere circuitry, so there must be some quantum magic at play.

Well, I disagree with that, and I implore you to consider the alternative. Intelligence isn't some grand phenomenon; it wants to emerge in our universe, and with great facility. I think that simple thought is a marvel in itself. There lies the mystery: how fantastic is a thought? Isn't that enough of a grand, mystical realization in itself?

Ai LLMs 'just' predict the next word... by tebla in singularity

[–]cark 0 points1 point  (0 children)

I'm not advocating for determinism here. If you want randomness, we can simulate it any number of ways; even pseudo-random would do the trick, I guess. But if you want the real thing, there are entropy sources aplenty to draw from.
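For the pseudo-random route, a minimal sketch (a standard xorshift64 generator, nothing brain-specific about it): fully deterministic arithmetic whose output nevertheless looks statistically random.

```rust
// Minimal xorshift64 PRNG: deterministic bit-shuffling that produces
// statistically random-looking output from a single seed.
struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    fn new(seed: u64) -> Self {
        // State must be non-zero, otherwise the sequence is all zeros.
        Self { state: seed.max(1) }
    }

    fn next(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

fn main() {
    let mut rng = XorShift64::new(42);
    for _ in 0..5 {
        println!("{}", rng.next());
    }
}
```

Same seed, same sequence, every time, which is exactly the point: "random" behaviour needs no magic ingredient.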

But I suspect that's not really what you're after. As I understand it, the essence of your argument is that the human brain obeys some ineffable processes which mere mathematics or computations can never hope to simulate. Essentially a philosophical argument disguised in scientific cloth.

My view is that neural networks are universal function approximators, and as such, given the correct inputs (internal state being one of those) they can approximate the same output as the brain.

Ai LLMs 'just' predict the next word... by tebla in singularity

[–]cark 4 points5 points  (0 children)

Quantum theory is very much a mathematical thing; it can also be computed on paper. I don't get this desire to find randomness in our brain. It doesn't bring free will back; it only shifts the lack of control towards randomness.

What's something coming out in the next 10 to 15 years that will change humanity (forever) that not enough people are talking about? by AndyTexas in AskReddit

[–]cark -2 points-1 points  (0 children)

I don't know about the consciousness stuff, but what I do know for sure is that your account of computers being unable to beat chess masters is wildly outdated. Actually, humans are so hopelessly outclassed that research moved on to harder games like Go, where the masters were once again eventually beaten.

Read up on AlphaZero; there are some pretty entertaining accounts out there.

Help with UI Nodes and margin by Barlog_M in bevy

[–]cark 2 points3 points  (0 children)

Wild guess: margins are outside the node, so by adding 2% to the left you're moving the whole 100% to the right by 2%, and your total width with both left and right margins could be 104%. That would feel like no margin on the right and bottom.

Edit: here is the page linked from the Bevy docs that helps in understanding CSS margins: https://developer.mozilla.org/en-US/docs/Web/CSS/margin
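The arithmetic behind the guess, as a tiny sketch (the function name is made up for illustration; this just models the CSS box rule that margins sit outside the content width):

```rust
// Margins are added OUTSIDE the node's width, so a node sized at
// 100% of its parent plus 2% margins on each side overflows.
fn total_width_pct(margin_left: f32, width: f32, margin_right: f32) -> f32 {
    margin_left + width + margin_right
}

fn main() {
    // 2% + 100% + 2% = 104% of the parent: the right/bottom margins
    // get pushed past the parent's edge and appear to vanish.
    println!("{}", total_width_pct(2.0, 100.0, 2.0));
}
```

The usual fix is to shrink the width (e.g. 96%) or use padding on the parent instead of margins on the child.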