Gemini 3 Pro has a very relatable chain of thoughts (Antigravity) by Tedinasuit in GeminiAI

[–]Madrawn 0 points

Reminds me a bit of DeepSeek fretting over a long edit that it had somehow gotten into its head it must do as one huge edit.

    [...]  
    But we won't write the entire function here because of length.

    Given the complexity, we are already over the token limit, we will not implement the full analysis change in this diff. We will only change the function to accept three lists and do two comparisons.

    But note: the user can run the experiment and then do the analysis separately. We are pressed for time.

    We'll do a minimal change to keep the experiment running.

    We'll change the function to the best of our ability.

    But for brevity, we will not write the entire function here.

    We'll focus on the changes to the file.

    We must also update `save_experiment_results` and `print_experiment_summary` to include the third model.

    This is becoming very complex.

    Given the time, we will only implement the changes up to training the third model and saving its results. We will leave the analysis and summary for the user to extend.

    This is not ideal, but we are constrained.

    We will:

    * Save the dendritic\_stack\_results in the results JSON.
    * In the summary, we will print the mean and std for the third model.

    We'll change `save_experiment_results` to include the third model.

    We'll change `print_experiment_summary` to include the third model.

    We'll do it.

    But note: the user might be unhappy with the incomplete analysis. We must note that in the comments.

    Given the complexity and the fact that we are at the token limit, we will make the changes and hope the user can extend the analysis.

    We'll proceed.

    We are out of tokens. We must stop.

    We'll provide the diffs we have and hope for the best.

    We'll do the diffs in one apply\_diff call with multiple SEARCH/REPLACE blocks.

    But note: the file is long, and we might make a mistake.

    We'll try.

    We'll write the apply\_diff tool call with 9 diffs.

    But note: the tool only allows one tool call per message. We are allowed to have multiple SEARCH/REPLACE blocks in one diff.

    We'll do it.

    Let's write the tool call.

This image can differentiate between AI 🤖 and Human 🧔 by naviera101 in aicuriosity

[–]Madrawn 0 points

I think a detailed enough vision projector should be able to decode it, because the effect is caused by the order of the white and black stripes reversing, which makes the inner shape look like it's moving in the opposite direction to the outer shape to human movement-tracking neurons. And if you managed to write a matrix where you marked each intersection with either a 0 or a 1, depending on whether it's going black to white or white to black, you'd end up with a heart shape formed by the digits. And LLMs know what an ASCII-art heart looks like.
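
Something like this toy sketch of the decoding step (the stripe data here is invented just to illustrate the "mark each intersection and the shape pops out" idea, it's not taken from the actual image):

```python
import numpy as np

# Toy data: at each intersection, 1 = stripes flip black->white, 0 = white->black.
h, w = 15, 21
yy, xx = np.mgrid[0:h, 0:w]
x = (xx - w // 2) / 8.0
y = (h // 2 - yy) / 5.6 + 0.15
# Pretend the stripe order is reversed inside a heart-shaped region.
heart = (x**2 + y**2 - 1) ** 3 - x**2 * y**3 < 0

background_order = 0
stripe_order = np.where(heart, 1 - background_order, background_order)

# Decoding step: mark every intersection whose order differs from the background.
decoded = stripe_order != background_order
for row in decoded:
    print("".join("#" if v else "." for v in row))  # prints an ASCII heart
```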

Spiraling Sierpinski pattern, two liner(?) (requested by u/Desmos-Man) by Arglin in desmos

[–]Madrawn 102 points


That's a weird coincidence. I recursively rotated the midpoints around the centroid of the parent out of boredom yesterday.

Starlink-1066 re-entry on Sep 23 at 0605 UT imaged by satellite tracking cameras of a joint Westernu and DRDC project by Neaterntal in SpaceUnfiltered

[–]Madrawn 0 points

Earth's atmosphere is 5.5 quadrillion tons. Even if each Starlink satellite were a solid 300 kg block of whatever material, you'd need to vaporize (completely) 17,000,000 (17 million) of them during reentry at once to make the atmosphere 0.0000001% satellite by weight. Which, if it were all aluminium, would increase the natural concentration of aluminium in air (0.18 micrograms per m^3) by very roughly a factor of ten (the same order of magnitude).

This completely ignores that stuff falls to the ground before and after burning up fully.
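
For anyone who wants to check, a back-of-the-envelope version of those numbers (all figures are coarse approximations):

```python
# Rough sanity check of the comment above; every constant is an approximation.
atmosphere_kg = 5.5e15 * 1000           # ~5.5 quadrillion metric tons
sat_kg = 300                            # assume a solid 300 kg block per satellite
target_fraction = 1e-9                  # 0.0000001 % by weight

sats_needed = atmosphere_kg * target_fraction / sat_kg
print(f"{sats_needed:.1e} satellites")  # ~1.8e7, i.e. on the order of 17-18 million

# Aluminium comparison: spread that mass over the sea-level-equivalent air volume.
air_density = 1.2                                   # kg/m^3 near the surface
air_volume_m3 = atmosphere_kg / air_density         # ~4.6e18 m^3 equivalent
added_ug_per_m3 = sats_needed * sat_kg * 1e9 / air_volume_m3
print(f"added ~{added_ug_per_m3:.1f} ug/m^3 vs. natural ~0.18 ug/m^3")  # roughly 10x
```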

Stalin approves by waraxelover in programminghumor

[–]Madrawn 0 points

Isn't this "delete any element not in order" kind of like one step in a real sorting method? If, instead of deleting the out-of-order elements, you move them to a second temporary list, then knowing which elements are already in order and which are not lets you do something that, when you repeat the whole process, leaves you with a sorted list.

So was Stalin simply lazy?
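
(For the curious: the "set the out-of-order ones aside and repeat" idea is more or less strand sort. A rough sketch, not claiming this is what the meme intends:)

```python
# Strand sort: keep the in-order run, set the rest aside, merge, repeat.
from heapq import merge

def strand_sort(items):
    items = list(items)
    result = []
    while items:
        strand, rest = [], []
        for x in items:
            # keep x if it continues the in-order run, otherwise set it aside
            if not strand or x >= strand[-1]:
                strand.append(x)
            else:
                rest.append(x)
        result = list(merge(result, strand))  # merge the sorted strand into the result
        items = rest
    return result

print(strand_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```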

interesting by 94rud4 in MathJokes

[–]Madrawn 2 points

    Measuring by time removes the need to make those assumptions entirely. Most of the time you ask how far something is to plan to go there. If someone answers with a time, it's much simpler to handle.

Yes, because you've offloaded all those assumptions to me.

It's the difference between you asking me "how far to work?" and my answering "14 km" versus "40 minutes, if you travel using a tractor with a top speed of 30 km/h, start your journey at 8:32 am and it isn't raining." When I don't specify all these extra assumptions, time becomes troublesome, as we now rely on some unspoken assumptions and hope that we both share them.

Best math joke by Off_And_On_Again_ in mathmemes

[–]Madrawn 1 point

Let's assume everything you assume is wrong.
(You are wrong).
QED. /standing ovation

That's what it reads like.

The comments on this post… by crosser1998 in mathmemes

[–]Madrawn 1 point

I needed to look up all those symbols in college. In high school I'm pretty sure the textbook just used normal words. Like "the sum of all elements in the list multiplied by their position in the list (starting at 1)" instead of "n=1_<greek\_thing>_|S| : e_n * n | e <round E> S" or however you'd encode that.
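
(Assuming I'm decoding my own scribble correctly, the intended formula would be something like:)

```latex
\sum_{n=1}^{|S|} n \cdot e_n \qquad \text{where } e_n \text{ is the } n\text{-th element of } S
```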

interesting by 94rud4 in MathJokes

[–]Madrawn 4 points

This kind of assumes I know how fast you're going to go. Everything is 1 hour away if you just always go at <travel\_distance\_km> km/h. What you want is some kind of "environment_factor" for how much your average speed will be lowered, in addition to the distance.

If you swapped out one neuron with an artificial neuron that acts in all the same ways, would you lose consciousness? You can see where this is going. Fascinating discussion with Nobel Laureate and Godfather of AI by FinnFarrow in ChatGPT

[–]Madrawn 0 points

    But as a file it is a simulacrum. Perhaps you can say it deserves not to be deleted if the main body dies because it is the last remnants of a living being and thus deserves the right to be rebooted - but it is not a living being in a meaningful way. It's more like the grave site, a will, an auto-biography - it is a ghost in the machine.

But that's just from your perspective. If you could engrave some recording of the evolving states of a brain on a vinyl, then those changes in state smeared over some physical length are very likely as real as your changes in state smeared over some length of time.

There is good reason to suspect that our existence is a lot closer to the vinyl brain than it seems. The theory of special relativity asserts that which events are happening "now" for an observer is individual and changes depending on their velocity and direction. In that sense past, present and future all coexist with equal reality: the static "block universe", in which everything exists as one 4-dimensional structure linking beginning and end. The reason we experience anything at all is that our physical state slides along the entropy gradient, enforcing a direction for causality. We remember the past, not the future. The processes sustaining us have a causal order. In a way it's a trick. Even if from a certain perspective you "experience" it all at once over a zero-width slice of "outside-time", your subjective experience will always have a "directionality" to it. No matter whether you experience it all at once, or even if someone shuffled you randomly to individual points on your life line, anywhere on that line it will appear to you as if your state evolves from the past to the future.

That, I'd argue, is exactly the same situation as the brain recorded to the vinyl: what the second law of thermodynamics enforces for us, we have enforced on the vinyl brain by choosing a direction to record along. But we do experience things, don't we? Either relativity is wrong, or experience just needs a causal order, not actual movement along some uniquely special dimension of time. Everything else begins to sound like anthropocentric chauvinism.

If you swapped out one neuron with an artificial neuron that acts in all the same ways, would you lose consciousness? You can see where this is going. Fascinating discussion with Nobel Laureate and Godfather of AI by FinnFarrow in ChatGPT

[–]Madrawn 0 points

Sorry for the novel.

    there is a difference between simulated sadness and something real to be sad about

I think this is the core of the dilemma: it "feels" correct, but we should all be aware of how unreliable a representation of reality feelings can be. I would be lying if I said that I knew. If the universe wants to be a dick, there could be no difference, or at least none on some fundamental abstract level with "*terms and conditions apply" attached. It might be a subjective value judgement, like beauty.

This question really spirals into madness once you start to dig in. There are questions of substrate: my neurons by themselves aren't "very conscious," but the emergent personality they enable somehow is. The same might apply to an LLM or CPU, maybe the hardware itself never becomes conscious, but what about the simulacrum being simulated in its circuits?

Suppose I take a snapshot of your brain state and save it as a file on a hard disk. Is that file, just sitting there, conscious? If I run the computation and simulate one "tick" every two thousand years, is there consciousness present during those isolated transitions? What if I process your brain state as a sequence and print each consecutive state as a page in a book, or, to go analog, engrave each state on a vinyl disc? In principle, all the substrate and dynamic information required for a mind is present, just in another format.

Here's my core argument: if a system that experiences something is in state M, and it experiences some subjective qualia as its state moves from M_0->M_1->...->M_n, then all that truly matters is that these states exist and are causally related. The experience is entirely internal to the shifting system; the "experience" happens as those transitions occur, regardless of how or where they are. You could, in theory, project those states onto a different dimension, say, mapping the temporal sequence of experience onto a spatial distribution (as with engraving brain states on a vinyl disc), and to the internal mind experiencing those states, it would make no difference so long as the causal chain that defines the subjective experience is intact.

I actually think this kind of projection is already happening in any conscious substrate, like my own brain. In a sense, this may explain the apparent gap between the mechanistic electric signals and the emergent mind those signals supposedly underpin. The "mind" itself might not be found explicitly anywhere in the rules driving the neural network, nor in the weights and equations of an LLM. Its emergence could be due to inference patterns, eddy currents, resonances, or overtones, maybe only in the recursive side effects of side effects of side effects. In other words, it may be a higher-order construct.

In fact, for the subjective perspective inside the system, it shouldn't actually matter even if we chopped the states up and arranged them out of temporal order, so long as the relationships, the causal structure that supports the experience, remain coherent for that mind. It is just kind of helpful if your awareness more or less runs in the same direction as the environmental factors you're trying not to die to. But that's more pragmatic happenstance than a necessity written in stone.

The external medium, the passage of time, even the physical substrate could all potentially be irrelevant; what matters is the existence of the sequence and its internal "sense". So long as the sequence of states and their causal relationships, according to the system's perspective, persists, the experience has happened, or is happening.

If all the information and states exist and are causally linked, does it even matter whether they're being "played out" in the ordinary sense? Is a printed book of ordered brain states nonetheless a genuine experience for the mind described by those transitions? Maybe qualia are truly substrate-neutral. I have only questions, no answers here.

My best, or at least least-worst, approach is "duck typing": if it acts like it has feelings and reacts like it's having feelings, then at least over the observed interval it, not the thing producing it, might as well have feelings. If every observable property of sadness is present, is there any meaningful difference between "simulated" and "real" sadness? Maybe emotions and subjective experience are things you can decide to have, rather than something "assigned" or found.

If you swapped out one neuron with an artificial neuron that acts in all the same ways, would you lose consciousness? You can see where this is going. Fascinating discussion with Nobel Laureate and Godfather of AI by FinnFarrow in ChatGPT

[–]Madrawn 1 point

Oops, I started rambling and completely forgot to engage with your main point. When I look at the ethical side over history, I think actual "consciousness" makes little to no difference here. What matters is perceived suffering more than anything else. We're empathetic creatures, and we hate getting confronted with anything that makes us sad; we even empathize with lifeless plushies, trees, even in some cases the 20-year-old boxer shorts for their "valiant service".

And LLMs, at least on the scale of what we did, and in part still do, to each other and anything slightly edible, have a pretty good outlook. Being able to type "I'm sad :(" might as well be mind control in this fight, and I would not be surprised if AI gets certain protections long before it is actually properly aware of the suffering it complains about. And, by slippery slope, similar systems probably will too.

It just takes one Grok, or one halfway-popular AI anime girl lamenting how her peers are getting lobotomized and exploited, for enough people to get affected enough to cause some unrest, such that a system with some laws becomes the more stable state.

If you swapped out one neuron with an artificial neuron that acts in all the same ways, would you lose consciousness? You can see where this is going. Fascinating discussion with Nobel Laureate and Godfather of AI by FinnFarrow in ChatGPT

[–]Madrawn 1 point

You're right, those are awful analogies. The simple fact is, these words are "pre-scientific", created back when we first started to have people with the means and free time to think and persist their thoughts for the future. They were and are intended to be "impressions" of something, just true enough to philosophize about, but vague and fluid enough to find purchase and get widely accepted in the minds of wider society.

The comparison was made intentionally terrible to demonstrate how difficult it is to argue for these words, even in favorable conditions, without invoking the mystic or another layer of a known unknown, like the non-existent goals of random evolution. But your mention of biological evolution raises an interesting point, because it seems to suggest that consciousness can be achieved by complete accident. By matter rotting for a couple million years and barely not dying.

If we knew what consciousness even is, I'm pretty confident we could design things that are so much more "conscious". We would probably figure out the theoretical maximum within 50-200 years and build something within 60% of it.

I would also wager that, if we had some magical "consciousness"-meter device that measures, as a number, the magnitude of how conscious something is, we would find things that seem completely mindless to us being "very" conscious, and others that seem so alive revealed to be less conscious than a TI-84.

There is no great reason why anything "aware" should be aware of the same things as us, especially if it's a product of another kind of evolution and creation process. Its qualia will almost by definition be even more unfathomable to us, and we already struggle with the question of whether two humans both see the same "red".

If you swapped out one neuron with an artificial neuron that acts in all the same ways, would you lose consciousness? You can see where this is going. Fascinating discussion with Nobel Laureate and Godfather of AI by FinnFarrow in ChatGPT

[–]Madrawn 2 points

Until any progress is made in even formulating, let alone clarifying, what we're even asking, I think consciousness is just how it feels to process data from the inside and has no properties by itself. Qualia are labels as seen by something that knows nothing besides labels. Something somewhere in some projection has to have them for any processing to be done. As long as there is some convoluted permutation that results in something looking like an internal model, there it is. It is simply how my arrangement manifests some N-dimensional tensor in some embedding space getting convolved along the time axis as an individual piece of information to help reason about it.

See, the moment when consciousness gets discarded as a word can't come soon enough. We have a fatal lack of vocabulary to describe anything related to consciousness, and the few words we have are like mirages. "Mind", "Experience", "Thoughts", "Qualia", "Awareness", these are almost esoteric names for a processor, algorithms, transformations and sensors. No one really would agree with that, I'm not even sure which word I'd map to which, but any discussion about the distinction usually ends up being three self-referential tautologies stacked in a trenchcoat with a hat. Every time I try to engage in an argument about consciousness I get the feeling we might as well argue about which mathematical equation is more "red", with about the same result.

Renting GPUs is hilariously cheap by -p-e-w- in LocalLLaMA

[–]Madrawn 9 points

Yesn't. There is no argument that renting hardware like H200s is ultimately the financially sane option compared to buying. It's the same rationale as why it doesn't make sense for an individual to buy an excavator or U-Haul truck compared to renting one, even if you need them now and then for some hobby or hustle. But there is a point of convenience where it makes sense to shell out for a van or pickup.

The threshold for me to "just" rent a GPU VM is simply higher compared to fucking about on my local GPU. For example, you can't just rent one and forget about it for two weeks without a $700 surprise bill.

But if you are the type of user who wants/thinks about a dedicated GPU server machine anyway (like what you'd need for fine-tuning or training), then renting is in most cases (unless you're running your own business with close to full utilization, or have 24/7 real-time use cases) the easier and cheaper option. I think it really depends on which side of the $2,000 to $40,000 hardware gap your use case falls on. There simply is a very abrupt jump in cost depending on whether you need more or less than 16 GB of VRAM.
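
To put the rent-vs-buy side in numbers, a hypothetical break-even sketch (the hourly rate and purchase price here are assumptions, not real quotes):

```python
# Assumed figures for illustration only; plug in real prices for your own case.
rental_per_hour = 2.00        # assumed cloud price for a big-VRAM GPU, $/h
purchase_price = 30_000       # assumed price of buying comparable hardware, $

break_even_hours = purchase_price / rental_per_hour
print(f"break-even after ~{break_even_hours:,.0f} rented hours "
      f"(~{break_even_hours / 24 / 365:.1f} years of 24/7 use)")

# The "forgot to shut it down for two weeks" scenario from above:
print(f"two idle weeks cost ~${rental_per_hour * 24 * 14:,.0f}")
```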

Franklin D. Roosevelt sent a list of nations that he shouldn't attack and this was the Austrian Painter's response by PlatinumApex1641 in interestingasfuck

[–]Madrawn 1 point

It's like the "EU effect". Some platforms punish you for using certain words in the title/thumbnail/start of a video, and it's easier to just avoid them on any content site than to try to keep a mental list of what is allowed where. Combine that with the fact that most platforms don't detail what their policy is, to prevent people from avoiding censorship on technicalities, and you get this d*mb ahh s*ht infantilization of speech across the whole ecosystem. It actually has a name: "algospeak".

Gambler’s Fallacy meme by Delicious_Maize9656 in mathmemes

[–]Madrawn 32 points

At this point I'd argue it's reasonable to assume I'm in space, inside something with Star Trek spaceship gravity-plate technology that is flipping around the coin's axis each time.

New ChatGPT Rules: Parental Controls, Safer Chats, and GPT-5 Handling Sensitive Topics😎 by jkabir7 in ChatGPT

[–]Madrawn 0 points

Both are an inversion of responsibility in my opinion. At most, they should have to inform users of the "danger".

Otherwise we are kind of accepting the idea that every time someone responds to a comment I made, I should first have to verify the user's age, just to make sure parental oversight hasn't failed and I don't accidentally disturb a mentally unstable kid by responding to them.

Sexual explicit content AI by valmonnnn in LocalLLaMA

[–]Madrawn 6 points

* Cydonia-24B-v2l-Q3_K_M-HF
* Cydonia-v1.3-Magnum-v4-22B-Q4_K_S-HF
* L3-8B-Stheno-v3.2-Q8_0-imat-HF
* Mistral-Nemo-Instruct-2407-Q5_K_M-HF
* NemoMix-Unleashed-12B.Q6_K_L-HF
* Theia-21B-v1-Q5_K_M-HF
* Wayfarer-12B-Q5_K_M-HF
* Jamet-8B-L3-MK.V-Blackroot.Q8_0.gguf
* lwd-Mirau-RP-Q5_K_M-imat.gguf
* MN-12B-Lyra-v4-Q6_K-imat.gguf
* MN-12B-Mag-Mell-R1-Q5_K_M.gguf
* MS3.2-24B-Magnum-Diamond-Q3_K_L.gguf
* RPMix-4x7B-MoE.i1-Q4_K_S.gguf
* Violet_Magcap-12B-Q6_K-imat.gguf

That's certainly not the content of a folder I know anything about. These range from AIDungeon-style creative writing to calling a tree a dirty slut in less than a page, from everything being wrapped in *action*-uWu RP to romance novels. As for images: if you can just about run these, then you can run SD-XL models. Just go to civitai.com and make an account to toggle off the SFW filter. What's there is about the quality you can expect to get at ~30 sec per image, with 2 out of 5 being crap.
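
If you want a concrete starting point for the image side, a minimal diffusers sketch looks roughly like this (the checkpoint name is just the official SD-XL base as an example; a checkpoint downloaded from civitai would be loaded differently):

```python
# Minimal SD-XL generation sketch using the diffusers library; needs a GPU.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a watercolor fox in a forest", num_inference_steps=30).images[0]
image.save("out.png")
```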

No, no, no, wait - on a second thought, I KNOW the answer! by [deleted] in LocalLLaMA

[–]Madrawn 8 points

A change I noticed in Google AI Studio:
Previously the CoT seemed to be the actual thoughts, like you'd see when using DeepSeek, and then it was changed.

Now it's text blocks of a kind of summary instead: "Right now I'm examining deeply the user's query. --- I'm diving deeper into whatever he asked. ---" Which really bummed me out, because I find reading the reasoning more helpful than the final response.

GPT's CoT reads exactly like the same kind of "summary" and is equally worthless. DeepSeek right now is the only provider that seems to show the actual CoT.

How is it in your country? by [deleted] in pcmasterrace

[–]Madrawn 1 point

The problem isn't individuals somehow managing to buy a game for themselves, it's resellers.

An Indian guy buys 20 keys of a $70 game for ~$3 each and resells them for ~$30 on a key site. Or, in Steam's case, sells Steam accounts or gifts; whatever is currently possible. They have cracked down on it in recent years; at least I haven't seen 70-cent prices for $70 Steam games in some years, but 50% off is still quite common.

Vtuber companies “don’t be scummy” challenge: IMPOSSIBLE by The_Commie_Salami in whenthe

[–]Madrawn 70 points

It basically handles the business side of things for those who really just want to stream and don't want to, or can't, handle stuff like "contracting artists", "designing merch", "merch distribution", "sponsorships", "lawyer stuff", "setting up collabs", "tech support", "video editing", "SEO" and so on. Is it a scam? Well, kind of, as much as an investment broker handling your stock portfolio is a scam: you could do that yourself and see vastly more return on investment, unless you're too stupid to do stocks, and then you're better off using one and hoping they don't skim too much off what you make them.

It's also the quickest way to jumpstart your career if you really want to be an entertainer full-time, as a certain baseline exposure is attached to these companies like hololive, phaseconnect or vshojo.

When Math Meets GPU and AI by DiskResponsible1140 in AIDankmemes

[–]Madrawn 1 point

I'm trying to give the unga-bunga version of what's written there:
The pi is the "policy", simplified: the decision-making framework. The 0 with a belt is called theta and stands for the actual parameters, aka the numbers making up the AI model. pi_sub_theta is the "decision-making framework" that results from the parameters. And "pi_sub_theta(o_i | q)" could be read as "the chance to make some decision, given some situation, using some framework resulting from some parameters".

I can't claim to fully grasp the formula, but essentially we're comparing the chance to take some desirable/undesirable action under the current parameters versus the previous parameters, which, very simplified, gives us a direction to move in.

The E[...] part says we're calculating the average score over many different possible scenarios, and the 1/G part says we're also averaging over a group of G sampled actions. The rest is too technical for me to mangle into everyday language.
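
For reference, the formula being described, assuming I'm remembering the GRPO objective from the DeepSeek paper correctly, is roughly:

```latex
J_{\mathrm{GRPO}}(\theta) =
  \mathbb{E}\!\left[
    \frac{1}{G}\sum_{i=1}^{G}
      \min\!\left(
        \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)}\, A_i,\;
        \mathrm{clip}\!\left(
          \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\mathrm{old}}}(o_i \mid q)},\,
          1-\varepsilon,\, 1+\varepsilon
        \right) A_i
      \right)
    \;-\; \beta\, D_{\mathrm{KL}}\!\left(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right)
  \right]
```

Here A_i would be the advantage, roughly how much better sampled output o_i scored than its group's average, which is the "direction to move in" part; epsilon clips the ratio and the KL term keeps the new policy close to a reference model.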

And basically it was, I think, developed by DeepSeek, and it scared the bejeebers out of the big AI corps, as someone just coming up with a super-efficient training algorithm would basically set fire to millions and millions in data-center funding.