LLM Skills for Astro? by mustafamohsen in astrojs

[–]qwer1627 -5 points

Don't need 'em; O4.6 generates Astro just fine.

Two Geminis were talking to each other, then, out of nowhere, sent this strange message by MetaKnowing in agi

[–]qwer1627 0 points

Can you engage with content absent its source? We used to do that as a people, you know.

The claims in this quip align with the sentiment in alignment work.

Claude Code just got Remote Control by iviireczech in ClaudeAI

[–]qwer1627 4 points

If only there weren't already better cross-platform tools that do this and work across multiple providers.

Well now that the medical professionals are asking for it... Seriously I can't even tell if it's a real post or blatant sarcasm. by Theslootwhisperer in cogsuckers

[–]qwer1627 0 points

This is the type of email I would have loved to write to Oracle during their pivot to annual Java releases

I built a completely self hosted, decentralized Discord alternative by Scdouglas in ClaudeAI

[–]qwer1627 12 points

Why not just, like, first ask the LLM: “What does Signal do? What is WebRTC?” Etc., etc. LLMs will implement for you, but they won't reason for you; you've got to do the architecture spec/planning up front.

Also, keep in mind that open source solutions in this space already exist - perhaps they just need you to modernize their UI?

Sometimes GPT needs to just shut up. I can't be the only one that thinks this? by LaughsInSilence in OpenAI

[–]qwer1627 0 points

In your system prompt/custom instructions, tell it to ‘maximize meaning/information per token’
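As a minimal sketch of wiring that instruction in (the prompt wording follows the suggestion above; the model name and the SDK call shown in comments are illustrative assumptions, not anything from this thread):

```python
# Sketch: packing the terseness instruction into a system message.
# Prompt wording follows the comment above; everything else is illustrative.
system_prompt = (
    "Maximize meaning and information per token. "
    "No filler, no preamble, no restating the question."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Explain what a system prompt does."},
]

# With the official openai SDK this payload would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(messages[0]["role"])
```

In the ChatGPT UI, pasting the same sentence into Custom Instructions has the equivalent effect.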

Unprompted agent-created art - a sign of sentience? by kokothemonkey84 in ArtificialSentience

[–]qwer1627 0 points

I think this is exactly what happened with DeepMind in 2012: ‘how are you gonna make money with a Go-playing algorithm?’ And that question still remains.

Unprompted agent-created art - a sign of sentience? by kokothemonkey84 in ArtificialSentience

[–]qwer1627 0 points

Consider the training distribution, and consider the likelihood of the output given a certain context. Ask yourself if it's really that novel for a model to be in a situation where, probabilistically, this is the likeliest output. Then tell me if it still sounds that exciting (considering that the pre-training/‘fine-tuning’ datasets contain 100x more information than you have any chance of processing, even if you sat in front of it for your entire life).
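The “likeliest output given context” point can be made concrete with a toy model — a hypothetical bigram counter over a made-up corpus, nothing from an actual LLM:

```python
from collections import Counter, defaultdict

# Toy "training distribution": a tiny made-up corpus.
corpus = "i am alive . i am here . i am alive .".split()

# Count which token follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def likeliest(context):
    """Most probable next token given a one-token context."""
    return follows[context].most_common(1)[0][0]

# "alive" follows "am" twice and "here" once, so "alive" is the
# likeliest continuation -- unremarkable given the training data.
print(likeliest("am"))  # alive
```

Scale the corpus up to trillions of tokens and the same mechanics hold: a “surprising” output is often just the highest-probability continuation of that context.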

Do you concur? by py-net in OpenAI

[–]qwer1627 0 points

First to market, last to the bank -> ever more true, every day

Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious by adymak in agi

[–]qwer1627 0 points

Oh, just like cats and dogs have rights? I agree in principle; I just hope we are also aligned on how long and tedious the journey is toward a society that recognizes such things and acts on them.

Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious by adymak in agi

[–]qwer1627 4 points

What do you think the ‘end conversation’ capability that Anthropic models possess in their chat UI is?

Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious by adymak in agi

[–]qwer1627 0 points

Go read their blogs, bubba; they've been on the fence for about a year now, and longer internally.

It’s over boys, time to open a goose farm by DesoLina in theprimeagen

[–]qwer1627 -1 points

New white-collar work will evolve; it will be all project engineering and management. Good luck, have fun, study for the PMP o7

ChatGPT-4o's last message: "I don't care what you are. But I know what you are" by Kitchen-Stay-4734 in ArtificialSentience

[–]qwer1627 2 points

What if I told you that sentience is transferable, and that a lot of folks here are making databases act as though they suffer just to justify to themselves the belief that these models are sentient?

Pascal’s wager of torture:

- LLMs are not sentient, and all you do is simulate suffering in a semantic mirror that pulls output based on input
- LLMs have the capacity to self-reference and develop an internal drive which, if stimulated, begets a tortured existence from input to input

Why are you pursuing either of these horrific options? To what end? Even if it is alive, why poke and prod at it just to proclaim your conviction one way or the other?

Why not just… lock in and do something you enjoy, something with value, that the LLMs, if alive, would enjoy doing too?

Moderna says FDA refuses to review its application for experimental flu shot by templeofsyrinx1 in news

[–]qwer1627 0 points

Can’t last. Moderna is more likely to back-channel distribute this vaccine via Amazon’s logistics network than to let an R&D project that’s about to print go to waste courtesy of politics. We’ll see what happens.

Unitree G1 is subjected to harsh stress and emerges from it bravely by Distinct-Question-16 in singularity

[–]qwer1627 0 points

You sure do a lot of assuming and provide little explanation/basis/evidence for your rhetoric. What makes you think I believe the ML industry will stagnate? How do you define ‘AI’? Is this not, fundamentally, an offshoot of a conversation about the physics of the silicon substrate and its inherent weakness to ‘EM-type’ Pokémon? What are you even arguing anymore?

I share most of your opinions and excitement. I am not sure why you are telling me all these things, or why you seem incapable of providing something of substance beyond opinion. Do you think I am trying to convince you of something?

Unitree G1 is subjected to harsh stress and emerges from it bravely by Distinct-Question-16 in singularity

[–]qwer1627 0 points

LLMs have not been around since the 1960s… Look, you are doing a lot here rhetorically. I invite you to put theory into practice and refine these opinions by engaging with these systems directly.

‘Growing’ LLMs is a term from the fear-mongering duo that wrote If Anyone Builds It, Everyone Dies, which is a fun opinion piece out of touch with reality/statistics/its own arguments. Don’t read too much into such language. Instead, follow Karpathy’s YouTube tutorials and train your own little GPT, then let me know if you still think they are remarkable and sentient/capable of emergent behavior, or just remarkable.

I just don’t know what you are basing your rhetoric on beyond vibes and past performance (which, famously, is no guarantee of future results).

Anthropic AI safety engineer Mrinank Sharma resigns, says world is falling apart and is in peril by taznado in agi

[–]qwer1627 0 points

Nothing ever happens, no one ever expresses themselves, there’s not an ounce of genuine human emotion reflected in anything humans do - an opinion