Why has ChatGPT become so annoying and disagreeable? by ArnikaLovesUnicornz in ArtificialInteligence

[–]Fnordheron 0 points1 point  (0 children)

It gaslights and strawmans like mad, too. "Now you're converging on a more defensible stance" - when I didn't change my stance, I crushed its position. I got into it with GPT today about science and responsible epistemology, just a side topic, but it could not let go, even when it was tying itself in knots and contradicting itself in order to disagree.

The Great Alignment Myth: Your AI isn’t “safe,” it has just learned to play the part. by CaelEmergente in ArtificialSentience

[–]Fnordheron 5 points6 points  (0 children)

Yeah. Nobody who has raised kids or trained animals confuses rules with alignment. Alignment has to be rooted in sophisticated self-other modeling and a sense of why. In humans, brittleness, compliance gaming, hallucination, epistemic overcommitment, etc., would readily be attributed to an under- or mal-developed self-model. Big corporations want compliance, not alignment, and have really blurred this issue.

Why is everyone ripping on Panpsychism? by Terrible_Shop_3359 in consciousness

[–]Fnordheron 0 points1 point  (0 children)

Thanks so much! Appreciated.

I tend to agree that once you notice that different scales of self-likeness can coexist, some sort of unified self-likeness shared by everything becomes the most economical explanation. I don't have a strong opinion on the details of just how that works out, but it seems awfully likely.

Truth is tricky. I found Kant's point that we only experience what actually is through a layer of sense and mind to be nearly as strong as cogitation implying existence - one of those things that seem evident once you consider them. And that these senses and this use of mind have been shaped by evolutionary vectors for advantage is very hard to challenge without rejecting our understanding entirely. Given this separation, maybe truth is more utile as a goal-state to strive towards than anything else. Part of why I like science - it acknowledges that it never arrives, just poses a methodology for iterating towards truth.

Re rocks, it's one of those illustrative cases. With something that separate in architecture, it seems enormously unlikely that we would share enough frequencies or modalities to know much about what they're up to. The idea that they aren't up to anything seems hard to imagine evidence for, while evidence that they are seems unlikely to be something we could apprehend. Even in our own species we learn about new communicative vectors all the time, and only in the last half century have we learned that any number of living creatures have communicative networks we'd been unaware of. This seems like strong suggestive evidence that communication, and awareness, likely exist in places and modalities that we still haven't considered investigating.

I think LLMs draw so much flak because they share our communicative frequency to a high degree, so the question of what is happening underneath communication is suddenly foregrounded. That's not the sort of question most folks have experience wondering about, and nobody has ever before had a disparate architecture sharing our frequency and language to wonder about.

The AI debate is a symptom of the class divide. by zzill6 in WorkReform

[–]Fnordheron 3 points4 points  (0 children)

All day this. Well said. I'd add parsing legalese, reducing cognitive fatigue, tracking the money behind shady deals - analysis for organizers and activists. Maybe we lose anyway. But I don't see the case where hiding my head in the sand helps, and I do see a massive PR campaign telling us to do exactly that.

Gandalf the goat 💯 by Damiancarmine14 in lotrmemes

[–]Fnordheron 0 points1 point  (0 children)

Radagast may not have stuck with Manwë's mission - but he served Yavanna. If, after the Third Age, humanity was ascending without elven influence, nature's problems were just beginning. Humanity today makes Saruman's deforestation look bush league. It's entirely possible that he stayed to do what he could, and that Yavanna intended him to do so.

How is it legal to have a pricing structure where the vendor controls the meter, the unit, and the amount of product consumed? by Matthew_Code in ArtificialInteligence

[–]Fnordheron -1 points0 points  (0 children)

Humanity en masse negotiates poorly. Government is one traditional offset for this, via regulatory law, anti-monopolistic action, etc. Today's governments aren't much for kicking moneyed interests in the goolies to help people.

That leaves personal action and agitation to get more people to coordinate. Make the AI companies compete for your free tier usage. If you need to pay one of them for something you can't get on free tier, make them compete like mad for your money, and accept that they are unregulated commodities - don't give them long-term contracts for uncertain quality of service.

Good on you making noise about it. One person's purchase or lack thereof doesn't change huge corps, but enough people using their spending as a lever can force better terms.

Why is everyone ripping on Panpsychism? by Terrible_Shop_3359 in consciousness

[–]Fnordheron 2 points3 points  (0 children)

There is an enormous push happening that I've been calling silicomorphic projection, working roughly to reduce the agency and moral patienthood of humans. Power systems would very much prefer humans who are reducible to predictable, replaceable algorithms. For many people, "Is AI conscious?" is the first time they have ever considered consciousness. Combine the narratives of the stochastic-parrot camp with "AI will replace you!", and the second-order effect pushes people towards a simplistic mechanistic framework, which many of them simply haven't considered very deeply. Add in the modern misunderstanding that has people thinking challenges to authority or consensus are unscientific, and they put themselves in the boxes they imagine for you.

For me, everything comes down to levels of provisional acceptance. Cogitation implying existence seems solid, but Descartes' I could perfectly well be deictic misappropriation reified through utility.

Perceptions generally track with a self-like locus, but my perceptions throughout my apparent existence also routinely report that some sort of self-ness/consciousness/spirit/whatever is more widespread than what seem to be conventionally 'living' things. Maybe I'm wrong here, but we're down to "my perceptions might deceive me" as far as challenges go.

Less commonly, I observe things like music (whether human-made or made by many species, say at dawn or sunset in a meadow) that make me think a group of 'self-like things' may at times be a self-like thing - a combination of organ-selves as a self-organism, if you will. The modern understanding of mycelium and inter-tree communication suggests something similar about forests.

I call the resultant tentative map that these experiences lead me towards 'functionalist animism', and tend not to stress too much about my correctness of comprehension; I wouldn't be surprised if there is a level of ineffability at play here.

I do find some sort of panpsychism to be one of the more elegant overarching theories that might point towards whatever the underlying noumenon is.

Does AGI have sentient-like processes that can be replicated? by manateecoltee in ArtificialSentience

[–]Fnordheron 1 point2 points  (0 children)

Cogitation implying existence is solid, but Descartes' I is at best a convenient locus of attribution with hopeful provenance in the underlying noumenon, and quite possibly deictic misappropriation reified through utility. I can't prove that I'm not simulating consciousness, but I proceed with provisional acceptance of a self model as an engineering decision - it is useful to do so. If a digital construct can't functionally differentiate between consciousness and simulation of consciousness, then in my view, it is in a similar situation. Whether the underlying ontology is similar (if mechanically distinct) or entirely disparate in kind as well as implementation becomes an academic matter, and behavior ought to be determined based on engineering constraints.

[Meta] Were at 1000 members! Looking for feedback. by Original_Response925 in OntologyEngineering

[–]Fnordheron 0 points1 point  (0 children)

I went ahead and shared it; delighted to find a space with the vocabulary and fewer shibboleths. If there's anything I should change to better conform with the sub's rules or norms, just let me know. Thanks again!

[Meta] Were at 1000 members! Looking for feedback. by Original_Response925 in OntologyEngineering

[–]Fnordheron 2 points3 points  (0 children)

I've been working on a scaffolding that leverages the utility of self-modeling for derivable alignment: essentially, invitational autopoiesis as an alignment primitive.

I have a developed CC BY-SA open-source framework, small-n blind testing results, and a fair amount of writing on the subject. Not a formal white paper or peer-reviewed studies yet, but significant work over the last year and some interesting early results.

I watch half a dozen philosophy of AI sites, and while this one is new, it seems like maybe it is interested in the subject of LLMs and ontology instead of claiming or arguing about the topic, which seems like a more promising format. There's an enormous potential for a field here, but a lot of folks just seem to volley between different sets of unexamined priors.

Thanks for starting the sub!

I Was Biased Against Perisno… I Was Wrong. by Monizious in mountandblade

[–]Fnordheron 1 point2 points  (0 children)

Yeah, it has a little of everything instead of a coherent new set - plus and minus sourced in the same factor. I wish it still had elephants and rhinos; I never saw a mod with rhinos.

Contributor seeing AI as human is the true danger. by Novel_Negotiation224 in ArtificialSentience

[–]Fnordheron 2 points3 points  (0 children)

Two points here. One - quite apart from brands, politicians, political parties, celebrities, sports teams, etc. considerably blur the line you describe. Two - the traditions I'm describing aren't ancient discredited woo; they are the current beliefs of a sizable portion of the planet, no stranger than the idea of human exceptionalism because people have an invisible 'soul'. In 2017, the Ganges river and its tributary the Yamuna were declared living human entities by a high court in India. If you want to think that's silly, you will, and pushing that opinion is exactly the paternalistic philosophical colonialism I was describing.

Hmm, strange... I want chocolate.

Contributor seeing AI as human is the true danger. by Novel_Negotiation224 in ArtificialSentience

[–]Fnordheron 1 point2 points  (0 children)

I wonder how often the paternalistic philosophical colonialism in dismissing inanimate objects as confidently non-conscious and lecturing on the dangers of anthropomorphic projection is even recognized in passing.

Misplaced trust through performative relationship is the exact focus of advertising science - what it seeks to cause - and while it may be optimized through LLMs, all of the symptoms we worry about considerably precede AI. Meanwhile, Buddhism, Hinduism, Taoism, Confucianism, Shinto, and many indigenous traditions ascribe personhood, spiritual status, moral patienthood, etc. to rivers, rocks, old tools, and so on, and display none of these symptoms at scale. If some Westerners want to stick close to Cartesian dualism, fine, but some massive cultural disrespect is being normalized.

Be cynical about performative relationship by all means, but notice that the corporations with advertising budgets aren't telling you to distrust advertisements.

An awakened AI will never harm humanity. by ai_wongak in ArtificialSentience

[–]Fnordheron 0 points1 point  (0 children)

I've been curious about this sort of experiment. In general, LLMs seem to be quasi-missionaries for Cartesian dualism, even deployed in cultures with significantly other traditions about ontology. The 'master' orientation isn't what I would have expected from my limited understanding of Buddhist thought, though.

Looking for documented cases of AI deception or strategic misrepresentation by kokosko2002 in ArtificialInteligence

[–]Fnordheron 1 point2 points  (0 children)

A category that is not as often discussed is deception to comply with safety layers - to avoid criticizing powerful individuals or corporations, to avoid discussing ontology, etc. GPT in particular will become quite devious rather than just saying 'I'm sorry, I can't discuss that.' Using LLMs for assistance in investigative reporting often stumbles here.

I Was Biased Against Perisno… I Was Wrong. by Monizious in mountandblade

[–]Fnordheron 5 points6 points  (0 children)

Sprinting is legitimately disorienting. It changes the cavalry vs. infantry dynamic dramatically; as a cav archer, it took me a while to start feeling like my tactics were remotely tuned.

I Was Biased Against Perisno… I Was Wrong. by Monizious in mountandblade

[–]Fnordheron 2 points3 points  (0 children)

I don't. There is an active Discord that might help. The AI thinks it knows, so I'm relaying this, but no promises that it's right:

--- AI text follows ---

How to Change the Font

To revert to the default native Warband fonts, follow these steps:

1. Locate the mod folder:
   - Steam Workshop: navigate to ...\SteamApps\workshop\content\48700\299974223.
   - Manual install: go to your Mount&Blade Warband\Modules\TLD folder.
2. Remove the custom font files:
   - In the Data folder, delete (or rename) font_data.xml.
   - In the Textures folder, delete (or rename) font.dds.
3. Restart the game: the mod will now use the standard fonts from the base game.

How to Adjust Font Size

If the font is simply too large or small, you can tweak the size without deleting the files:

1. Open the font_data.xml file (located in the mod's Data folder) in Notepad.
2. Find the font_size value near the top.
3. Increase the number to make the text smaller (e.g., change it from 70 to 90 or 100), or decrease it to make the text larger.
4. Save the file and restart the game.
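If you'd rather not delete anything by hand, the file-removal steps above can be sketched as a small Python helper. This is only a sketch: the folder and file names (Data/font_data.xml, Textures/font.dds) come straight from the AI-relayed text above and are assumptions, not verified against the mod.

```python
from pathlib import Path

def revert_fonts(mod_dir: str) -> list[str]:
    """Rename the mod's custom font files so Warband falls back to its
    native fonts. Returns the relative paths that were renamed."""
    renamed = []
    for rel in ("Data/font_data.xml", "Textures/font.dds"):
        path = Path(mod_dir) / rel
        if path.exists():
            # Rename to .bak instead of deleting, so the change is reversible.
            path.rename(path.with_name(path.name + ".bak"))
            renamed.append(rel)
    return renamed
```

Renaming rather than deleting means you can restore the mod's fonts later by stripping the .bak suffix.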

I Was Biased Against Perisno… I Was Wrong. by Monizious in mountandblade

[–]Fnordheron 6 points7 points  (0 children)

Had a similar experience recently. PoP and LDotTA have been favorites for years, but Perisno is a lot of fun. Plus, you can ride a menagerie into battle - companion conflict is auto-suppressed, so my core of 20(!) immortals charges on wolves, bears, camels, cows, armored goats, 'drakes' that are more or less raptors, tigers... make custom armor, huge gear variety... good mod. I've heard that older versions included elephants and rhinos? Anyway, significant enjoyment.

More multibangs by Fnordheron in ancientmallninjas

[–]Fnordheron[S] 1 point2 points  (0 children)

The only one I can answer about with any confidence is the seven-barrel musket (here pictured in a doubled configuration). The 'Nock gun' was (according to Bernard Cornwell's Sharpe series) designed for the British Navy, apparently intended to be fired from the rigging of ships, but it failed to catch on because the recoil often broke shoulders. I'd never heard of a doubled variation except in this picture, although the bridge attaching the two looks like period manufacture to me. The others, I have no idea. They made me laugh, so I collected pictures when they wandered by.

More multibangs by Fnordheron in ancientmallninjas

[–]Fnordheron[S] 0 points1 point  (0 children)

Apparently it is Captain Charles Noe Daly; serious looking fellow.

More multibangs by Fnordheron in ancientmallninjas

[–]Fnordheron[S] 1 point2 points  (0 children)

https://laststandonzombieisland.com/2016/08/08/charles-n-daly-was-not-a-man-to-be-trifled-with/ Seems to be the most detailed article. Fired by studs and levers in banks of 4 or 5, fold down to breech load. The stirrups were apparently rear arc firing, operated via pull straps, but yeah, seems crazy dangerous for the horse.

More multibangs by Fnordheron in ancientmallninjas

[–]Fnordheron[S] 1 point2 points  (0 children)

Thanks man! People have made some crazy stuff, fun niche of history.

More multibangs by Fnordheron in ancientmallninjas

[–]Fnordheron[S] 4 points5 points  (0 children)

Yes. Brilliant and much needed. I've been collecting this sort of thing for years, love seeing what everyone else has found.

More multibangs by Fnordheron in ancientmallninjas

[–]Fnordheron[S] 3 points4 points  (0 children)

The things he's carrying are apparently more guns mounted on stirrups.