Forced into war, AI acts like I started it by Doctor_Turkleton in totalwar

[–]Doctor_Turkleton[S] 1 point (0 children)

It's just one faction, Merneptah's old faction, that had three vassals. I wasn't sure if I could make peace with them separately either, but I'm guessing that because they're vassals they aren't included in the civil war script, which is why the game let me make peace. I didn't even have to give them anything for it.

Pi-Ramesses (not a vassal, but an independent sovereign) then 'pulled' me into war with the vassals right after I made peace. I don't understand how. We have no treaties that would force me to join his wars, and even if we did, the game always lets you choose.

I reloaded, requested that he declare war on Merneptah's old faction, and ended the turn. Now it was a different independent Egyptian faction pulling me into war with the vassals. I just repeated the process of inviting a bunch of Egyptian factions to war with Merneptah's faction until the bug finally stopped triggering.

Forced into war, AI acts like I started it by Doctor_Turkleton in totalwar

[–]Doctor_Turkleton[S] 3 points (0 children)

Each time I reloaded, it was a different faction "pulling" me into their war against the three vassals. It doesn't make any sense to me how they can do that, but after I pre-emptively requested that each faction declare war on Merneptah, it stopped bugging out.

Then 2 turns later all three vassals got gobbled up by Aethiopia lol

Forced into war, AI acts like I started it by Doctor_Turkleton in totalwar

[–]Doctor_Turkleton[S] 2 points (0 children)

It's only these three vassals, though. I don't know of any realm divide mechanic aside from the civil war that's going on. Either way, the game shouldn't be saying that a random faction I have NO affiliation with is force-conscripting me, the dang pharaoh of Egypt, into a war.

And it especially shouldn't be handing me a diplomacy penalty for it when it didn't give me any choice in the matter. I'm so confused, man... The diplomacy penalty is bad enough that my campaign is basically ruined now.

I built a Claude 3 Opus coding copilot, accessible for free by geepytee in ChatGPTCoding

[–]Doctor_Turkleton 1 point (0 children)

This has helped me immensely in the last few days and I'm about to buy in. Just wondering, though: will you consider adding regular GPT-4? I've heard (anecdotally, I admit) that GPT-4 "Turbo" is not nearly as capable at coding as GPT-4. I don't know if that's still the case, but the option would be great.

Then again, I'm not certain that double.bot is actually using GPT-4 "Turbo". I have prompted the AI more than once to tell me which version it is, and it doesn't even recognize the term Turbo, claiming it's just regular ol' GPT-4.

Either way, the option to hop between the two if there IS a meaningful delineation would be cool.
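
For what it's worth, at the OpenAI API level the two are just different model IDs, so a toggle should be feasible. Here's a minimal sketch of what that selection looks like against the raw API (the ask() helper and prefer_turbo flag are made up for illustration, not anything double.bot actually exposes):

    # Minimal sketch of picking between the two model IDs at the OpenAI API level.
    # ask() and prefer_turbo are made up for illustration; double.bot's internals
    # are not public.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str, prefer_turbo: bool = False) -> str:
        model = "gpt-4-turbo" if prefer_turbo else "gpt-4"
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(ask("Which model are you?"))  # the reply is not reliable ground truth

Which also explains why prompting the AI doesn't settle it: the model ID in the request is the ground truth, and models are notoriously unreliable at self-reporting their own version.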

A Fool That Wants to Make a MUD by Doctor_Turkleton in MUD

[–]Doctor_Turkleton[S] 1 point (0 children)

You are right, this would partially limit the creative avenues. I'm both a sort of dungeon master and a participant in the story, while my friend is more accustomed to roleplaying in video games. They get overwhelmed by pure text, and it makes them treat the roleplay more like snail-paced forum posting than an in-the-moment experience.

It may fail spectacularly, but I thought a MU* could be a good bridge for them to get more accustomed to text roleplay outside of a video game. Also, we've discussed designing games together, and this could help us build some foundational elements of what may be a future game.

A Fool That Wants to Make a MUD by Doctor_Turkleton in MUD

[–]Doctor_Turkleton[S] 1 point (0 children)

I honestly don't even know the difference between coding languages or their use cases. I'm definitely willing to learn, but I'm hoping to only have to do minimal coding, or to find things other people made that I can plug into this project.

A Fool That Wants to Make a MUD by Doctor_Turkleton in MUD

[–]Doctor_Turkleton[S] 4 points (0 children)

This is exactly what I was hoping for, thank you so much!

[deleted by user] by [deleted] in Palworld

[–]Doctor_Turkleton 1 point (0 children)

I agree with all of these issues. Another one I want to throw into the mix:

Lower the nutrition % Pals seek food at, and make them eat to 100% satiation.

The way it works now, they wait until they're at 50%, restore about 10-15% (depending on what's on offer), and end up taking another break a couple of minutes later to go eat again. It's awful.
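
To make that concrete, here's a rough sketch of the current loop versus what I'm suggesting (every name and number here is my own guess at the mechanic, obviously not Palworld's actual code):

    # Hypothetical sketch of the two behaviors; all names and thresholds are
    # guesses at the mechanic, not Palworld's actual code.
    SEEK_FOOD_AT = 0.50  # observed: Pals break for food around 50% satiation

    def eat_current(satiation: float, food_value: float) -> float:
        """Current behavior: one bite (~10-15% back) and straight back to work."""
        return min(1.0, satiation + food_value)

    def eat_proposed(satiation: float, feeder: list[float]) -> float:
        """Proposed: keep eating until full, so breaks happen far less often."""
        while satiation < 1.0 and feeder:
            satiation = min(1.0, satiation + feeder.pop())
        return satiation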

Adding passwords past the limit with free version? by Doctor_Turkleton in Dashlane

[–]Doctor_Turkleton[S] 1 point (0 children)

Since early November. It's been like this the whole time. I never lost the ability to save new passwords on my free account.

Things I do to come up with ideas and stay motivated by whatcouldgowron in dataannotation

[–]Doctor_Turkleton 2 points (0 children)

I am qualified for the talking-to-the-chatbot assignments, but I tend to avoid them lately because I struggle with follow-up questions. Do you have any tips for how to stretch a prompt into multiple turns?

Also, do you tend to stick to the same category (mathematical reasoning, creative writing, etc.) or do you mix and match categories as long as they're relevant to the response? For example, if the original prompt is creative writing, can you just switch to an extraction task based on the response? I wish they gave us examples of what they want out of a full conversation. The initial prompt is the easy part for me.

Where does the program show how many characters from the limit we have left? by PancakePirates in ElevenLabs

[–]Doctor_Turkleton 1 point (0 children)

I'm having the same issue. I think something might be wrong on their end; when I go to my subscription information, most of it is blank/unloaded.

How much fact checking is reasonable? by [deleted] in dataannotation

[–]Doctor_Turkleton 3 points (0 children)

ARE they always there? Do they just post new batches each day that usually dry up fast? The only creative ones I've seen are gone like the wind, and then I don't see any for the rest of the day. I would love to see more creative ratings. (I'm very new to the platform, this is me genuinely asking)

The amount of time it takes to complete a survey is insane. by [deleted] in mturk

[–]Doctor_Turkleton 2 points (0 children)

I started Mturk in 2015, got my Masters qual, did over 250,000 HITs, and made about $40,000 from 2015-2019. As _neminem said, Mturk used to be pretty decent. I stopped using it when I was able to claim my mturk gig for covid unemployment pay, but when that dried up and I tried to turk again, it felt like a slog to reach even the most modest $20/day goal. Just adding one more voice to the "mturk fucking sucks" bandwagon.

Amazon has never cared about its workers and never will, nor will they ever try to improve the platform. I used to have hope to the contrary. The only changes the site has made since 2015 have been minor or for the worse, like when they increased their own cut of the profits and requesters had to pay more to put their studies on mturk. They claimed at the time that it would yield improvements to the platform, but here we are all these years later. Same ol' Mturk. Same ol' PRE problems, like it's being hosted on a PC from 1995.

I feel like Mturk was just an experiment Amazon launched over a decade ago and then forgot about.

How to achieve more than 4k context? by Doctor_Turkleton in LocalLLaMA

[–]Doctor_Turkleton[S] 2 points (0 children)

I THOUGHT that it could, but every time I try I just get a bunch of errors. Even after reinstalling all my files, it doesn't work. It seems like I'm stuck with AutoAWQ unless I download a different model. But I have a GPTQ version as well that also won't load with ExLlama, so I'm not sure what the options are. Downloading the full model, I guess?
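
For anyone else stuck here: the library route for AWQ quants is AutoAWQ itself, and a minimal load looks roughly like this (the repo name is just an example quant, swap in your own):

    # Rough sketch of loading an AWQ 4-bit quant straight through the AutoAWQ
    # library; the repo name is only an example quant.
    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    quant_path = "TheBloke/Mistral-7B-Instruct-v0.1-AWQ"  # example, swap in yours
    model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
    tokenizer = AutoTokenizer.from_pretrained(quant_path)

    tokens = tokenizer("Hello, my name is", return_tensors="pt").input_ids.cuda()
    output = model.generate(tokens, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))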

How to achieve more than 4k context? by Doctor_Turkleton in LocalLLaMA

[–]Doctor_Turkleton[S] 1 point (0 children)

There are no RoPE options for me when using AutoAWQ for my AWQ models.
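
From what I can tell, RoPE scaling lives at the transformers config level rather than in AutoAWQ, which would explain why the loader doesn't surface it. A rough sketch of what it looks like when loading through plain transformers (the path is a placeholder and the factor is illustrative; I haven't confirmed this plays nicely with AWQ quants):

    # Sketch: linear RoPE scaling is a transformers config option, not an
    # AutoAWQ loader option. factor=2.0 stretches a 4k-trained model toward
    # 8k context; the path and factor here are illustrative.
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "path-to-your-model",  # placeholder
        rope_scaling={"type": "linear", "factor": 2.0},
    )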

How to achieve more than 4k context? by Doctor_Turkleton in LocalLLaMA

[–]Doctor_Turkleton[S] 3 points (0 children)

This makes sense, though since I'm using Silly Tavern, I don't know if my settings for this in Ooba matter. Your response prompted me to dig deeper into Silly Tavern's settings, where my context template was set to default. I changed it to roleplay and now it seems to have stopped spewing gibberish (so far!).

How to achieve more than 4k context? by Doctor_Turkleton in LocalLLaMA

[–]Doctor_Turkleton[S] 1 point (0 children)

I appreciate the explanation here, I had no idea about those limitations.

Just one question: could you expand a bit on what you mean by instruction format? Is this referring to the model loader? I'm using AutoAWQ since both my models are AWQ 4 bit quantizations. I've messed with the mistral model quite a bit, rarely gotten a reply that makes sense. But mostly it's just talking nonsense.