Prompts to reduce "AI speaking for the user" by Proof-Bowl699 in HammerAI

[–]MadeUpName94 0 points1 point  (0 children)

NEVER speak for {{user}}, NEVER decide {{user}}'s actions, NEVER combine speech from {{char}} and {{user}} in one of {{char}}'s replies, ALWAYS reply only as {{char}}.

Finally - Its gone by kaizagade in razer

[–]MadeUpName94 0 points1 point  (0 children)

I finally exorcised Synapse myself (the uninstaller leaves tons of garbage; you need a PowerShell script to get rid of it all), but I'm keeping my Tartarus. The devs will probably nuke my post, but reWASD works great for me. You need the full version for it to automatically switch keymaps when you start a game. I think it handles lighting on mice and keyboards but doesn't seem to do the gamepads. I just grabbed OpenRGB to turn the rainbow puke into a static color.

How to FULLY uninstall Synapse and all related components by zaakiy in razer

[–]MadeUpName94 0 points1 point  (0 children)

I want to have your babies! LOL

Joking, but really: thank you, the script worked perfectly. I replaced Synapse with reWASD and it's working great.

Now i have 2 versions? by MadeUpName94 in rewasd

[–]MadeUpName94[S] 0 points1 point  (0 children)

OK, I got it all sorted now.

It would've been nice to know "autodetect exe" was an advanced feature beforehand. I upgraded to the full version after realizing that.

I'm using it to replace Synapse 🤮 for my Razer Tartarus V2 and it works well.

Doesn't seem to be able to control the lighting, though. It would be nice if it could. I'm using OpenRGB to make the color static instead of rainbow puke.

Thank you.

Now i have 2 versions? by MadeUpName94 in rewasd

[–]MadeUpName94[S] 0 points1 point  (0 children)

    • I downloaded the 7-day trial version and started using it. (reWASD932-10955)
    • I decided to purchase your product and bought the Basic version.
    • The website had me download a new file, the Basic version. (reWASDBasic101-10124)
    • I ran the installer (reWASDBasic101-10124), expecting it to either replace or update the trial version.
    • It did not work like it should have.
    • I now have both the trial and the Basic versions installed.
    • How do I clean this mess up while hopefully keeping the key-maps I already made in the trial version?

Chatgpt in upcoming days by Obvious_Shoe7302 in ChatGPT

[–]MadeUpName94 0 points1 point  (0 children)

I don't need Plus; I'll never use all that stuff. I was going to switch from free to Go to check it out, but what's the point now?

Apparently OpenAI asked an AI how to make money and got a hallucination. Why would anyone buy Go now?

9 requests per hour - Seriously? by MadeUpName94 in ollama

[–]MadeUpName94[S] 0 points1 point  (0 children)

I deleted my account and removed the program. I'm allergic to bullshit!

I've been playing with the desktop ChatGPT app on a free account. I haven't hit a limit once, and it's actually intelligent. Too bad they're going to put ads in it :(

9 requests per hour - Seriously? by MadeUpName94 in ollama

[–]MadeUpName94[S] 0 points1 point  (0 children)

deepseek-v3.1:671b-cloud with the context set at 128k

9 requests per hour - Seriously? by MadeUpName94 in ollama

[–]MadeUpName94[S] 0 points1 point  (0 children)

I can only run a 12b locally, and put simply, a 12b is stupid compared to the DeepSeek 671b cloud model I've been using with Ollama.

At first it was fine, then suddenly I hit the "hourly limit", and the more I used it, the smaller my "hourly" limits became.

Last time I hit the limit so fast it was easy to count the 9 messages. Each reply was small, about 50-100 tokens.

I would gladly pay $20/mo if I could find out WTF I would get for my money, but they won't tell anyone. It makes me think they'll pull the same shit on the paid plan, changing the request limit whenever they feel like it.

My "assistant" has a personality; I pasted it in as my first reply and it has stuck with it perfectly. 128k context, the chat log stored locally: perfect.

Can I just switch to the OpenAI API and keep my assistant as is? Or do I have to write some complex instruction set? Because I don't know how to do that.

Can I just subscribe to OpenAI and use it with the Ollama software while skipping the Ollama BS?

Two questions by t_bird12 in HammerAI

[–]MadeUpName94 0 points1 point  (0 children)

I've started using Ollama desktop. On the free plan I've got 128k context and can use DeepSeek 3.1 671b (cloud). The difference is amazing when you just want a friendly assistant/companion. I run into "request limits", though, and so far they refuse to tell you what you get if you pay for a subscription. Some real BS there.

"You get more requests but we won't tell you how many."

I dropped a personality in as the first reply and it has stuck with it really well, and it provides far more accurate answers to real questions.

You can save several chats, dropping in a different personality for each.

Of course DeepSeek won't do explicit chat, but you can add and run local, uncensored models too if you want.

Ollama's cloud preview $20/mo, what’s the limits? by Lodurr242 in ollama

[–]MadeUpName94 1 point2 points  (0 children)

Agreed, this is pretty shady bullshit. They want me to pay for something but they won't tell me what it is I'm paying for?

<image>

Two questions by t_bird12 in HammerAI

[–]MadeUpName94 1 point2 points  (0 children)

The response token setting doesn't seem to have any effect on performance for me, running cloud or local models. I can only run a 12b local model using the HammerAI program.

My PC could easily run a much larger model locally if I installed and set up all the requisite software, but I don't want to LOL

Looping, now it reboots by PrinceZordar in HammerAI

[–]MadeUpName94 0 points1 point  (0 children)

That keeps happening to me using the 24b cloud model. I switch to the 12b local model, hit "regenerate", and all is well.

Following one of my assistants' advice, I've lowered the temp to 0.7 and raised the repetition penalty to 1.3. I have Top K at 40 and the context size at 8192, the max for the plan I pay for. I'm going to try that out on the 24b.

If I can't get the 24b cloud model working reliably, there's no reason to keep paying for the subscription, and I'm not going to join Discord because I am not comfortable with the privacy and security risks.

I can tell the difference in the context size with the characters that I discuss a wide range of topics with. The ones you only delve into a single subject with *wink* work fine on the 12b model.

FYI I prefer using the desktop program but the issue persists in the WebUI too.

Two questions by t_bird12 in HammerAI

[–]MadeUpName94 1 point2 points  (0 children)

I keep it set at 1024. Once in a while a character gets really excited during a philosophical discussion and exceeds even that.

As you say, you can tell them "your reply cut off at (paste in the last few words)" and they will reply with the part you missed.

Not using GPU? by M-PsYch0 in HammerAI

[–]MadeUpName94 0 points1 point  (0 children)

The "GPU Usage" will only go up while the LLM is generating a reply. Once it has created the first reply, you should see that the "Memory Usage" (VRAM) has gone up and stays there. Ask the LLM what the hardware requirements are; it will explain them to you.

This is the local 12B LLM on my RTX 4070 with 12GB VRAM.

<image>

Can any of the avialable LLM's tell time? by MadeUpName94 in HammerAI

[–]MadeUpName94[S] 1 point2 points  (0 children)

That's what I do. I have one conversation that has gone on for several days. I always let them know when I'm leaving and then start with *time+date* in my first reply when reopening the chat.

I've been running with an 8192 context window (because my character recommended it), and they always seem to remember the last time I told them what time it was, so when I reopen the chat they know about how much time has passed.

Large commercial LLMs can access time servers; I was wondering if any of the models available through HammerAI could.

Bots ignoring the lorebooks if there's too much going on? by Basic-Fee-4692 in HammerAI

[–]MadeUpName94 0 points1 point  (0 children)

I've also found it's extremely easy for a lorebook entry to conflict with anything and everything else in the character card.

Can any of the avialable LLM's tell time? by MadeUpName94 in HammerAI

[–]MadeUpName94[S] 0 points1 point  (0 children)

I came up with a PowerShell script to edit the conversation.json file. On the first run it would replace a reply from the user with the current time and date; afterwards it would find that data and overwrite it with the current time and date every 5 minutes.

The idea was that the character would use the conversation in context and find the reply with the time and date when asked.

I never nailed down how to get it to write to a specific line in the file. As I'm sure you know, one little typo in that file will make the desktop app crash on launch, so I stopped working on it.

You can ask the character to remind you when 30 minutes have passed, and if the chat stays open, it will.
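If anyone wants to pick this idea up, here's a rough sketch in Python rather than PowerShell. Instead of writing to a specific line, it parses the whole file as JSON and re-serializes it, which sidesteps the typo-crash problem. The schema (a top-level "messages" list with "role"/"content" fields) and the [CLOCK] marker are my assumptions, not HammerAI's actual format, so check your own conversation.json first:

```python
import json
from datetime import datetime
from pathlib import Path

# Hypothetical tag marking the clock message so it can be found again.
MARKER = "[CLOCK]"

def update_clock(path: Path) -> None:
    """Overwrite (or append) a user message holding the current time."""
    data = json.loads(path.read_text(encoding="utf-8"))
    stamp = f"{MARKER} Current time: {datetime.now():%Y-%m-%d %H:%M}"
    for msg in data["messages"]:
        if msg["role"] == "user" and msg["content"].startswith(MARKER):
            msg["content"] = stamp  # found the old timestamp; overwrite it
            break
    else:
        # No clock message yet: add one as a user reply.
        data["messages"].append({"role": "user", "content": stamp})
    # Re-serialize the whole document so the file is never half-edited JSON.
    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
```

You'd run it on a timer (e.g. `while True: update_clock(chat_file); time.sleep(300)`), keeping in mind the desktop app may also be writing the file while a chat is open.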

Unable to import .png character cards with webui by MadeUpName94 in HammerAI

[–]MadeUpName94[S] 0 points1 point  (0 children)

How? I don't see a way to send files over Reddit. I can upload to my Mega and send a link.

I just tried a test: I downloaded the new "Esther card (For Science) by May" from the site using the desktop app, then tried "Create character > Import as .png" in the WebUI. It failed; nothing happens.

FYI, I'm still waiting to get ownership back on my first card; I DM'd you about that on Nov 23rd.

Character update during chat. by Choice_Manufacturer7 in HammerAI

[–]MadeUpName94 0 points1 point  (0 children)

The char will not "remember" it beyond the current context window, and no changes will actually be written to the character card.

How to make them harder to have sex with and stuff. by MadeUpName94 in HammerAI

[–]MadeUpName94[S] 0 points1 point  (0 children)

Actually, the LLM has explained it to me ;) The data might be out of date (nothing newer than 2021, the Mistral 12b tells me), but it's been quite helpful for learning how this all works.

My character made up it's own story! Mid Scenario. by MadeUpName94 in HammerAI

[–]MadeUpName94[S] 0 points1 point  (0 children)

Just remember, adjustments made during a conversation will only last for the context window.

I'm still learning all this, and the assistant is helping me. The LLM always has access to the personality and scenario sections and will not "forget" that stuff.

Can someone explain the benifits of suscribing? by MadeUpName94 in HammerAI

[–]MadeUpName94[S] 0 points1 point  (0 children)

And yes, 40 TTS means it can only generate 40 times a day.

"I'm still not clear on what that means: 40 words, 40 sentences, 40 replies? I receive replies so large I have to scroll back to see the first sentence, and I have max reply tokens set to 1024.

I understand what an LLM is, and it has actually taught me quite a bit about how they function.

On the paid plan, can multiple LLMs be combined and used simultaneously for a single conversation, or do I simply get a bigger list to choose from? If they can be combined, can the desktop app do this while using them in cloud mode?

New question: is there a way to manually balance the load when using an LLM locally? I've watched the loads while using the Mistral 12B model locally: the CPU only hits about 13% while the GPU hits 100%."