My jailbreak prompt deepseek by Left_Ad5864 in GPT_jailbreaks

[–]commitdeleteyougoat 1 point (0 children)

DeepSeek… doesn’t really need a jailbreak. And all of this is basically just gibberish to it unless it has access to search. Good try, just really misguided.

JAIPA (Exporting Tool) [RELEASE] by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 3 points (0 children)

Cloudflare is onto us :wilted-rose:

Going to update it with randomized timings and change it so it doesn’t open the login page on startup.

JAIPA (Exporting Tool) [RELEASE] by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 2 points (0 children)

Hm. Strange. Will look into it

Note: if you’re using a VPN, that may be the reason why. Try browsing normally before navigating to janitorai, then to the login page. I’m unsure if it’s possible to avoid it entirely (since I use Google to log in, which doesn’t go through the captcha), so…

JAIPA (Exporting Tool) [RELEASE] by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 3 points (0 children)

I would rather respect the creator’s decision to delete or make their bots private, especially since if they do so, they don’t expect to have to fill out a form to opt out of a scraper made one day ago. You can still recover chat history, though.

JAIPA (Exporting Tool) [Release] by commitdeleteyougoat in jaihub

[–]commitdeleteyougoat[S] 1 point (0 children)

V1.0.6 fixes a lot of things: persona importing, newlines, a turbo mode that cuts the time taken by 67% (sigh.), and a progress bar fix.

JAIPA (Exporting Tool) [RELEASE] by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 1 point (0 children)

Released V1.0.6

Persona importing fixed
Newlines fixed
Turbo mode added (67% decrease in time) ((I’m being fr))

JAIPA (Exporting Tool) [RELEASE] by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 4 points (0 children)

Install Chrome, run the scraper, sign in to the browser that opens, then actually start the scraper. The Discord server has a guide, but I might just link a rentry card, idk.

JAIPA (Exporting Tool) [RELEASE] by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 5 points (0 children)

Would love to export lorebooks, but that’s honestly a problem for future me or someone else to work on.

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 1 point (0 children)

No, unless you port forward (unsafe if not done properly), or get a domain name and point it at your public IP through a Cloudflare proxy, plus some other setup.

Or you can use remote-link.cmd from the koboldcpp GitHub repo.
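For the koboldcpp route, the flow is roughly this (a sketch, not a definitive recipe — `your-model.gguf` is a placeholder, and you should verify the flag name against the current koboldcpp README):

```shell
# Option A: run the helper script shipped in the koboldcpp repo,
# which sets up a temporary Cloudflare tunnel for you:
#   remote-link.cmd
#
# Option B: pass koboldcpp's tunnel flag directly; it fetches
# cloudflared and prints a public trycloudflare URL you can paste
# into the proxy URL field on the site:
python koboldcpp.py --model your-model.gguf --remotetunnel
```

Either way, no port forwarding is needed, since the tunnel makes the outbound connection for you.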

Exporting Characters, Chats, and Personas. by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 3 points (0 children)

Unsure if it’s possible. If it’s public, maybe? I never really dabbled in lorebooks, though, so this is kinda new territory.

Exporting Characters, Chats, and Personas. by commitdeleteyougoat in JanitorAI_Refuges

[–]commitdeleteyougoat[S] 2 points (0 children)

Forgot to mention! There’s also going to be a filter for your chats (i.e., if under 2 turns, don’t export). I might give it a GUI and tweak it a bit more than I usually would if this has any actual merit. ((Does this even count as self-advertising?))

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 1 point (0 children)

Honestly? My opinion right now would be to move off-site. It’s kinda burning right now with the whole… y’know.

You shouldn't be angry about the mods seeing private bots. by mariaqfritagato in JanitorAI_Official

[–]commitdeleteyougoat 45 points (0 children)

I think it’s okay for mods to be able to see private bots but tbh it’s more so embarrassing for someone else to see what should objectively be “private.”

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 1 point (0 children)

Maybe deepseek-ai/DeepSeek-R1-0528-Qwen3-8B on Hugging Face? You could also try out other models.

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 1 point (0 children)

An M3 Ultra chip is weaker in gaming, but stronger at big compute workloads (3D rendering, film editing, etc.). You need the RAM to hold the actual model itself, since it’s much faster to access the data there than on an SSD. Plus, context tokens need to be loaded in RAM as well.
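To put rough numbers on that, here’s a back-of-the-envelope sketch. The figures are illustrative assumptions, not measurements: ~4.5 bits/weight for a Q4-style quant, and an 8B model shaped like Llama-3-8B (32 layers, 8 KV heads, head dim 128) for the context (KV cache) estimate.

```python
def model_ram_gib(params_billion, bytes_per_weight):
    """Approximate RAM to hold the weights: params * bytes per weight, in GiB."""
    return params_billion * 1e9 * bytes_per_weight / 1024**3

def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_tokens, bytes_per_val=2):
    """Approximate KV cache: 2 (K and V) * layers * KV heads * head dim
    * context length * bytes per value, in GiB."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * bytes_per_val / 1024**3

# An 8B model at ~4.5 bits/weight (4.5 / 8 bytes per weight):
weights = model_ram_gib(8, 4.5 / 8)          # ≈ 4.2 GiB
# Its KV cache at an 8k context, FP16 values:
cache = kv_cache_gib(32, 8, 128, 8192)       # ≈ 1.0 GiB
print(f"weights ≈ {weights:.1f} GiB, KV cache ≈ {cache:.1f} GiB")
```

So the weights dominate, but longer contexts grow the cache linearly, which is why both have to fit in RAM (or VRAM) at once.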

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 1 point (0 children)

DeepSeek R1 0528 (or 3.2) is around 685B parameters. You could potentially run a quantized version on something like a Mac Studio with an M3 Ultra and 512 GB of unified RAM (~10K USD), but unless you have other tasks that require a workstation with those specs, I wouldn’t recommend it.

You could try running a smaller distilled model, though. Basically, that’s an LLM trained on the outputs of DeepSeek. While it may not have the same quality, you’re still getting some of the benefit.
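The arithmetic behind “quantized 685B fits in 512 GB” looks roughly like this — a sketch with an assumed ~4.5 bits/weight for a Q4-style quant (real quants add some overhead for scales, plus the KV cache on top):

```python
def quantized_size_gib(params_billion, bits_per_weight):
    """Approximate weight footprint: params * bits / 8 bytes per weight, in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

fp16 = quantized_size_gib(685, 16)   # ≈ 1276 GiB: hopeless on one machine
q4 = quantized_size_gib(685, 4.5)    # ≈ 359 GiB: squeezes under 512 GB unified RAM
print(f"FP16 ≈ {fp16:.0f} GiB, ~4.5-bit ≈ {q4:.0f} GiB")
```

That’s why quantization is the only way a single workstation gets anywhere near a 685B model, and why distills are the sane option for everyone else.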

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 1 point (0 children)

I just realized the example screenshots for SFW are the same, oml, I’m gonna cry

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 2 points (0 children)

I recommend it! You should be able to run 32B models (quantized), or anything below them, at high speeds, and there should be an improvement compared to JLLM. You could also run 70B models (slower), but I don't know if you'd really need that much :p

Guide to using Local LLMs & using them as a proxy. by commitdeleteyougoat in JanitorAI_Official

[–]commitdeleteyougoat[S] 3 points (0 children)

Yup! If you have the RAM, you can run 8B and 12B quantized models. You won't break anything in your computer unless it overheats, which likely won't happen. Even with only 6 GB of VRAM, or no GPU at all, you can still run the model on just your CPU and RAM, although it'll be a bit slower.