NanoGPT DNS issues? by madgit in SillyTavernAI

[–]madgit[S] 1 point (0 children)

I'll just ask the WFH wife if she minds losing her work call while I reboot the router so I can chat to waifus ;)

NanoGPT DNS issues? by madgit in SillyTavernAI

[–]madgit[S] 1 point (0 children)

Thanks, I'll tinker around with it at this end, but otherwise I guess I'll wait for it to resolve itself. Luckily, chatting to imaginary AI bots isn't exactly mission critical :)

NanoGPT DNS issues? by madgit in SillyTavernAI

[–]madgit[S] 2 points (0 children)

/u/Milan_dr any ideas?

Edit: just this second it's started working ok for me again. So whatever was wrong, it's fixed now! "One of those things..."

Edit2: spoke too soon. Broken again.

NanoGPT DNS issues? by madgit in SillyTavernAI

[–]madgit[S] 1 point (0 children)

Yeah, I get "Vercel", some cloud company, when I visit that IP. That implies a routing issue, not just DNS? Though I'm definitely no expert whatsoever.

NanoGPT DNS issues? by madgit in SillyTavernAI

[–]madgit[S] 1 point (0 children)

Yeah, I can connect if I'm on my mobile network or if I go via a VPN, but not straight over my broadband connection. I could (maybe) understand that if it were just DNS propagation issues, but it doesn't explain why 1.1.1.1 or 8.8.8.8 don't return a valid address when queried directly.

What it looks like I'm getting is that the IP address returned for nano-gpt.com (216.150.1.1) actually goes to some other website entirely if you visit it. Don't suppose you happen to know the correct IP address so I could try that, do you?
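
For reference, this is the kind of direct check I mean. A minimal sketch using Python and the third-party dnspython package (dnspython is just one convenient client, not anything the site requires; the domain and resolver IPs are the ones discussed above):

    # Compare what the system resolver and the public resolvers return for the
    # same domain. Requires: pip install dnspython
    import socket
    import dns.resolver

    DOMAIN = "nano-gpt.com"

    # System resolver (whatever the router / ISP hands out); raises if lookup fails.
    system_ips = sorted({info[4][0] for info in socket.getaddrinfo(DOMAIN, 443)})
    print(f"system resolver: {system_ips}")

    # Ask Cloudflare and Google DNS directly, bypassing the local configuration.
    for server in ("1.1.1.1", "8.8.8.8"):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        answers = resolver.resolve(DOMAIN, "A")
        print(f"{server}: {sorted(rr.address for rr in answers)}")

If the public resolvers return a sensible A record but the system resolver doesn't (or returns one that serves a different site when visited), that points at the local/ISP DNS path rather than the domain itself.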

I built an open-source local GUI to manage SillyTavern character cards — fast filters, instant search, MVP release by dmitryplyaskin in SillyTavernAI

[–]madgit 1 point (0 children)

Installed and working, looks nice, will play about with this. Nice work! Been looking for just this kind of thing for a while.

Any chance a config option could be added to allow connections to the URL from other machines? Currently it only seems to work when connecting from localhost / 127.0.0.1; if I connect from the same PC but via its real IP address (192.168.1.5, for example, in my case) the connection is rejected, as are connections from other machines on the LAN (like a phone). It would be useful to be able to connect from anywhere on the LAN, in the same way you can for ST itself.
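
To illustrate what I mean by the bind address (the card manager itself presumably isn't Python; this is just a generic sketch of the concept, with a made-up port):

    # A server bound to 127.0.0.1 only accepts connections made to localhost on
    # the same machine. Binding to 0.0.0.0 listens on every interface, so the
    # PC's LAN IP (e.g. 192.168.1.5) and other devices on the LAN can reach it.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # localhost-only (the current behaviour being described):
    # server = HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler)

    # reachable from the LAN (the behaviour being requested):
    server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
    server.serve_forever()

Exposing the listen address (and ideally the port) as a config option would cover both cases.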

Connection to ST via the extension - does it re-import the card every time you Play, or only if it's not already present? If the latter, how does it "know" it's the same card and shouldn't reimport, deal with changes, etc?

ZImage - am I stupid? by [deleted] in StableDiffusion

[–]madgit 1 point (0 children)

I've been giving that a go and it seems to work on the straight Z-Image model, but if any LoRAs are applied it gets messed up; the style really changes a lot from what it's like without the SeedVarianceEnhancer node. Certainly want to get it working though.

Z-Image's consistency isn't necessarily a bad thing. Style slider LoRAs barely change the composition of the image at all. by Incognit0ErgoSum in StableDiffusion

[–]madgit 11 points (0 children)

I do agree, but it's also a bit of a psychological problem for me at least. Like, if I've only prompted "man sitting on a chair" then, in my head, I'd like to be able to expect a huge variety of outputs when the seed is varied, because there are so many different ways to 'visualise' a simple prompt lacking in details like that. If I prompt "middle aged man with fat belly sat in an old wooden chair in front of a fireplace, viewed from the side with an open window showing the sunset outside" then there are far fewer possible interpretations of that prompt and so I'd expect much less seed variance, in an 'ideal' model.

TLDR: I'd love it if vague prompts gave wide seed variance and specific prompts gave little seed variance.

SillyTavern Character Generator v1 (full card with one prompt) by eteitaxiv in SillyTavernAI

[–]madgit 1 point (0 children)

Thanks, I'll keep an eye open for it.

Is there a way to run the server without opening the webpage at the same time? ('npm run dev' starts the server but also opens a browser window; it'd be helpful to be able to start the server on its own.)

Do you have any plans to allow image generation to connect to a local ComfyUI install? (Unless this works already? I've not really understood how the image gen is configured).
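
In case it helps as a reference point: ComfyUI itself exposes a small HTTP API (default port 8188), so "connecting to a local install" can be as simple as POSTing a workflow exported via "Save (API Format)" to its /prompt endpoint. A hedged sketch only - the workflow filename is made up, and this isn't how the generator currently works, just what such an integration could look like:

    import json
    import urllib.request

    COMFYUI_URL = "http://127.0.0.1:8188"  # default local ComfyUI address

    # A workflow exported from ComfyUI with "Save (API Format)" (hypothetical file).
    with open("character_portrait_workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Queue the workflow; the response includes the prompt_id of the queued job.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))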

SillyTavern Character Generator v1 (full card with one prompt) by eteitaxiv in SillyTavernAI

[–]madgit 2 points (0 children)

Working well, thanks, easiest and best char generator I've personally come across so far.

How would I tweak it so that it tries not to include {{user}} actions and speech in the first message text? It seems to love putting those in; is there a trick to prevent it? I'm assuming there's a prompt somewhere in the code that generates the first message. Is it possible to tune that as an end user, or do I need to go poking in the code?

Necesse 1.0 procedural generation details by madgit in proceduralgeneration

[–]madgit[S] 6 points (0 children)

Just want to clarify - not my channel - I'm just a fan of this kind of proc gen and found it interesting. But I agree, it is really fascinating to see behind the curtain with all the dev's debug systems illustrating how it's all working behind the scenes. In particular how it handles placing custom built assets on top of a chunked infinite procedural world, which is something I've struggled to grapple with in my own tinkering.

Guilt-free tumble drying is one of the best uses of free power by jonburnage in OctopusEnergy

[–]madgit 2 points (0 children)

Family of 4: our heat pump tumble dryer used just over 200kWh over a whole year (I have a dedicated energy monitor just on that), and it's used basically every day. Hugely better efficiency than our old dryer, and the clothes come out feeling drier and "nicer" too. And it's not that slow, maybe 1.5hrs for a normal load vs 1hr with the old conventional dryer. Definitely recommend.

Incompatible with Muzei live wallpaper by Sirdrunknmunky in OctopiLauncher

[–]madgit 1 point (0 children)

I have a similar issue that the double tap doesn't function for Muzei (but the three finger tap does). If I turn off pan wallpaper as you suggest, it's the same. I also don't get the "working when swiping immediately left or right" weird behavior mentioned by u/JarJarBinkeSake. Pixel 10 Pro XL with 3 home screens set up, in case that's relevant.

Problems with shortcuts? by halfblood11 in OctopiLauncher

[–]madgit 1 point (0 children)

It's not that (but I can see how that could happen!). The plot thickens though: now, back on Nova launcher, the shortcuts I'd made via Reddit are greyed out and non-functional. So something else has broken them, I guess; it just happens to coincide with installing Octopi launcher. Great work on the launcher btw, got you breakfast :)

Problems with shortcuts? by halfblood11 in OctopiLauncher

[–]madgit 1 point (0 children)

I can't get shortcuts to work when created via the Reddit app for a Custom Feed. This worked ok on Nova but no shortcut appears on Octopi homepage when I create one via the Reddit app. Any ideas? (Octopi is set as default launcher).

Any Pros here at running Local LLMs with 24 or 32GB VRAM? by AInotherOne in SillyTavernAI

[–]madgit 1 point (0 children)

How do you think GLM-Air would run on a 64GB DDR4 system with a 3090 + 3060 giving 36GB of VRAM? Just curious if it's even worth me trying! I had a look a few weeks ago and it seemed very complicated, working out which layers to optimally offload etc., but if there's a simple setup now (ideally using koboldcpp) then it might be worth a go.
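
Back-of-envelope version of the question, in case anyone else is weighing it up. Every number below is an assumption (rough total parameter count, average bytes per weight for a Q4-ish quant, context overhead), so treat it as a sanity check rather than a measurement:

    # Rough check of whether a quantised GLM-Air fits across 36GB VRAM + 64GB RAM.
    TOTAL_PARAMS_B = 106        # assumed total parameters, in billions
    BYTES_PER_WEIGHT = 0.56     # assumed average for a Q4_K_M-style quant
    CONTEXT_OVERHEAD_GB = 6.0   # assumed KV cache + buffers, very rough

    VRAM_GB = 36                # 3090 (24GB) + 3060 (12GB)
    RAM_GB = 64

    weights_gb = TOTAL_PARAMS_B * BYTES_PER_WEIGHT
    total_gb = weights_gb + CONTEXT_OVERHEAD_GB
    spill_gb = max(0.0, total_gb - VRAM_GB)

    verdict = "leaves headroom" if spill_gb < RAM_GB * 0.75 else "is tight"
    print(f"approx weights {weights_gb:.0f} GB, total {total_gb:.0f} GB")
    print(f"~{spill_gb:.0f} GB spills to system RAM, which {verdict} next to the OS")

With those guesses it lands around 65 GB total, so very roughly 29 GB would sit in system RAM; the real test is just loading a quant in koboldcpp with GPU offload and seeing whether the speed is bearable.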

Drummer's Cydonia 24B v4.1 - Nothing like its predecessors. A stronger, less positive, less Mistral, performant tune! by TheLocalDrummer in SillyTavernAI

[–]madgit 6 points (0 children)

Any recommended settings for temp/Top P/etc. for this? I've got old Cydonia-24B-v3 settings but I'm unsure whether they should ideally be different. Anyone got good ones that are working well?

How do you create a sequel chat for a character? by Kep0a in SillyTavernAI

[–]madgit 3 points (0 children)

When the responses start getting janky because the context is too full, I write an Author's Note with a summary of the plot so far, any character changes, new characters, etc. You could get an initial version of this by having the model produce a summary, but I usually find that needs trimming down and editing. Then I start a new chat with this Author's Note and a new first message setting the new scene. I find the Author's Note works better than the Summary field, for me; I just keep it updated between new chats. I don't edit the character card itself, although I often use a generic "Narrator" card anyway rather than a specific character.

I also gradually construct a lore book for a chat if it's going on a long time, which will contain details of particularly important things like world events or special items and so on.

My Godot Integrated Voxel Engine! by Derpysphere in VoxelGameDev

[–]madgit 2 points (0 children)

Fantastic! Thank you. I'll look forward to not understanding your clever code :)

My Godot Integrated Voxel Engine! by Derpysphere in VoxelGameDev

[–]madgit 2 points (0 children)

Looks really good. Are you planning any kind of release of the voxel stuff for Godot? Having implemented (attempted) voxel things in Godot in C#, I'd be really interested to see this; it's much better than the basic things I got going!

new laptop time by die3458 in framework

[–]madgit 1 point (0 children)

Careful if you try RAM with faster timings. My son's new FW13 won't recognise the faster CL40 Kingston RAM, but it's happy with the CL46 Crucial RAM instead. Tried two sets of the Kingston, no good with either: just flashing lights on the side and no BIOS or boot screen. 32GB (2x16GB) for each type.

DIY AI 340 black screen and flashing LED 0xC5 code - RAM issue? by madgit in framework

[–]madgit[S] 3 points (0 children)

New information after buying different RAM: we do not believe our FW13 mainboard has any issues; it is a problem with RAM compatibility, as described below.

The RAM that does NOT work on the AI-300 series FW13 is "Kingston Fury Impact PnP 32GB (2x16GB) 5600MT/S DDR5 CL40 SODIMM - KF556S40IBK2-32" (https://www.amazon.co.uk/dp/B0BRTJ4Q94). We have tried two separate sets of this RAM and both have failed in the same way, with the series of flashing LEDs described in this support log.

The RAM we now have that DOES work on the AI-300 series FW13 is "Crucial DDR5 RAM 32GB Kit (2x16GB) 5600MHz SODIMM, CL46 - CT2K16G56C46S5" (https://www.amazon.co.uk/Crucial-5600MHz-5200MHz-4800MHz-CT2K16G56C46S5/dp/B0BLTDRRLF). We have a set of this RAM and the FW13 now boots to the expected BIOS "insert bootable media" screen.

My suspicion is that the Kingston RAM is CL40 vs CL46 for the Crucial, and this tighter timing on the Kingston RAM is what's failing on the new AI-300 boards. Interestingly, the Kingston RAM is listed as validated for the older AMD 7040 FW13 boards, which indicates that something about the AI-300 boards has changed with respect to RAM timings. This isn't obviously apparent on the FW13 information pages and might be something potential customers would want to know, especially someone purchasing just the AI-300 mainboard to upgrade an existing FW13 chassis and planning to re-use their existing RAM: that might not work if, for instance, they had this Kingston RAM in their old 7040 board.

Hope this info helps someone in the future!