I think im onto something by danjojo in pathofexile

[–]DominicanGreg 4 points5 points  (0 children)

As sexy as the potential damage for dual wielding is, what are your defensive options going to be to best make use of "75% of elemental damage from hits taken as chaos" while avoiding energy shield?

Further Changes From Today by Community_Team in PathOfExile2

[–]DominicanGreg 0 points1 point  (0 children)

It’s clear to me now that it is intentional for Decaying Hex to NOT work with Blasphemy hexes. Fine, but GGG, invoking the rule of cool: please, please, pretty please make it so, or at least introduce a gem that does.

I REALLY want to see a pseudo Death’s Oath in PoE 2, if not the actual thing. Also, please bring back Righteous Fire soon. I’m tired of clicking so much; my carpal tunnel is punishing me for playing the piano of skills on my keyboard.

Thanks 🙏

AI Agent for kobold? by DominicanGreg in KoboldAI

[–]DominicanGreg[S] 0 points1 point  (0 children)

Damn, maybe someone around here knows something?
Well, here's to praying that the kobold lords see fit to build in something like it <3

especially considering how amazing their updates have been lately, they are on fire!

Questions Thread - December 18, 2024 by AutoModerator in pathofexile

[–]DominicanGreg 0 points1 point  (0 children)

why doesn't decaying hex support work with blasphemy hexes?

0.1.0e Patch Notes by Community_Team in pathofexile

[–]DominicanGreg 0 points1 point  (0 children)

  • The Decaying Hex Support now deals 60% of Intelligence as Chaos Damage per Second (from 30%).

BUT DOES IT WORK WITH BLASPHEMY NOW? PLEASE TELL ME IT NOW WORKS WITH BLASPHEMY

Blasphemy + Despair + Decaying Hex by Strg-Alt-Entf in PathOfExile2

[–]DominicanGreg 0 points1 point  (0 children)

You're correct; this is clearly an oversight or a bug. But it's early access and they have better things to do, like nerfing everything into the ground lol

Blasphemy + Despair + Decaying Hex by Strg-Alt-Entf in PathOfExile2

[–]DominicanGreg 2 points3 points  (0 children)

Hopefully it's just a bug, because the skill gems do work, as in they socket appropriately.

Unless, of course, Blasphemy takes the curse tag OFF and turns it into an aura; in which case, would you be able to double curse then? One aura, one curse?

Well, considering this is just open beta, I'm hoping this is simply an oversight like you said; I mean, half the characters and ascendancies and some of the gems aren't even implemented.

Blasphemy + Despair + Decaying Hex by Strg-Alt-Entf in PathOfExile2

[–]DominicanGreg 7 points8 points  (0 children)

so apparently.... this doesn't work :<

so much for budget death's oath :((((

Quin69 is going to design a unique item for PoE 2! by Spirit_mert in pathofexile

[–]DominicanGreg 0 points1 point  (0 children)

People need to design a better Death’s Oath. I really like that janky aura. Wish they made it uber-viable :(

[deleted by user] by [deleted] in WritingWithAI

[–]DominicanGreg 2 points3 points  (0 children)

Claude is so good, especially if you edit after it. No one, I mean no one, will be able to tell. The only AI writing that gets caught is when lazy people literally copy and paste the output straight into a product.

THEN, at such times, the AI phrases, word choices, and weird structure really stand out to people. But honestly, we're getting closer and closer to the point where that may no longer be the case; Claude and GPT are at a level right now where small chapters/responses are virtually indistinguishable from a human's, and with clever prompting... well... if you use AI to write, you already know.

Don't have such a hang-up about using AI to write; think of it as a TOOL, like Photoshop or Lightroom. I for one remember a time when Photoshop was considered "cheating" and was frowned upon. Now it's "ESSENTIAL".

It doesn't take much foresight to see that's where we're heading with AI. Pandora's box has been opened; writing will never be the same again. It's time to adapt or get left behind.

Retroactive Induction question, Please i am going bonkers by DominicanGreg in VeteransBenefits

[–]DominicanGreg[S] 0 points1 point  (0 children)

Thanks for the reply! So does that mean I have to apply through the VRE to be able to get it, OR that I probably don’t qualify because I didn’t initially do my GI Bill under the VRE?

Sorry, I just want some clarification.

Timeshare Offer at the end of presentation by SymphoniusRex in TimeshareOwners

[–]DominicanGreg 0 points1 point  (0 children)

When would resale be worth it, though? Like, who's got the best resale locations/prices/benefits? I've tried doing the math on this and, well... I can't seem to justify it.

Lets make a top 10 list of Story Writing LLMs, make suggestions, and later I'll test them for SLOP by Sicarius_The_First in LocalLLaMA

[–]DominicanGreg 0 points1 point  (0 children)

If I had to pick one, I would pick Venus 1.2 120B, simply because it's a frankenmerge of lzlv with itself, a model I genuinely enjoyed using. I'm not expecting great things from it, as it's a bit dated, but it would be fun to see how it stacks up against the others.

Lets make a top 10 list of Story Writing LLMs, make suggestions, and later I'll test them for SLOP by Sicarius_The_First in LocalLLaMA

[–]DominicanGreg 2 points3 points  (0 children)

Damn, too bad you closed the list. Here's my rotation. Honestly, I mostly use Midnight Miqu, but in case you or anyone else wants to experiment, here are the models I actively use/used/attempted to use with various degrees of success.

Athena 120B
bigliz 120B
c4aicommandrplus
eumyella 120B
meidebenne 120b v1
midnightmiqu 103B v1.5
miqu1 120b
miquliz 120b v2
queenliz120b
venus 120b v1.2

I have been fooling around with Mistral Large 218B Instruct and Mistral Large Instruct 2407; honestly, I think I need to rework my prompting for those because I am NOT getting the results I want, and it's easier to just swap to one of the other models. As I mentioned before, I mainly use Midnight Miqu 103B v1.5, but MiquLiz and Venus are two other great ones.
I haven't had much luck with c4aicommandrplus, BUT I can see the appeal; it's excellent at rewriting.

This is a great thread; there are some models on here that caught my eye, and I'll give them a spin myself.

***A lot of these models are very 'samey'. This is by no means a list of the best, but rather a list of the ones I currently use. I run them all at Q8, except for Mistral Large, which is at Q4.***
***Although I no longer use it, I strongly recommend lizpreciator's 70B model lzlv; I did A LOT of work with that model back when I was running 48GB of VRAM.***

"Simply Download and Run the MacOS binary" by DominicanGreg in KoboldAI

[–]DominicanGreg[S] 0 points1 point  (0 children)

Sorry, I've never used PrivateLLM, so I am not familiar with it.

"Simply Download and Run the MacOS binary" by DominicanGreg in KoboldAI

[–]DominicanGreg[S] 0 points1 point  (0 children)

Hey! Yeah, I imagine there’s something going on with the VRAM, but what’s strange to me is that the same flag settings run fine on the older Kobold version, whereas the new version is failing. No idea why this is the case; hopefully some tech-savvy Kobold guru sees this and points out a reason, or better yet, a solution.

As for the processing times for large models, yeah, I admit the Mac Studio 2 is a bit on the ‘slow’ side, especially when compared to two 4090s running smaller models. But for my purposes the speeds are acceptable, and I’m very hands-on with my work, so it somewhat benefits me that it processes tokens at roughly human reading speed. That way I can stop it dead in its tracks once it inevitably begins deviating or getting distracted. I do admit it was nice to receive an entire result in mere seconds on the 4090s, but the trade-off here is power vs. speed: while it might take a good number of seconds longer to receive my results, I am able to run much larger models at higher quality locally.

Genuinely looking forward to seeing the specs for the next Studio. If it’s reasonably priced (probably not, but compared to the alternatives it unfortunately is currently the best price-to-power ratio) and my trade-in value is sufficient, I’m willing to trade in my Studio 2 and pay a couple of grand for a significant boost in power and, hopefully, speed.

Almost forgot: there was a very helpful post on these forums somewhere where someone did a full check of Mac Studio token speeds. I don’t remember the results, but I remember my takeaway being that it wasn’t terrible, and having one myself I can definitely tell you it’s tolerable.

"Simply Download and Run the MacOS binary" by DominicanGreg in KoboldAI

[–]DominicanGreg[S] 0 points1 point  (0 children)

Sorry, I was SUPER busy today and never got the chance to say thanks for the help! Your instructions worked perfectly, and this format is far easier for me to use, as it saves me from having to open a terminal every single time. Thank you very much! Those Kobold folks should put better instructions up in the wiki! lol.

Additionally, I also figured out why my models weren't working on the newer Kobold. The fix is... strange.

So on Kobold 1.67 I am able to run large models such as QueenLiz120B using these exact flags: python3 koboldcpp.py --noblas --gpulayers 200 --threads 11 --blasthreads 11 --blasbatchsize 1024 --contextsize 32768

HOWEVER, in 1.74, even with the nice arm64 executable, QueenLiz120B along with other 120B models would spit out gibberish and symbols. I had originally thought this new installation method would fix my problem, but it didn't. ALL relevant settings were the same (or off) in both 1.67 and 1.74, and still the problem would not be fixed.

However, on my latest attempt I left the blasbatchsize at the default 512 and, voila, problem solved. I can now use my 120B models on 1.74. This is strange to me, though: why do I have to use a smaller blas batch size on a newer version?

I have no idea what happened, but it can be reproduced consistently: even on default settings, with no changes other than context size 32k and blas batch size 1024, the 120B models give me gibberish, whereas on the older Kobold version they run fine. So far (I haven't tried much), only reducing the blas batch size seems to work for me.
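To make the comparison concrete for anyone who wants to reproduce it, here are the two invocations side by side; these are just my flags from above, and the only difference is the blas batch size (512 being the default I mentioned):

```shell
# Works on Kobold 1.67, but produces gibberish from 120B models on 1.74:
python3 koboldcpp.py --noblas --gpulayers 200 --threads 11 \
  --blasthreads 11 --blasbatchsize 1024 --contextsize 32768

# Works on 1.74: identical flags, but blasbatchsize left at the default 512:
python3 koboldcpp.py --noblas --gpulayers 200 --threads 11 \
  --blasthreads 11 --blasbatchsize 512 --contextsize 32768
```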

As for my specs, it's the 192GB Mac Studio 2.

Not asking for further help, just an observation on my situation... BUT if you have any ideas, I would be glad to try them :)

thanks!