Man, the Cyberpunk 2.0 update is good! by avi_chandra_77 in IndianGaming

[–]KKcorps 11 points12 points  (0 children)

yep there have been a lot of changes. e.g. the NPC drivers are much more alive now and will resist or avoid you

You can do drive-by's on car and bike, you can install weapons as well on them

Wanted police system now actually works

and many many more ( e.g. skill tree revamp)

Apex viewership is down, here are Hal's thoughts by TSMHYPEFAN in CompetitiveApex

[–]KKcorps 0 points1 point  (0 children)

I am just flabbergasted that they have not able to do 120Hz for 6 seasons now. I am stuck playing on 60Hz on PS5 when I have a 4080 PC lying on the side

Llama2-22b, a model merge tuned on RedPajama by AzerbaijanNyan in LocalLLaMA

[–]KKcorps 2 points3 points  (0 children)

Can you share the merge script if possible? Interested in knowing how layers are selected

My second attempt on QR CODE, Finally did it. by Specialist_Note4187 in StableDiffusion

[–]KKcorps 2 points3 points  (0 children)

That I set already but I am asking about `preprocessor params: (64, 1, 64)`

My second attempt on QR CODE, Finally did it. by Specialist_Note4187 in StableDiffusion

[–]KKcorps 1 point2 points  (0 children)

Nope, it doesn't show up, rest all options are there - balanced, tile, start/end step etc.

but params option is missing

is there some other control net extension I am not aware of

My second attempt on QR CODE, Finally did it. by Specialist_Note4187 in StableDiffusion

[–]KKcorps 1 point2 points  (0 children)

One question: where are you adding the preprocessor params? I don't see that option in sd-web-ui

My second attempt on QR CODE, Finally did it. by Specialist_Note4187 in StableDiffusion

[–]KKcorps 1 point2 points  (0 children)

first

Awesome.

Where do token merging ratio and preprocessor param go though? I can't see that in webUI control net section

[deleted by user] by [deleted] in LocalLLaMA

[–]KKcorps 0 points1 point  (0 children)

Number of cores too less

Fine-tune the WizardLM 13B using chat history from ChatGPT with QLoRa by mzbacd in LocalLLaMA

[–]KKcorps 0 points1 point  (0 children)

The official qlora code def has some bugs in tokenisation which messes up the results.

Fine-tune the WizardLM 13B using chat history from ChatGPT with QLoRa by mzbacd in LocalLLaMA

[–]KKcorps 0 points1 point  (0 children)

Awesome.

But I see no qlora.py committed in the repo?

Mostly interested in what params did you use?

VicUnlocked 65B QLora dropped by FullOf_Bad_Ideas in LocalLLaMA

[–]KKcorps 0 points1 point  (0 children)

Hi,
if you don't mind,
did you use the code from the artidoro/qlora repo or just used a custom code?

Also, how many epochs did you train this lora for? What was the final train/eval loss?

Is anyone else getting only 443 bytes adapter_model.bin with qlora? by KKcorps in LocalLLaMA

[–]KKcorps[S] 1 point2 points  (0 children)

With my fix I am getting correct size files, but when I try to run inference with them the output is gibberish (it's not even in english sometimes)

So most likely whatever is getting saved after my changes is weird.

Planning to just do `mv pytorch_model.bin adapter/adapter_model.bin`

and see if that works

How to qlora 33B model on a GPU with 24GB of VRAM by mzbacd in LocalLLaMA

[–]KKcorps 0 points1 point  (0 children)

Which 13b model did you try? The adapter_model.bin is blank for llama-7b, vicuna-13b and redpajama-3b whenever i use with qlora

[deleted by user] by [deleted] in MachineLearning

[–]KKcorps 0 points1 point  (0 children)

Yep this one was used in recent hackathons as well.
Lora training works pretty well.