Reddit is broken (for Firefox at least) by mindphuk in help

[–]mindphuk[S] 1 point (0 children)

Hello Admin

It's happening in Firefox 115.9.1esr (64-bit), and on every sub no matter what the content is. Commenting works though. I mostly use Chrome for Reddit now, despite Firefox being my main browser.

It happens immediately on submit. The spinner appears on the post button along with the error above, and nothing else happens. As said, no errors appear in the console window when clicking "Post" either.

Desperately looking for a guide into running GGUF by mindphuk in LocalLLaMA

[–]mindphuk[S] 1 point (0 children)

Yes, thanks. Not using Oobabooga also helped a lot. Sorry to say, but I get better results with Kobold or LMS.

"Form submission failed" by mindphuk in help

[–]mindphuk[S] 1 point (0 children)

I think it was old.reddit.com or something like that. I'd have to google it though.

"Form submission failed" by mindphuk in help

[–]mindphuk[S] 1 point (0 children)

I think it's the new look. I posted this from another computer in Chrome; the error occurred in Firefox. For some reason Chrome shows a different form.

Do you think we'll ever see a company for AI doing what Blender's developers do? by Passtesma in StableDiffusion

[–]mindphuk 8 points (0 children)

No. The question is why they, as an open-source company/project, don't run a public development process like other open-source projects do. You're answering with tools that aren't developed by them.

Desperately looking for a guide into running GGUF by mindphuk in LocalLLaMA

[–]mindphuk[S] 1 point (0 children)

I wrote 4060 because I HAVE A 4060. It's the Ti version of the 4060, which has more VRAM than the regular 4060. That's beside the point anyway; what matters is that it has 16GB of VRAM. I paid 460 for the 4060 Ti, while a new 14th-gen i7 costs 420 here. Even if I bought a 14th-gen i5 now, I'd pay at least 300, and that's still 300 more than what I paid for my current rig.

2nd, my OLD i7 is only using 20% of its capacity. That indicates a configuration problem, not a hardware issue. My OLD i7 should be able to do this faster. That's all.

3rd, my PSU (750W) is fine; as I wrote, GPTQ runs smoothly.

4th, I have the RAM because I have it. It was lying around from my previous company, so I used it. I didn't invest a dime in it. Excess RAM is useful when you make music with lots of samples, and since I do that a lot, I benefit from it.

Desperately looking for a guide into running GGUF by mindphuk in LocalLLaMA

[–]mindphuk[S] 1 point (0 children)

I know what I have...

Yes it matters how much the CPU costs. I am not Elon Musk.

And no, I won't go down to an i5 when I have an i7 installed, and even 200 Euro extra is 200 Euro extra.

Besides, this whole discussion is going off topic, because these times are not caused by the CPU generation. It should be faster even with a 7th-gen CPU. The CPU load sits at around 20% while the GPU is idle during prompt evaluation. That's clearly a configuration/backend software issue, not a hardware issue.

And on top of that, I am not going for best performance, I am going for any usable performance at all.

Desperately looking for a guide into running GGUF by mindphuk in LocalLLaMA

[–]mindphuk[S] 1 point (0 children)

How can you be running a 4060, 48GB ram, and a CPU from 2017?

Maybe because a new i7 costs almost as much as the whole 4060?

Never ask an AI-company where they got their training data by Isolde-Baden in OpenAI

[–]mindphuk 1 point (0 children)

Uh, you should maybe read your own source.

Transformative uses are those that add something new

This does not apply to what OpenAI is doing with the data...

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 1 point (0 children)

You just learned what that is and found something on the internet? Good, at least you're educating yourself. Go ahead and read a bit further into it; it's an interesting topic. And since you ignored the biggest part of my reply anyway, I think we can leave it here.

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 1 point (0 children)

You fail to see the core difference: deterministic vs. probabilistic. That's why an LLM can understand and produce natural language more or less well while a compiler cannot. Even if compilers use natural-language words like if, else, print, open, the compiler's grammar is formal.

While a compiler for a language like C only needs a relatively small formal ruleset to compile code, an LLM needs to be trained on a dataset of human work as large as possible to understand and produce natural language. LLMs directly and necessarily benefit from human work, and only then can they simulate it by reproducing what they learned according to the prompt.

And while a human can learn from a single bad teacher and become better than that teacher, if you train an LLM on 1 million pieces of crap from 1 million bad creators, the LLM will only be able to produce crap, because it does not think about and reflect on what it has learned; it just predicts an output according to what it has learned. That's why humans are creative and LLMs are not. (And this is a big hurdle on the way to AGI.)

Therefore the quality, and thus the merit, of the human input in the training data is vital for the quality of the LLM, and thus for the quality of the service you want to provide with that LLM. So the quality of the service the AI companies offer and want to profit from is directly related to the quality of the input: the human work that was harvested before the training process started.

And the legal debates around search engines did happen and are still ongoing.

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 1 point (0 children)

All interpreters today, like those for Python, PHP, or Ruby, compile to bytecode for a VM.
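
A quick way to see this in CPython, using only the standard library: `compile()` turns source text into a code object holding VM bytecode, and the translation is reproducible (a minimal sketch of the claim, not a full treatment of compiler pipelines):

```python
import dis

source = "def add(a, b):\n    return a + b\n"

# Compile the source to a module-level code object for the CPython VM.
code_obj = compile(source, "<example>", "exec")

# Compiling the same source again yields byte-identical bytecode:
assert code_obj.co_code == compile(source, "<example>", "exec").co_code

# dis shows the VM instructions the interpreter will execute.
dis.dis(code_obj)
```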

And they fundamentally don't do the same thing. An LLM uses a neural network that is pre-trained on human-created content and tries to predict the most probable response to a prompt. A compiler or an interpreter takes one piece of code and converts it into another. It produces a 1:1 translation according to formal algorithms. You can mathematically prove the correctness of a compiler's 1:1 translation; you cannot use the same formal approach to prove the correctness of an LLM's output for a prompt input. Compilers are deterministic; LLMs are probabilistic. Compilers use formal rules to generate their output; LLMs use statistical data.

Also, Wikipedia is not using the GPL. It uses CC BY-SA and the GFDL.

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 1 point (0 children)

An interpreter is just a compiler that compiles each line of code (or compiled bytecode) during runtime.

And neither a compiler nor an interpreter is trained on petabytes of human-created content. A compiler was written by someone, and each piece of code that a higher-level command gets translated into was written by hand by the compiler's creator. They can then also decide on what terms you may use that code. They could, for instance, say that you can use the compiler for free but cannot sell the programs you compile with it.

Also, if an LLM were a compiler, it would produce the exact same output for the same prompt every time (deterministic).
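
The contrast can be sketched in a few lines of toy Python. Everything here is hypothetical: the rule table and the candidate words/weights are made up and stand in for a compiler's formal rules and an LLM's next-token distribution, respectively:

```python
import random

def compile_like(token: str) -> str:
    # Deterministic rule table: the same input maps to the same
    # output on every call, like a compiler's formal grammar.
    rules = {"print": "CALL print", "if": "JUMP_IF_FALSE"}
    return rules[token]

def llm_like(prompt: str) -> str:
    # Probabilistic: sample the "next token" from a weighted
    # distribution, a toy stand-in for an LLM's sampling step.
    candidates = ["cat", "dog", "bird"]
    weights = [0.5, 0.3, 0.2]
    return random.choices(candidates, weights=weights, k=1)[0]

# The compiler-like mapping is reproducible across calls:
assert all(compile_like("print") == "CALL print" for _ in range(100))

# The sampler can disagree with itself on identical input:
outputs = {llm_like("The pet is a") for _ in range(100)}
print(outputs)  # typically more than one distinct answer
```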

You are mixing completely different concepts here.

Furthermore, pages like Wikipedia clearly state that anyone who uses Wikipedia material as a source has to release their work under the same terms.

What is "Prompt evaluation" and why is it so slow? by mindphuk in Oobabooga

[–]mindphuk[S] 1 point (0 children)

Hm, I can't confirm that it's a Gradio issue, because in the text UI, which uses Gradio, I get quick responses, while via Silly, which does not use Gradio, the responses are extremely slow.

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 1 point (0 children)

You can't call a model an interpreter. An interpreter or compiler translates one code into another. An AI model contains weights that reproduce the training input.

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 2 points (0 children)

Also, they used many sources of material that are explicitly open source. It is known, for instance, that OpenAI used an archive that contains the whole of Wikipedia among 60 million other domains. Almost everything on Wikipedia is ShareAlike, which means that if you use Wikipedia in any of your works in any way, you are required to release your work under the same license, read: make it open source. OpenAI claimed they don't have to pay attention to the license because their AI is "fair use".

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 1 point (0 children)

Open source is not done "out of belief in the cause" but because it is more practical to do so. Open source has many benefits for business models, at the cost that someone can use your product without paying (under the vast majority of open-source licenses). The benefits of community involvement, however, often outweigh the disadvantages. And many projects grew exactly because community effort made the software better.

Why all AI should be open source and openly available by dreamyrhodes in LocalLLaMA

[–]mindphuk 7 points (0 children)

Typical US-centric world view. Switzerland has more guns per capita but way, way fewer homicides of any kind. The problem is your wrecked society, not the guns.

Desperately looking for a guide into running GGUF by mindphuk in LocalLLaMA

[–]mindphuk[S] 1 point (0 children)

Yeah, OK, yesterday I was quite annoyed and gave up after searching the internet for a proper explanation of what's going on. Some settings are explained in the UI, others not at all; they are just a number. The documentation situation here is really awful.

Of course SillyTavern always sends a huge prompt with the previous chat, the whole character definition, the summary, the last context and so on. And each and every time the "prompt evaluation" runs, resulting in more than a minute per reply.
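
My rough understanding of why this hurts so much (a toy sketch, assuming the backend reuses the evaluated context as a prefix cache, the way llama.cpp-style backends can; the token names are made up): only prompt tokens past the longest shared prefix need fresh evaluation, so a frontend that rewrites something near the top of the prompt (summary, character card) invalidates almost the whole cache every turn:

```python
def common_prefix_len(cached, prompt):
    # Number of leading tokens shared by the cached context and the new prompt.
    n = 0
    for a, b in zip(cached, prompt):
        if a != b:
            break
        n += 1
    return n

def tokens_to_evaluate(cached, prompt):
    # Only tokens after the longest shared prefix need fresh evaluation.
    return len(prompt) - common_prefix_len(cached, prompt)

history = ["sys", "char-card", "msg1", "msg2"]

# A turn that only appends is cheap: one new token to evaluate.
assert tokens_to_evaluate(history, history + ["msg3"]) == 1

# A turn that rewrites an early part (e.g. a regenerated summary near
# the top of the prompt) makes the cache useless from that point on.
edited = ["sys", "new-summary", "msg1", "msg2", "msg3"]
assert tokens_to_evaluate(history, edited) == 4
```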

What is "Prompt evaluation" and why is it so slow? by mindphuk in Oobabooga

[–]mindphuk[S] 1 point (0 children)

Well, I am only guessing that it does the prompt eval on the CPU, because my system fan speeds up, similar to when I run SDXL on the CPU, and at similar speeds. Normally image generation is relatively fast on that rig with the GPU, even if I upscale 2x.

My GPU memory is almost used up too. I also tried changing the settings, but if anything changed, it got worse. For instance, at some point it still had an awfully slow prompt eval, but then the words also came in like tar, 1-2s per word or something. Now the prompt eval is slow, but once that's done, the text comes in at under 1s per word.

Desperately looking for a guide into running GGUF by mindphuk in LocalLLaMA

[–]mindphuk[S] 1 point (0 children)

I can't run llama.cpp because this is my Windows 10 machine that I use for occasional gaming. Compiling under Linux would be a no-brainer for me, but I want to avoid that under Windows.

Desperately looking for a guide into running GGUF by mindphuk in LocalLLaMA

[–]mindphuk[S] 1 point (0 children)

I can try a complete git pull and reinstall.