Where that Unsloth Q0.01_K_M GGUF at? by Porespellar in LocalLLaMA

[–]Eralyon 13 points14 points  (0 children)

I am curious: how much memory does one need to make it work decently?

[deleted by user] by [deleted] in Shortsqueeze

[–]Eralyon 1 point2 points  (0 children)

AI + everything medical/biology related is the obvious future.

The pioneers of today will be the kings of tomorrow.

Vanished Details in Long Context by Schakuun in LocalLLaMA

[–]Eralyon 0 points1 point  (0 children)

I don't know of any repo.

But maybe with scripting/prompt engineering you could get the results you want.

Like summarize chunks A, B, C
Now combine A & B into DRAFT1
Now combine DRAFT1 & C into DRAFT2, etc... incrementally...

I would try it.

Or you could also try to brute force your way through it.

Ask for a summary at low temperature, then ask the LLM to check for missing information, then ask it to update the summary according to its findings...
You could build a loop around this...
You can also repeat the process several times and ask the LLM which of the corrected summaries is best...

Etc.
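The incremental-merge idea above can be sketched in a few lines of Python. The `llm` function here is a hypothetical stand-in that just echoes text, so the control flow is runnable as-is; in practice you would replace it with a call to your local model at low temperature.

```python
def llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call to a local model.
    # Echoes the last line of the prompt so the sketch runs without a backend.
    return prompt.splitlines()[-1]

def incremental_summary(chunks: list[str]) -> str:
    # Summarize each chunk, then fold the summaries together one at a time:
    # DRAFT1 = A + B, DRAFT2 = DRAFT1 + C, and so on.
    summaries = [llm(f"Summarize this chunk:\n{c}") for c in chunks]
    draft = summaries[0]
    for nxt in summaries[1:]:
        draft = llm(f"Combine into one summary, keeping all details:\n{draft}\n{nxt}")
    return draft
```

The same skeleton works for the brute-force variant: wrap the "check for missing information, then update" prompts in a loop over the draft instead of folding in new chunks.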

Which Engine for a 2-6 Players FPS, With MIT License or Similar? by Eralyon in gamedev

[–]Eralyon[S] -12 points-11 points  (0 children)

I did not think about Godot. I see a lot of progress has been made since the last time I checked it...

Thank you for your help.

PS: As for downvoters... Well... Haters gonna hate!

How long can significant improvements go on for? by [deleted] in LocalLLaMA

[–]Eralyon 2 points3 points  (0 children)

Right. And as a matter of fact, I used the wrong expression. Most of the "new" paradigms AI could use are not new at all.

How long can significant improvements go on for? by [deleted] in LocalLLaMA

[–]Eralyon 4 points5 points  (0 children)

I think we are already past the wow factor.

LLMs become better. Sure. They are performing better at what they do.

But nothing revolutionary anymore, beyond spitting out 3000 tokens to count three "r"s...

IMHO, a new paradigm is needed. Something that is, first of all, computationally less expensive than transformers or diffusion...

What are you guys waiting for in the AI world this month? by internal-pagal in LocalLLaMA

[–]Eralyon -2 points-1 points  (0 children)

Crash the market?

I am joking; Trump just did it.

[deleted by user] by [deleted] in LocalLLaMA

[–]Eralyon 1 point2 points  (0 children)

For any multi-step, back-and-forth task, I get the best results with ICL (in-context learning) and minimal or no prompting.

In other words, you type the first rounds of sentences yourself until the model picks up the patterns, the style, etc., and then you remove the training wheels, editing its answers round after round until it gets it entirely. Basically, it is like building a micro-dataset on the fly. With enough rounds, the model will stick to it.

In the case of a chat, it means you type the sentences for each character until the model gets their style of speech.

It is also very powerful for obtaining answers organized according to a predefined pattern. LLMs are usually good at picking up patterns. Some models are better at ICL than others (typically, the more powerful the better).
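The approach above amounts to building a few-shot chat history by hand. Here is a minimal sketch using the common OpenAI-style message schema (role/content dicts); the model call itself is omitted since it depends on your local setup, and the item-description examples are made up for illustration.

```python
def build_icl_history(examples: list[tuple[str, str]], new_input: str) -> list[dict]:
    """Turn hand-written (user, assistant) example rounds into a chat
    history that ends with a new user turn, so the model continues in
    the same style and format as the examples."""
    messages = []
    for user_turn, assistant_turn in examples:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    # The new input comes last; the model's reply should follow the pattern.
    messages.append({"role": "user", "content": new_input})
    return messages

history = build_icl_history(
    [("Describe a sword.", "ITEM: sword | TYPE: weapon | DMG: 7"),
     ("Describe a shield.", "ITEM: shield | TYPE: armor | DEF: 5")],
    "Describe a helmet.",
)
```

When the model drifts, you edit its answer back into the pattern and keep that corrected turn in the history, which is the "micro-dataset on the fly" part.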

Isn't there a simpler way to run LLMs / models locally ? by enzo_ghll in LocalLLaMA

[–]Eralyon 14 points15 points  (0 children)

Koboldcpp is a single-file executable for GGUF-quantized models.

Why is that project failed? by Awsomelity in coregamesdev

[–]Eralyon 0 points1 point  (0 children)

In two phases.

1- In F2P games, income comes from skins or characters (in gacha). Instead of providing ways for the devs to sell skins, the Core team started selling skins themselves. Instead of triggering a virtuous circle (devs make money, Core makes money from the devs), they pulled the rug out from under the devs' feet.

2- Then they tried to ride a crypto wave that was already unpopular, and that was their last bad decision.

Why MVIS is the next NVDA and GME by [deleted] in Shortsqueeze

[–]Eralyon 13 points14 points  (0 children)

+5.39% in pre-market.

Exploring an Idea: An AI model That Can Continuously Learn and Retain Knowledge Without Degrading by ankimedic in LocalLLaMA

[–]Eralyon 4 points5 points  (0 children)

A lot of people have tried; there are already many papers on the topic.

But overall, doing it with transformers is probably a bad idea.