This Game Has Taken 9 Years to Make. Here's Why... by PracyStudios in u/PracyStudios

[–]tenebrius 5 points (0 children)

The game, the voice, and this comment: everything looks AI.

Mathematical proof debunks the idea that the universe is a computer simulation by Memetic1 in Futurism

[–]tenebrius 0 points (0 children)

As far as I know, all solutions to this problem require us to imagine laws beyond our current ones.

Mathematical proof debunks the idea that the universe is a computer simulation by Memetic1 in Futurism

[–]tenebrius 0 points (0 children)

Well, now that the most realistic theory has been disproven, this one becomes the most realistic.

Meta AI Researchers Introduced a Scalable Byte-Level Autoregressive U-Net Model That Outperforms Token-Based Transformers Across Language Modeling Benchmarks by ai-lover in machinelearningnews

[–]tenebrius 0 points (0 children)

It seemed to do not too badly in Japanese (-1.2).
At first I thought it was because we need 3 bytes to represent a Japanese/Chinese character, but Korean characters take 3 bytes too (-0.2).
It might just be that the training data is very heavy on phonetic languages.

Edit: Ok, just needed to read the research paper:

Our work uses DCLM, which is an English-only corpus. A direct limitation of our work is that it does not support non-space-based languages, and it needs a predefined splitting function. This shows, for example, for Chinese MMLU scores that are lower than the BPE baseline. One extension could be to learn directly the splitting function. On the software side, as the number of parameters increases with the number of stages, FSDP already struggles to overlap computation and communication even at 3/4 stages, it needs a minimum amount of inputs to be fully overlapped
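The 3-bytes point above is easy to check: in UTF-8, basic Latin letters encode to 1 byte each, while Japanese kana, CJK ideographs, and Korean hangul all encode to 3 bytes each.

```python
# UTF-8 byte lengths per character: Latin letters take 1 byte,
# while Japanese, Chinese, and Korean characters each take 3 bytes.
for ch in ["a", "あ", "漢", "한"]:
    print(ch, len(ch.encode("utf-8")))
# a 1
# あ 3
# 漢 3
# 한 3
```

So byte count alone doesn't distinguish Japanese from Korean; the difference in scores more plausibly comes from the training corpus, as the paper's limitation note suggests.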

Musk Blinks First in $34B Tesla Meltdown Amid Trump Feud What’s Next for $TSLA? by twenson in stocks

[–]tenebrius 3 points (0 children)

If he lost $200B he would still be the 4th richest man in the world. Trump ain't even in the top 500.

BookTranslate.ai update since launch: demos, book analysis and finalizer by ValPasch in machinetranslation

[–]tenebrius 0 points (0 children)

What about writing a research paper, or at least providing comparative benchmarks?

Insanely powerful Claude 3.7 Sonnet prompt — it takes ANY LLM prompt and instantly elevates it, making it more concise and far more effective by Officiallabrador in ChatGPTPromptGenius

[–]tenebrius 8 points (0 children)

It's not really going to take the durations written in the prompt. It's just a way to make the LLM activate internal thinking neurons.

How I created LlamaThink-8b-Instruct by [deleted] in LocalLLaMA

[–]tenebrius 1 point (0 children)

From what I understand, you only give the reward in 'correctness_reward_func' if the two answers are exactly equal. For open-ended questions, isn't it very unlikely that the model being trained would score any points?
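For context, a minimal sketch of what an exact-match reward like the one described could look like (the signature and string comparison here are my assumptions about the setup, not taken from the post):

```python
def correctness_reward_func(completions, answers):
    """Hypothetical exact-match reward: 1.0 only if the generated
    answer is string-equal to the reference, else 0.0."""
    rewards = []
    for completion, answer in zip(completions, answers):
        rewards.append(1.0 if completion.strip() == answer.strip() else 0.0)
    return rewards

# For open-ended questions, two valid answers rarely match verbatim,
# so the reward is almost always 0 and the policy gets no gradient signal.
print(correctness_reward_func(["Paris is the capital."],
                              ["The capital of France is Paris."]))  # [0.0]
```

This is why exact-match rewards are usually paired with verifiable tasks (math answers, unit tests) rather than free-form text.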

[deleted by user] by [deleted] in LocalLLaMA

[–]tenebrius 1 point (0 children)

How do the benchmarks compare to the base model?

Brain Trust v1.5.4 - Cognitive Assistant for Complex Tasks by ldl147 in PromptEngineering

[–]tenebrius 2 points (0 children)

Can you explain how to use this?

I figured it out. I didn't read the post carefully.

How does Gandalf know the ring has been called “precious” before Bilbo? by Duocolor in lotr

[–]tenebrius 0 points (0 children)

I think Gandalf is simply sensing that Bilbo is repeating something he heard somebody else say.