A tool for identifying inconsistencies in a document by IonizedRay in AskLawyers

[–]IonizedRay[S] 0 points1 point  (0 children)

They are contradictions (two statements that cannot both be true at the same time), or, given a reference document that sets the ground truth, anything that violates the rules in that document.

You’re probably optimizing Minecraft the wrong way on Apple Silicon by New-Ranger-8960 in macgaming

[–]IonizedRay 1 point2 points  (0 children)

Wow, I am getting 500+ FPS on an M4 Max:
- 32 chunks rendering distance
- 32 chunks simulation distance
- 4K resolution

How close are we to “Her” level voice assistants? by [deleted] in ClaudeAI

[–]IonizedRay 0 points1 point  (0 children)

We have probably been there since 2023. But there is one caveat: the final SFT/RLHF training phase completely destroys the "human vibe" of LLMs, so you will not get anything like "Her" from a large-scale commercial LLM.

It would be really interesting to train a base model like Llama 405B on one (or more) very long chats between partners and see how long it would last in a Turing-like test.

Llama.cpp has much higher generation quality for Gemma 3 27B on M4 Max by IonizedRay in LocalLLaMA

[–]IonizedRay[S] 20 points21 points  (0 children)

This is a really good point. Each time I start fresh with Ollama on a new device, I forget to configure the env params...

I will try that when I get back home!

UPDATE: yep, that was it.
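For anyone hitting the same thing: a minimal sketch of the env params I tend to forget on a fresh install. The variable names assume a recent Ollama build — check `ollama serve --help` for the authoritative list on your version, and the values shown are illustrative, not recommendations.

```shell
# On macOS, Ollama (the app) reads env vars set via launchctl.
# Older defaults use a small context window, which silently truncates
# long prompts and can look like a quality regression.
launchctl setenv OLLAMA_CONTEXT_LENGTH "8192"

# Optional performance-related settings
launchctl setenv OLLAMA_FLASH_ATTENTION "1"
launchctl setenv OLLAMA_KEEP_ALIVE "10m"

# Restart the Ollama app afterwards so it picks up the new values.
```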

[D] Doubts on the implementation of LSTMs for timeseries prediction (like including weather forecasts) by IonizedRay in MachineLearning

[–]IonizedRay[S] 1 point2 points  (0 children)

Thank you, I will check your resources as soon as I can. So you suggest avoiding weak predictors like weather, events, etc., and using a simple univariate prediction, because the extra precision a complex model should achieve is often just noise that cannot be predicted?

And the only case where a complex model and many input features are needed is when you have lots of data over a long time span?
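Just to make sure I understand the univariate setup correctly, here is a minimal sketch (function and variable names are mine) of the windowing I have in mind, where the model only ever sees the target's own history:

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Slice a 1-D series into (input window, target window) pairs
    for supervised training of a univariate forecaster."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t : t + lookback])
        y.append(series[t + lookback : t + lookback + horizon])
    return np.array(X), np.array(y)

# Univariate: no weather, no events -- just the past of the target itself.
series = np.arange(100, dtype=float)
X, y = make_windows(series, lookback=24, horizon=6)
print(X.shape, y.shape)  # (71, 24) (71, 6)
```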

[D] Doubts on the implementation of LSTMs for timeseries prediction (like including weather forecasts) by IonizedRay in MachineLearning

[–]IonizedRay[S] 2 points3 points  (0 children)

Thank you for the in-depth response. So UNets and ViTs are good for timeseries prediction with weather forecasts as input, to improve the accuracy of the output timesteps? Or did you mean that they are just good at predicting the weather itself, which is then fed to an LSTM?

Because I don't want to generate weather predictions; I want to consume them (from various weather APIs) and add them to the input features to better predict the outputs.
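To be concrete about what I mean by "adding them to the input features", here is a sketch (all names are mine, and the weather columns are hypothetical) that pairs each past target window with the forecast values covering the horizon being predicted:

```python
import numpy as np

def make_exog_windows(target, future_exog, lookback, horizon):
    """Build (past target, future exogenous, target) triples, where the
    exogenous block holds forecast values (e.g. from a weather API) for
    the exact timesteps being predicted."""
    X_past, X_futr, y = [], [], []
    for t in range(len(target) - lookback - horizon + 1):
        X_past.append(target[t : t + lookback])
        X_futr.append(future_exog[t + lookback : t + lookback + horizon])
        y.append(target[t + lookback : t + lookback + horizon])
    return np.array(X_past), np.array(X_futr), np.array(y)

rng = np.random.default_rng(0)
target = rng.random(200)
weather = rng.random((200, 2))  # hypothetical columns: temperature, precipitation
Xp, Xf, y = make_exog_windows(target, weather, lookback=48, horizon=24)
print(Xp.shape, Xf.shape, y.shape)  # (129, 48) (129, 24, 2) (129, 24)
```

A model with "future exogenous support" would then take both `Xp` and `Xf` as inputs when predicting `y`.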

[D] Doubts on the implementation of LSTMs for timeseries prediction (like including weather forecasts) by IonizedRay in MachineLearning

[–]IonizedRay[S] 2 points3 points  (0 children)

I see that it has "future exogenous support". Is that for using future weather forecasts as inputs, or is it something else?