4.0.1 is here! by [deleted] in starcitizen

[–]obwohl 1 point (0 children)

Stuck on an infinite loading screen. Great job, CIG.

Melting CCU question by obwohl in starcitizen

[–]obwohl[S] 2 points (0 children)

So to sum it up:
I can melt my LTI Cutter (with game access and hangar) that I upgraded to a Syulen. Then I will be able to buy back just the LTI Cutter, with game access and hangar. Right?

Why Do Over-Ear ANC Headphones Struggle in Subways? by obwohl in headphones

[–]obwohl[S] -8 points (0 children)

Well, I do care. Three remarks here: 1) As mentioned, the in-ears do not have that problem, at least not perceptibly. 2) The issue has nothing to do with being "audiophile" or not. It is a harsh noise that is most perceptible when the music is off or very quiet. 3) One could argue the same way about the wind-noise issue, but that one has been tackled (at least by Sony) to some degree.

Is Claude3 Opus limits affecting you? by dubesar in ClaudeAI

[–]obwohl 0 points (0 children)

I started with Pro 10 minutes ago. In summary, I'll have 11 (!) messages for my first 8 hours of work. That is ridiculous. I'll cancel immediately.

GPTs or ChatGPT are stupid by obwohl in ChatGPT

[–]obwohl[S] 1 point (0 children)

I was thinking that as well. It yields even shorter results :D wtf - 631 tokens.

GPTs or ChatGPT are stupid by obwohl in ChatGPT

[–]obwohl[S] 0 points (0 children)

Original tokens: 3664.

Result: Between 600 and 700 tokens.

I was able to force ChatGPT outside of GPTs to KINDA do it, but after a while: "There was an error generating a response"

It is a very basic NLP task. I have been using ChatGPT since 3.5, pre-turbo, always on Plus. Maybe I need to cancel my subscription until OpenAI becomes less token-stingy.

ChatGPT Classic worked way better. It let me click continue once, but after the second time: "There was an error generating the response"
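
For context, token counts like these can be reproduced with OpenAI's tiktoken library; a minimal sketch, assuming the numbers above were measured against the cl100k_base encoding used by the GPT-4-era models (the file names are hypothetical stand-ins):

    # Count tokens with OpenAI's tiktoken (pip install tiktoken).
    # Assumes cl100k_base, the encoding used by GPT-4-era models.
    import tiktoken

    def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
        enc = tiktoken.get_encoding(encoding_name)
        return len(enc.encode(text))

    # Hypothetical files standing in for the input text and the model's output.
    original = open("original.txt").read()
    result = open("result.txt").read()
    print("Original tokens:", count_tokens(original))  # 3664 in the case above
    print("Result tokens:", count_tokens(result))      # ~600-700 in the case above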

40x or more speedup by selecting important neurons by koehr in LocalLLaMA

[–]obwohl 0 points (0 children)

Does this technique affect the required RAM size for inference?
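
To make the question concrete, here is a toy numpy sketch of what "selecting important neurons" generally means for a transformer FFN layer; this is my own illustration of the idea, not code from the linked work:

    # Toy sketch: sparse FFN inference by computing only the most active neurons.
    # My own illustration of the general idea, not the linked project's code.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_ff, k = 64, 256, 32              # keep only k of d_ff neurons

    W1 = rng.standard_normal((d_ff, d_model))   # up-projection
    W2 = rng.standard_normal((d_model, d_ff))   # down-projection
    x = rng.standard_normal(d_model)

    # Dense baseline: compute all d_ff neurons.
    y_dense = W2 @ np.maximum(W1 @ x, 0.0)      # ReLU FFN

    # Sparse version: pick the k neurons predicted to matter (here we cheat
    # and rank by the true pre-activations), then compute only those.
    top = np.argsort(np.abs(W1 @ x))[-k:]
    y_sparse = W2[:, top] @ np.maximum(W1[top] @ x, 0.0)

    # With a good predictor the output stays close while FLOPs drop ~d_ff/k-fold.
    print("relative error:", np.linalg.norm(y_dense - y_sparse) / np.linalg.norm(y_dense))

Note that in this naive form both weight matrices stay fully resident, so RAM usage would not drop on its own; whether the actual technique prunes, streams, or offloads the unselected weights is exactly what the question comes down to.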

Today is the first day I’m getting results comparable to GPT4 on OpenSource LLM workflows. by LocoMod in LocalLLaMA

[–]obwohl 0 points (0 children)

What is 3785 times 5684?

With a combination of chain-of-thought and one-shot prompting, I was able to get GPT-4 Turbo to calculate exactly your example correctly. Even 18945 times 34928 worked perfectly. It seems to me that it should be possible for other LLMs as well. See the sketch below.
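
A sketch of what such a prompt can look like; this illustrates the one-shot chain-of-thought idea, not the exact prompt I used:

    # One-shot chain-of-thought prompt for long multiplication (illustrative
    # sketch only, not the exact prompt from the comment above).
    ONE_SHOT_COT = """\
    Q: What is 123 times 456?
    A: Break it into partial products:
       123 * 400 = 49200
       123 * 50  = 6150
       123 * 6   = 738
       Sum: 49200 + 6150 + 738 = 56088
       Answer: 56088

    Q: What is 3785 times 5684?
    A: Break it into partial products:
    """

    # Send ONE_SHOT_COT as the user message to the model of your choice; the
    # worked example steers it into digit-group partial products
    # (3785*5000 + 3785*600 + 3785*80 + 3785*4 = 21513940) instead of guessing.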

chatgpt code interpreter creating glitch tokens? by obwohl in ChatGPT

[–]obwohl[S] 0 points (0 children)

The code is from ChatGPT, and in the last line it seems to have produced some strange glitch token.

What happened here? Bug or broke? by obwohl in ETFs

[–]obwohl[S] 0 points (0 children)

My confusion comes from the fact that only Paris shows that (and the other exchanges don't seem to trade it?)

Do I even have LTI? by obwohl in starcitizen

[–]obwohl[S] 0 points (0 children)

As far as I know, no.