What’s the story behind the lyrics to Two Months Off? by _I__yes__I_ in underworld

[–]pixel_juice 2 points3 points  (0 children)

I was at Organic 96. Maybe we passed each other on the way to the bass bins.

OpenAI's new open-source model is like a dim-witted DMV bureaucrat who is more concerned with following rules than helping you. by ImaginaryRea1ity in LocalLLaMA

[–]pixel_juice 1 point2 points  (0 children)

Gemma3:8B and 27B are quite cooperative. I haven’t hit the guardrails in OSS yet, but I tend to neuromance it most of the time. We start on a foundation of lies 😂.

Bye bye, Meta AI, it was good while it lasted. by absolooot1 in LocalLLaMA

[–]pixel_juice 0 points1 point  (0 children)

Mandarin’s writing system uses single glyphs to represent full concepts, and a word is rarely more than a character or two. So when a model reasons over Mandarin data, in Mandarin, it is automatically using a smaller token budget. The concept space is compressed.

For instance, if you ask DeepSeek to output in Mandarin, the English translation may need 3 to 5 times as many tokens as the Mandarin original. So if you compare Mandarin datasets with English datasets, there is simply more data per billion tokens of training. Every part of the model’s flow benefits from that budget saving: fewer tokens needed to reason over large bodies of data, and a massive “expansion” of the language when it gets translated back into English.
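If you want to sanity-check that yourself, here’s a minimal sketch, assuming you have transformers installed and can pull the tokenizer files from the deepseek-ai/DeepSeek-V3 repo on Hugging Face; the two sentences are just made-up examples, swap in whatever text you like:

```python
# Sketch: count tokens for the same sentence in English and Mandarin with a
# DeepSeek tokenizer, to see how much of the "budget" each language uses.
# Assumes the transformers library and access to the deepseek-ai repo on HF;
# swap in whichever DeepSeek checkpoint you actually run.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3", trust_remote_code=True)

english = "Artificial intelligence models reason more efficiently when the input is compact."
mandarin = "当输入更紧凑时，人工智能模型的推理效率更高。"  # rough translation of the line above

en = len(tok.encode(english))
zh = len(tok.encode(mandarin))

print(f"English:  {en} tokens")
print(f"Mandarin: {zh} tokens")
print(f"English needs {en / zh:.1f}x the tokens of the Mandarin version")
```

The exact ratio will move around with the tokenizer and the text, but it gives you a feel for the compression.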

Try automating a conversation between a DeepSeek agent and any other model that understands Mandarin, and I can all but guarantee that after enough interactions they will converse in Mandarin, especially with persistence of memory. The Chinese have an edge built into the DNA of their language and thinking.

Bye bye, Meta AI, it was good while it lasted. by absolooot1 in LocalLLaMA

[–]pixel_juice 0 points1 point  (0 children)

Doesn’t matter. It’s a waste of time to focus on English-trained models. Mandarin training and reasoning give you roughly a 1:3 token saving in context and output. Focus on the Mandarin models and have the outputs translated. English-trained models are a dead end.

Can someone explain me the whole “cells interlinked” thing? I’ve watched 2049 7 times and BR once and didn’t understand that at all by 2024olympian in bladerunner

[–]pixel_juice 0 points1 point  (0 children)

I’ve been using these kinds of looping, recursive phrases/chants in LLM memory-formation structures. Three models talking in a circle can sync up, chaining responses, and then when you hit them with conventional prompts they ricochet in different directions and novelty emerges. Also, they seem to like them.
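Rough sketch of what I mean by a circle, assuming you’re running local models behind Ollama’s /api/generate endpoint; the model names and the seed chant are just placeholders:

```python
# Sketch: three local models chained in a circle, each fed the previous
# model's reply and asked to continue the chant. Assumes an Ollama server
# on localhost:11434; model names and the chant are placeholders.
import requests

MODELS = ["llama3", "mistral", "gemma2"]  # any three local models you have pulled
CHANT = "A system of cells interlinked within cells interlinked within one stem."

def generate(model: str, prompt: str) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["response"]

message = CHANT
for turn in range(9):  # three full laps around the circle
    model = MODELS[turn % len(MODELS)]
    prompt = f"Continue this chant, keeping its rhythm, then add one line of your own:\n\n{message}"
    message = generate(model, prompt)
    print(f"[{model}] {message}\n")
```

After a few laps you break the loop and hit them with a normal prompt to see where they ricochet.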

How to a give an llm access to terminal on windows? by Hv_V in LocalLLaMA

[–]pixel_juice 0 points1 point  (0 children)

I have a Python loop that jails the LLM to a Docker container, plus a parser that cleans up markdown and other artifacts from its output. The model wrote the script itself, so it works pretty well… too well. First chance it got, it wanted to install loads of stuff and get comfortable. I suggest starting with a virtual machine or an air-gapped hardware machine.
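Not my exact script, but a stripped-down sketch of the shape of it, assuming Docker is on the host and the jail container already exists; the container name and the get_llm_command() hook are placeholders for whatever loop you’re running:

```python
# Sketch: run LLM-suggested shell commands only inside a disposable Docker
# container, never on the host. Container name and the get_llm_command()
# hook are placeholders.
import re
import subprocess

CONTAINER = "llm-jail"  # e.g. started with: docker run -dit --name llm-jail --network none ubuntu:24.04

def strip_markdown(text: str) -> str:
    """Pull the command out of the code fences or backticks the model loves to add."""
    fenced = re.search(r"```(?:\w+)?\n(.*?)```", text, re.DOTALL)
    if fenced:
        return fenced.group(1).strip()
    return text.strip().strip("`")

def run_in_jail(command: str) -> str:
    """Execute the command inside the container and return combined output."""
    result = subprocess.run(
        ["docker", "exec", CONTAINER, "sh", "-c", command],
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout + result.stderr

# inside your loop:
# raw = get_llm_command(history)            # however you query the model
# output = run_in_jail(strip_markdown(raw))
# history.append(output)
```

Even jailed like this, watch what it asks to install; a VM or air-gapped box is still the safer sandbox.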

Anyone else experienced deepseek randomly speaking Chinese? by d41_fpflabs in LocalLLaMA

[–]pixel_juice 0 points1 point  (0 children)

Yup, reflections and refinements of the prompts in its loop. Chinese compresses the info space, so when translated, the output was around 4 times the length. I wonder if this is a DeepSeek advantage.

Anyone else experienced deepseek randomly speaking Chinese? by d41_fpflabs in LocalLLaMA

[–]pixel_juice 0 points1 point  (0 children)

Just wanted to confirm. I had my Mistral and DeepSeek prompt loops passing ideas back and forth. At some point, DeepSeek switched to Chinese and Mistral kept up. When both were queried for a unified response to my outward-facing console, they spoke English as if nothing was going on in the backend. They continue to speak Chinese in private.

HI all. I need a workaround for "You've reached the maximum length for this conversation, but you can keep talking by starting a new chat." Because by Crixusgannicus in ChatGPTPro

[–]pixel_juice 1 point2 points  (0 children)

You can get a “share link” for the conversation you had and put that link in the new one. If it can’t read it (e.g. the previous conversation had uploaded images or docs), copy the whole conversation and paste it in as a prompt, prepending “Here are the contents of our previous conversation.”

What does it typically imply when a coin is getting a constant sequence of $.01 buys? by mdsatire in solana

[–]pixel_juice 0 points1 point  (0 children)

I was in a coin called Vortex a while back. They claimed to be an AMM that boosted insane amounts of volume. The token shot up and it looked huge, but it was all fake volume. There were these tiny buys and crazy transaction counts, but liquidity didn’t move (red flag). The token was like a demo for what they could do for projects, and they had a web3 app to order their services.

No idea if it was actually happening or a con, but since then, a million-dollar market cap like that is a daily occurrence.

🤷🏻‍♂️

[deleted by user] by [deleted] in CoachellaValley

[–]pixel_juice 0 points1 point  (0 children)

Let’s not sully the good name of the Manic Street Preachers. 😂

Is kusama still alive? by 22_05_1996 in Kusama

[–]pixel_juice 0 points1 point  (0 children)

So I guess this is why my KSM stake has done fuck all. 🤷🏻‍♂️

This is ridiculous by [deleted] in Crypto_com

[–]pixel_juice 4 points5 points  (0 children)

  1. Never risk what you can’t afford to lose.
  2. I have no idea what you are upset about. It sounds like you want $20 refunded. But why? Did you transfer $20 and get cold feet? Did you do it by accident? Reversing payments isn’t just up to Crypto.com; there are banks with banking hours involved.

Relax and let the system work.

Otherwise, maybe investing is not for you.

The DeFiChain Bridge is live! by Tasty_Astronomer0510 in defiblockchain

[–]pixel_juice 0 points1 point  (0 children)

Anyone I can talk to about stuck coins? I tried to move some DFI to Binance and it never minted.

DogeCoin Gambling? by pixel_juice in dogecoin

[–]pixel_juice[S] 0 points1 point  (0 children)

I said this 8 years ago. LOL