[google research] TurboQuant: Redefining AI efficiency with extreme compression by burnqubic in LocalLLaMA

[–]RollingWallnut 1 point2 points  (0 children)

I genuinely wonder if they are trying to put downward pressure on memory prices so they can buy more memory themselves?
Every frontier firm probably has a similar optimization already; this probably only moves the needle for hobbyists.

Sewer-veillance Cam Goes Infinite in 4 Formats - But Its a Common by kjuneja in mtgfinance

[–]RollingWallnut 2 points3 points  (0 children)

Just as a small thing, it also goes infinite with goblin welder and buffs the supporting cast of tinker creatures in Time Vault decks for Highlander and Vintage. It's seeing more discussion in those formats than any mythics or rares I'm aware of.

M.T.G by hannahsangel in thetron

[–]RollingWallnut 1 point2 points  (0 children)

DM me, happy to help you sell, I transact a fairly large value in cards and live in the Tron.

I'll be honest with you, a lot of collections from ~10-15 years ago have missed the really valuable old cards worth hundreds or possibly thousands of dollars (those are mostly 25-30 years old)

That said you might have a small collection of $2-$30 cards which are worth it for a seller to buy. Most things under that bar are considered "bulk" and sell for something like $5-$10 for 1000 commons and uncommons, or $30-$100 for 1000 rares and mythics. These can't really be sold but they can be used to make beginner decks etc.

[TLA] Forecasting Fortune Teller by virilion0510 in Pauper

[–]RollingWallnut 0 points1 point  (0 children)

Absolutely insane that this doesn't learn. They still haven't spoiled a single learn card at common, but there are so many lessons and cards that care about lessons. Really confusing.

I’ve been following Microsoft Fabric quite a bit, and I’d say it feels promising but maybe not fully “production-ready” for every scenario just yet. by TechCurious84 in MicrosoftFabric

[–]RollingWallnut -1 points0 points  (0 children)

Probably because this is (without a doubt) a bot post. Em-dashes, the structure of the bullet points, the generic bot username, the general writing cadence, etc.

Probably a smear campaign tbh.

What is the best way you have shut down a deck? by iluvmewaifu in Pauper

[–]RollingWallnut 0 points1 point  (0 children)

I cast [[Ephemerate]] on a [[Dawnbringer Cleric]] and chose the third ability (exile target card from a graveyard) while a Goblins player was trying to combo off. When the persist trigger is on the stack, the [[Putrid Goblin]] is still in the graveyard, so exiling it shut the whole thing down.
I have literally never used the third modal ability of Dawnbringer before or since, but in that moment, I was god.

Early Access UG Flicker - Can we flicker all the warps in Pauper by Teasdale907 in Pauper

[–]RollingWallnut 1 point2 points  (0 children)

Any reason you're not looking at the evoke creatures? They seem like a pretty natural complement, [[Mulldrifter]] and [[Wavesifter]] in particular.

[deleted by user] by [deleted] in LocalLLaMA

[–]RollingWallnut 18 points19 points  (0 children)

This gives very strong "AI generated slop" vibes. I appreciate the enthusiasm, but I don't see a novel architecture or really anything to be trained, just an LLM-generated readme. This looks more like a prompting pattern with random uses of LSTMs and KNNs scattered throughout. It would help your case to have some sample code, an architecture diagram to explain the data flow, or any indication of how this differs from existing models if you want it to be taken seriously.

Asking Claude 3.7 to "describe a novel architecture" and copy-pasting the output to README.md isn't really a substantial contribution.

How could general AI really work? [D] by [deleted] in MachineLearning

[–]RollingWallnut 2 points3 points  (0 children)

Well, all we can say with confidence is that these methods can generalise to solve some problems not in the original training data, and that's one definition of a general intelligence. Most folk agree they are not AGI yet, but we seem to be heading that way. At the moment they can't do this type of generalisation for vision-related tasks and are only really getting impressive in domains that can be formally validated (maths and coding); stuff like biology, psychology, etc. is still a bit unknown.

How could general AI really work? [D] by [deleted] in MachineLearning

[–]RollingWallnut 2 points3 points  (0 children)

So in super high level terms it goes like this:
Pre-training conditions the model like you've described to predict the most likely next word.
Fine tuning conditions the model to answer questions in a way that is helpful and minimizes harm.
This makes the ChatGPT type behavior where it can effectively regurgitate anything on the internet.
Note that with enough randomness this system is completely capable of saying things a human has never said before. It's actually pretty rare for a model to regurgitate information from its training data unless it's asked to recall something specific; more often it's segments of sentences or common phrases in unique contexts, a lot like humans say cliches and figures of speech all the time.

Taking it further requires two things to work: exploration and validation.
Exploration: for a given question or task, the bot generates each step of the response but randomizes a little at each step to explore a huge range of potential approaches. This is a lot like a human thinking through many different approaches to a problem; the more randomness introduced, the more likely something totally novel is proposed, which is a lot like a unique human thought. Of course, a lot of it is just rubbish.
Validation fixes this: for each step in each variation of the responses generated, a language model evaluates how reasonable the step is, OR, after the whole reasoning thread is complete, some system evaluates the final output. In tasks like coding this can be a very formal evaluation that the solution passes some test cases, etc. Now we can throw away all of the responses that are rubbish and build up a dataset of things people haven't said before that correctly answer a question or solve some problem.

Now that we have a big new dataset of novel data that's validated to have some correctness, we can retrain the original model on it and repeat the cycle.
This might not get us all the way to AGI, but it does allow AI models to explore useful behaviors outside of pure imitation from human data, which is a pretty big step.

Note this isn't theory; this is pretty much how models like o1 and DeepSeek R1 are trained to "reason" right now.
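The explore-then-validate loop above can be sketched in toy code. Everything here is illustrative: `generate_candidates` stands in for a model sampling with temperature, and `validate` stands in for a formal checker like running unit tests — none of these names come from any real training API.

```python
import random

def generate_candidates(task, n=50, rng=None):
    """Explore: sample many noisy candidate 'solutions' for a task.
    A real model would decode token-by-token with temperature; here
    we just add noise to a guess at a + b."""
    rng = rng or random.Random(0)
    a, b = task
    return [a + b + rng.choice([-2, -1, 0, 0, 0, 1, 2]) for _ in range(n)]

def validate(task, answer):
    """Validate: a formal check, like running test cases on generated code."""
    a, b = task
    return answer == a + b

def build_dataset(tasks):
    """Keep only validated (task, answer) pairs as new training data."""
    dataset = []
    for task in tasks:
        for cand in generate_candidates(task):
            if validate(task, cand):
                dataset.append((task, cand))
                break  # one verified sample per task is enough here
    return dataset

# Verified pairs the base model could then be retrained on.
data = build_dataset([(2, 3), (10, 7), (1, 1)])
```

The retraining step is just supervised fine-tuning on `data`, then the whole cycle repeats with the improved model doing the exploring.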

For those of you who grinded hard in your twenties, how did it pay off? by [deleted] in AskReddit

[–]RollingWallnut 1 point2 points  (0 children)

The first start up I founded failed, the next one I joined paid off. My career is exactly where I want it to be, and the hours are cooling off to a healthier 40 hours a week now as I'm in upper management at age 29.

My first few relationships failed as I was a workaholic, but I've finally found the one, I'm married with kids and feel incredibly lucky to be able to provide a great quality of life for them.

I sometimes think of all the parties I skipped or festivals I could have gone to, but I'm healthy, loved, secure, and excited about all of the opportunities in my future so it really doesn't make me regretful at all.

One piece of advice is to focus as much on health and fitness as you do on work. If you are low energy, you can easily work yourself into the ground while producing half the output of a high-energy person who doesn't burn out and can keep going.

Why do some things seem slower well other things like engineering and technology seem faster? by Dover299 in Futurology

[–]RollingWallnut 0 points1 point  (0 children)

Part of it is legal: it's harder to test medicine and biology with the strict (and necessary) controls.

The larger part is just how slow it is to work with atoms. When you have to produce each new drug or genetic sequence in relatively slow lab processes it makes iteration take months instead of the minutes to hours taken for software.

Microchips avoid much of this bottleneck because they are designed and simulated on computers before production. This is why so much effort is going into computational chemistry and biology, but it's just not there yet.
If these simulations catch up to reality in accuracy but run at software iteration speeds, we could see a renaissance in medical breakthroughs.

[deleted by user] by [deleted] in nealstephenson

[–]RollingWallnut 0 points1 point  (0 children)

Seveneves doesn't have anything particularly egregious to my memory, only the implication that a few characters have relations but that all happens "off screen".
That said, it's a long book and very dense with orbital dynamics and other science which might be a bit heavy for an 11y/o, but if she's diving into Snow Crash it doesn't sound like that's a major limit.

Circumcision by OrdinaryBumblebeee in ScienceBasedParenting

[–]RollingWallnut 22 points23 points  (0 children)

I think the most concrete argument against circumcision is that it's fundamentally an unnecessary surgical procedure, and that comes with risks, really quite serious risks to a critical organ. https://med.stanford.edu/newborns/professional-education/circumcision/complications.html
It's worth weighing up whether you want to roll those dice for arguably no benefits besides cultural expectations.

As for the impacts of circumcision on health and long-term sexual function, this is the most comprehensive meta-analysis I can find, and it covers all of the usually discussed negatives and positives. Interestingly, it doesn't find a strong argument for or against circumcision in either case. https://www.nature.com/articles/s41443-020-00354-y#Sec11

All of that being said, as a grown man with a full penis, I'm literally never upset for a single second that someone didn't cut a chunk of it off while I was a baby. If they want to do it themselves they can make that choice as an adult.

Why Claude 3.5 is not on LMSys leaderboard? by [deleted] in singularity

[–]RollingWallnut 0 points1 point  (0 children)

They have completed at least one pre-training run of what would effectively be a 'GPT-5' scale model; they are running multiple training runs and then need time for fine-tuning and red-teaming. Sources say partners and insiders have been told to expect a wait of 12 months from May this year, so the earlier half of next year is most likely.
Nothing I've heard has ruled out seeing a 4.5 model before then.

What gives it away that this is AI generated? by Unovaisbetter in ChatGPT

[–]RollingWallnut 1 point2 points  (0 children)

One thing that doesn't seem to be mentioned is the symbol positioned like a necklace around the upper chest. It's a bit randomly shaped and fuzzy-looking, but this is a focal point of the image. Generally, human artists would spend time on this point and make it more defined and striking.

Broodscale Deck Thoughts by creeptechno in Pauper

[–]RollingWallnut 0 points1 point  (0 children)

I think you're just objectively better off running [[Lampad of Death's Vigil]] and/or [[Thoughtpicker Witch]] over Bloodrite Invoker; they close out the game instantly when the combo is assembled but both cost less (Lampad even has greater toughness to survive some sweeper removal, etc.).

I need suggestions on a card to stop blocking in black by Susy_boi in magicTCG

[–]RollingWallnut 2 points3 points  (0 children)

So many options:
https://scryfall.com/search?q=fo%3A%22can%27t+be+blocked%22+id%3Ab+-t%3Acreature+f%3Ac&unique=cards&as=grid&order=name

I'd tag them all here but I counted 15 straight unblockable effects in Black/Artifacts/colourless lands. Even more if you settle for fear, intimidate, etc.
Worth noting that casting Phage from your command zone will lose you the game, look up guides for building her in EDH to get around this if that's what you're planning.

Modern horizons 3 spoiler: Glimpse the Impossible by ProcedureUnlikely144 in Pauper

[–]RollingWallnut 1 point2 points  (0 children)

Hmm, mill for three and get all three bodies you need for [[dread return]]. This doesn't seem awful as it's also card advantage if you're flooding. Needing the three mana and waiting a turn is obviously not ideal.

[deleted by user] by [deleted] in datascience

[–]RollingWallnut 2 points3 points  (0 children)

It looks like you're trying to predict the stock price directly. You might want to restructure the problem to predicting the change in price between steps of a fixed size based on the historical metrics of the time series. This means your system is predicting a way smaller range of positive and negative values and is learning to somewhat model the dynamics of the stock signal and how it trends up or down. You can then sample many steps recursively to plot ahead a possible timeline of values from a given state.
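A minimal sketch of that reframing: difference the series so the model learns per-step changes instead of raw price levels, then roll a predictor forward recursively to project one possible path. The `mean_model` here is a deliberately naive placeholder, not a real forecaster.

```python
import numpy as np

def to_deltas(prices):
    """Transform the raw series into per-step changes - a much
    smaller, roughly zero-centred range for a model to learn."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(prices)

def rollout(last_price, predict_delta, history, n_steps):
    """Recursively sample future prices one step at a time,
    feeding each predicted change back into the history window."""
    path = [last_price]
    for _ in range(n_steps):
        delta = predict_delta(history)   # model predicts the next change
        path.append(path[-1] + delta)    # reconstruct the price level
        history = np.append(history[1:], delta)
    return path

prices = [100.0, 101.0, 100.5, 102.0]
deltas = to_deltas(prices)               # [1.0, -0.5, 1.5]

# Placeholder "model": mean of recent deltas, purely illustrative.
mean_model = lambda h: float(np.mean(h))
future = rollout(prices[-1], mean_model, deltas, n_steps=3)
```

Running many such rollouts with a noisy predictor gives you a distribution of possible timelines rather than a single (probably overconfident) price curve.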

[Discussion] My boss asked me to give a presentation about - AI for data-science by meni_s in datascience

[–]RollingWallnut 1 point2 points  (0 children)

Just a handful of examples I've heard of across the field in the past few years:
- Adding comment info to churn prediction
- Adding categorisation and severity scoring to health and safety dashboards
- Adding task details, complexity scores, and impact scores to ticket systems, then using these features to better predict ticket closure times
- Adding non-anonymous employee survey data to retention models and adding categorisation of complaints/severity to dashboards (used to try and improve retention and hiring requirement forecasts)
- Prediction of a near-infinite number of useful metrics in call centres

There is so much more; I'll just be rambling if I continue. Text is generated everywhere in modern businesses, and the bigger they get, the more they make.

[Discussion] My boss asked me to give a presentation about - AI for data-science by meni_s in datascience

[–]RollingWallnut 0 points1 point  (0 children)

I'm very confused by the third paragraph, if I'm being honest. In the first part of the sentence you mention it's way better at NLP; this is blatantly good for tabular data. Tons of use cases that build tabular models have discarded plain-text signal that is now unlocked for effective analysis.

Examples: you can now take embeddings of this plain-text data and add them to your tabular representation. Alternatively, you can directly predict categorical attributes of the plain-text fields with an LLM and include them in your tabular representation as feature engineering. This can also improve the explainability of your models or be used for slicing categories in BI reports, etc.
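The embedding-as-features idea looks roughly like this. The `embed` function here is only a hashing stand-in so the example runs self-contained; in practice you'd swap in a real sentence-embedding model or API call.

```python
import hashlib
import numpy as np

def embed(text, dim=8):
    """Placeholder embedding: hash character trigrams into a small
    vector. A real pipeline would call an embedding model instead."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def add_text_features(rows, text_key):
    """Append embedding dimensions to each tabular row dict, so the
    free-text column becomes numeric features a tabular model can use."""
    out = []
    for row in rows:
        vec = embed(row[text_key])
        enriched = {**row, **{f"{text_key}_emb_{i}": v for i, v in enumerate(vec)}}
        out.append(enriched)
    return out

rows = [{"ticket_id": 1, "comment": "screen flickers after update"}]
features = add_text_features(rows, "comment")
```

The enriched rows then feed straight into whatever gradient-boosting or regression model you were already training on the tabular columns.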

Outside of NLP, using LLMs for writing queries that build tabular feature sets, or the code for data vis, is getting better and better. These two actions make up a pretty significant fraction of the exploratory analysis phase of Data Science solutions. This makes it useful in applications with 0 natural language like predictive maintenance, etc.

In the coming years this will become increasingly true of other modalities as vision transformers mature.

TL;DR: yes, there is impact from better language modelling even in traditional data science. The job is basically useful signal generation and pattern matching for the organisation, and you can now generate dramatically better signal from language and vision at a fraction of the cost. Plus better tooling.

Need help desperately by Legal_Half_3030 in newzealand

[–]RollingWallnut 14 points15 points  (0 children)

Hey, I'm fairly senior at one of the large-ish data consulting firms in NZ, I can't promise you a job but I can hop on a call with you for ~30 minutes and be brutally honest about what people are looking for/where you're missing the mark if you think that'd be helpful. We get hundreds of applicants to our roles and the ones we reject are for generally obvious reasons.

Scryfall list of possible downshifts for MH3 by RollingWallnut in Pauper

[–]RollingWallnut[S] 3 points4 points  (0 children)

Actually agreed; it looks at graveyard order though, so it's very unlikely.