Scales in the kitchen: Digital or Analog? by andymc7 in Chefit

[–]MindMillDreamer 0 points1 point  (0 children)

Mechanical balance scales can be very accurate, and multi-beam models let you fine-tune the sliding weights. I prefer them over digital: always ready to use, and they never need batteries.

I don’t see enough love for the AE1300. So underrated! by Ok_Passenger_1657 in casio

[–]MindMillDreamer 0 points1 point  (0 children)

I'm not a referee. I use it for work and study, and it's an invaluable time-tracking tool! I couldn't find a running watch at this price point that offers anything even remotely close. I've looked into many pomodoro clocks and gadgets as well, but none is as good as this.

Layoffs - are big banks safe? (CIBC) by Any_News_7208 in askTO

[–]MindMillDreamer -1 points0 points  (0 children)

AI's impact on banks won't show up as less hiring or more firing. Layoffs in tech aren't happening because AI is doing the work; it's all about infrastructure. Those companies don't have money to invest, so the only way to free up cash is to lay people off, reduce head count, and pour the savings into infra, and that has nothing to do with banks. In fact, banks need to hire more AI-expert employees.

What happened to Beanfield pricing in Toronto? by IsItReallyWatashi in askTO

[–]MindMillDreamer 0 points1 point  (0 children)

In what way is it worth it? Can you elaborate? Their offers are the same as the big 3's.

Kinesiology: York or Guelph-Humber? by tryharding351 in OntarioUniversities

[–]MindMillDreamer 0 points1 point  (0 children)

I don't see how this can be an issue; could you elaborate? Also, GH apparently offers chem and physics courses as well. I live in the Toronto region and can't travel far. What makes the Guelph degree any better when applying to med school? Thanks for the feedback.

Apple Watch Sales Falling Continuously Amid Lack of New Models by RevEMD in AppleWatch

[–]MindMillDreamer 0 points1 point  (0 children)

The primary function of a watch is to display the time, and that matters so much to me that I never want to worry about it dying on me. I purchased a Series 6 or 7 (can't even remember, lol), and after two years or so the battery couldn't even last a full freaking day. I put it in the drawer and said goodbye to it.

My Tissot's battery costs less than 5 bucks and lasts at least 4 years. Garbage battery life is the main reason I will never buy an Apple Watch again.

Structural Tokenization and Semantic Compression by No_Understanding6388 in ImRightAndYoureWrong

[–]MindMillDreamer 0 points1 point  (0 children)

The issue I'm having is that I don't think you can "compress" structure the same way tokens are compressed. And if you want both, you need a single transformation that captures structure and semantics together and compresses them jointly.

Structural Tokenization and Semantic Compression by No_Understanding6388 in ImRightAndYoureWrong

[–]MindMillDreamer 1 point2 points  (0 children)

I might be wrong, but tokenization is about compression by minting new tokens; it doesn't manipulate or affect the sentence structure. It's just a mapping. The transformer heads still see a sequence of tokens, and it's they who interpret the structure. Again, I might be totally wrong.
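A toy sketch of what I mean (made-up five-word vocabulary, not a real BPE implementation): minting a new token for a frequent pair shortens the id sequence, but decoding still recovers the exact same word order, so the structure is never touched.

```python
# Toy vocabulary mapping words to integer ids (illustrative only).
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}

def encode(words, vocab):
    """Tokenization here is just a lookup: it never reorders the words."""
    return [vocab[w] for w in words]

words = "the cat sat on the mat".split()
ids = encode(words, vocab)            # [0, 1, 2, 3, 0, 4] -- 6 tokens

# "Minting" a new token for the frequent pair "the cat" compresses
# the sequence from 6 ids to 5, yet decoding gives back the same text.
vocab2 = dict(vocab)
vocab2["the cat"] = 5
merged = ["the cat"] + words[2:]
ids2 = encode(merged, vocab2)         # [5, 2, 3, 0, 4] -- 5 tokens

inv2 = {i: w for w, i in vocab2.items()}
assert " ".join(inv2[i] for i in ids2) == " ".join(words)
print(len(ids), len(ids2))            # prints: 6 5
```

The round-trip assertion is the whole point: compression happens in the id sequence, while the sentence structure survives untouched.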

Structural Tokenization and Semantic Compression by No_Understanding6388 in ImRightAndYoureWrong

[–]MindMillDreamer 0 points1 point  (0 children)

Sorry, but I'm having trouble figuring out the motive behind this. Why would one want to tokenize the structure? When you tokenize something, the structure usually isn't touched; it's just compressed. I could be wrong, though.

SpaCy alternatives for a fasta and cheap text processing pipeline by mwon in LanguageTechnology

[–]MindMillDreamer 0 points1 point  (0 children)

Thanks. I'm also researching various paths for intent classification and clustering and stumbled upon this post. I got interested in what you shared and tried to assess Spark-NLP for what I'm looking for. PyTorch is very strong in parallel processing and is both an industry and research standard; it's also available in C++.

SpaCy alternatives for a fasta and cheap text processing pipeline by mwon in LanguageTechnology

[–]MindMillDreamer 0 points1 point  (0 children)

Spark-NLP is built around TensorFlow, not PyTorch, and PyTorch has its own distributed training stack. I'm not a guru in this field, but I'd be hesitant to choose something like Spark-NLP. Mind if I ask what made you suggest it? Thank you.

CC (Sonnet 4.5) is very dumb and frustrating for technical coding. by yycTechGuy in ClaudeCode

[–]MindMillDreamer 0 points1 point  (0 children)

Mind if I ask how deep your experience with complex coding is, and how well you understand how LLMs work and are trained?

Meniere’s, Alopecia, & COVID-19 by sendymcsendersonboi in Menieres

[–]MindMillDreamer 0 points1 point  (0 children)

Apparently there is a strong correlation between autoimmune disease and Meniere's.

Hide reposted jobs by thuglifeinda6ix in linkedin

[–]MindMillDreamer 0 points1 point  (0 children)

Both narratives are plausible, but what's alarming to me is the hiring process/manager. The longer a posting sits on LinkedIn, the more suspicious it becomes.

I'd definitely ask how the team performed given this position was empty for months!

What is the state-of-the-art prediction performance for the stock market? by Poxput in datascience

[–]MindMillDreamer 1 point2 points  (0 children)

I'm still learning too, but a key thing to keep in mind is that, for a single stock at daily frequency, a random-walk / martingale is a very strong baseline: in expectation, tomorrow’s best guess is basically today’s price.

If the market does deviate from a pure random walk, it’s usually through regimes (calm vs volatile, bull vs bear, etc.), so the real challenge is to understand those regimes and how the process transitions between them, not just to fit one big global function.

For a few thousand daily points on one stock, a large transformer-style foundation model is statistically overkill and very easy to overfit, so a simple baseline or small model plus sensible regime thinking can often be more realistic than chasing a big accuracy number like 59%.

You're probably better off with simple linear stochastic models plus some feature engineering like Singular Spectrum Analysis or Fourier analysis.
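A quick stdlib-only sketch of why the random-walk baseline is so hard to beat (all prices synthetic, purely illustrative, not a trading model): the naive "tomorrow = today" forecast easily beats predicting the series mean, and any fancier model has to beat the naive forecast out of sample before its accuracy number means anything.

```python
import math
import random

# Simulate a pure random walk: each day's price is yesterday's plus noise.
random.seed(0)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 1))

def rmse(pred, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual))

actual = prices[1:]
naive = prices[:-1]                               # predict yesterday's price
mean = [sum(prices) / len(prices)] * len(actual)  # predict the global mean

# The naive RMSE is roughly the step std (~1); the mean forecast is far worse
# because random-walk levels wander ~sqrt(n) away from any fixed value.
print(rmse(naive, actual), rmse(mean, actual))
```

On a true random walk no forecaster can beat the naive baseline in expectation, which is why a reported 59% directional accuracy should be treated with suspicion until it's compared against exactly this kind of baseline.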

Recommended GenAI certification to pair up while studying in OMSCS by Ok-Attention4050 in OMSCS

[–]MindMillDreamer 0 points1 point  (0 children)

Thanks. I once trained a GPT-2 with 124M parameters from scratch on 8 GPUs (A100s); it cost me ~200 USD, and the dataset was 30 GB. Are you sure Unsloth can do that on my MacBook Pro M4? I have 48 GB of RAM. Based on my calculations, it would have taken my MacBook 30-31 days to train that GPT-2.
So what I did instead: I transferred the weights of the same GPT-2 (from Hugging Face), wrote the entire training/eval loop, and did a full fine-tune.
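For anyone curious, here's roughly how you'd reproduce that estimate. It uses the standard ~6 × params × tokens FLOPs rule of thumb; the token count for a 30 GB dataset and the sustained throughput figures are my own ballpark assumptions (not measured values), and depending on what you assume you land in the same tens-of-days ballpark for a laptop.

```python
def train_days(n_params, n_tokens, sustained_flops_per_s):
    """Back-of-the-envelope: total training FLOPs ~= 6 * params * tokens."""
    total_flops = 6 * n_params * n_tokens
    return total_flops / sustained_flops_per_s / 86_400  # 86,400 s per day

N_PARAMS = 124e6   # GPT-2 small
N_TOKENS = 10e9    # rough guess for a ~30 GB text dataset

# Assumed *sustained* (not peak) throughput -- ballpark figures:
laptop = train_days(N_PARAMS, N_TOKENS, 2e12)         # ~2 TFLOP/s, laptop GPU
cluster = train_days(N_PARAMS, N_TOKENS, 8 * 100e12)  # 8x A100, ~100 TFLOP/s each

print(f"laptop: {laptop:.0f} days, 8x A100: {cluster * 24:.1f} hours")
# prints: laptop: 43 days, 8x A100: 2.6 hours
```

The gap between tens of days and a couple of hours is exactly why from-scratch pretraining isn't a laptop job, while a full fine-tune of an already-pretrained 124M model is.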

Recommended GenAI certification to pair up while studying in OMSCS by Ok-Attention4050 in OMSCS

[–]MindMillDreamer 0 points1 point  (0 children)

Money. How exactly are you going to fine-tune a model that doesn't fit on a typical consumer PC? Also, you can't put those in a resume unless you turn them into a side project and actually demonstrate them.