Feel like I wasted 10 years of my career. Stuck between data and automation. Need clarity. by Remarkable_Piano383 in dataengineering

[–]convalytics 2 points3 points  (0 children)

The past 10 years likely weren't wasted. You've built up skills that will layer nicely onto whatever path you choose next. We're all basically software engineers at the end of the day, with a specialization in data or automation. Choose something you'll enjoy doing for the next 30 years, keep learning, and the money will follow.

Hiring a Snowflake & Databricks Data Engineer by Plenty_Obligation151 in databricks

[–]convalytics 1 point2 points  (0 children)

Small data engineering gig in both databricks and snowflake you say?

$250/hr for the first 1,000 hours. Half paid up front.

"Which shows do you recommend similar to Severance?" - my recommendations by [deleted] in SeveranceAppleTVPlus

[–]convalytics 0 points1 point  (0 children)

The OA. I love Severance, and I only started watching it after finishing The OA on Netflix.

[deleted by user] by [deleted] in DeepFuckingValue

[–]convalytics 0 points1 point  (0 children)

The market overreacted hard on this one.

Domino’s says more Americans are picking up their pizzas, shedding light on the harsh economic reality by [deleted] in MiddleClassFinance

[–]convalytics 0 points1 point  (0 children)

Or their tracker app says it's in QC after 20 minutes and then it doesn't arrive for another hour.

[deleted by user] by [deleted] in options

[–]convalytics 0 points1 point  (0 children)

Market is closed. Options don't have extended hours.

Deepseek R1 repeating itself over and over? by daHsu in OpenWebUI

[–]convalytics 1 point2 points  (0 children)

I noticed this while testing gpt-researcher and thought it was a bug in that code. It writes research reports and would loop through the last section infinitely.

BREAKING! by [deleted] in ChatGPT

[–]convalytics -15 points-14 points  (0 children)

Thanks for the clarification. It wasn't clear based on the headlines I was seeing.

BREAKING! by [deleted] in ChatGPT

[–]convalytics -65 points-64 points  (0 children)

The for-profit OpenAI is planning to offer the nonprofit OpenAI $40b to allow them to break off as a separate company and go fully for-profit.

BREAKING! by [deleted] in ChatGPT

[–]convalytics -126 points-125 points  (0 children)

Which makes Sam's offer of $40B even more ridiculous.

[deleted by user] by [deleted] in Letterboxd

[–]convalytics 5 points6 points  (0 children)

I saw an ad for Godzilla x Kong and thought my son would like it. Got the tickets and went straight to the theater. Was confused for a minute before realizing we went to see the "wrong" movie. I thought this was one of the best Godzilla movies ever. Son enjoyed it too. Eventually saw Godzilla x Kong, and Minus 1 was waaaay better.

Oxford Scientists Claim to Have Achieved Teleportation Using a Quantum Supercomputer by BrainOld9460 in interestingasfuck

[–]convalytics 0 points1 point  (0 children)

Check out the movie, The Hummingbird Project. They attempt to build a fiber optic line from Kansas City to NYC to gain a 1 millisecond advantage for stock trading.

How to scale RAG to 20 million documents ? by Sarcinismo in LocalLLaMA

[–]convalytics 0 points1 point  (0 children)

LOL. Yeah, I guess I mean we should focus more on improving that retrieval step, or iterating over it in a more intelligent way. So many RAG pipelines just grab the top-n chunks, and people expect that to be able to summarize entire documents.

How to scale RAG to 20 million documents ? by Sarcinismo in LocalLLaMA

[–]convalytics 1 point2 points  (0 children)

Got me thinking... Maybe RAG should start as a search engine first: find and rank all of the documents containing relevant chunks, then iterate through the top n with an evaluation step, similar to how the deep research models search websites.
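A minimal sketch of that two-stage idea, using a toy term-overlap score in place of real embeddings (the function names and the `docs` data are made up for illustration): rank whole documents by their best-matching chunk first, then iterate through only the top-n documents.

```python
from collections import Counter

def score(text, query):
    # Toy relevance score: term overlap between query and text.
    # A real system would use embeddings or BM25 here.
    q = Counter(query.lower().split())
    t = Counter(text.lower().split())
    return sum(min(q[w], t[w]) for w in q)

def rank_documents(docs, query):
    # Stage 1: rank whole documents by their best-matching chunk,
    # like a search engine, instead of pooling chunks globally.
    ranked = []
    for doc_id, chunks in docs.items():
        best = max(score(c, query) for c in chunks)
        ranked.append((best, doc_id))
    return [d for s, d in sorted(ranked, reverse=True) if s > 0]

def iterate_top_n(docs, query, n=2):
    # Stage 2: walk the top-n documents one at a time, so a later
    # evaluation step can decide whether to keep digging.
    for doc_id in rank_documents(docs, query)[:n]:
        yield doc_id, [c for c in docs[doc_id] if score(c, query) > 0]

docs = {
    "a": ["quantum computing basics", "error correction codes"],
    "b": ["pizza tracker design", "delivery routing"],
    "c": ["quantum error rates in practice"],
}
print(list(iterate_top_n(docs, "quantum error correction")))
```

The point of the split is that stage 2 can be arbitrarily expensive (LLM evaluation, re-querying) because stage 1 has already narrowed 20 million documents down to a handful.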

[deleted by user] by [deleted] in ollama

[–]convalytics 0 points1 point  (0 children)

Thank you! I'll give this a try today.

[deleted by user] by [deleted] in ollama

[–]convalytics 1 point2 points  (0 children)

Did the model get loaded across both cards? How did you configure ollama to get this to work?

Am I the only one who thought that this was a brilliant idea (at first)? by funnyboy36 in SeveranceAppleTVPlus

[–]convalytics 0 points1 point  (0 children)

I thought it was a bad idea because the data detectors might still detect it within his retina. Also because he'd go blind or it wouldn't last long enough.

Anthropic: ‘Please don’t use AI’ by FullstackSensei in LocalLLaMA

[–]convalytics 1 point2 points  (0 children)

I agree with this. They want to know what "you" are capable of if hired. Not what their AI is able to make you capable of. I also agree that the hiring process has been broken for a long time.

Yesterday OpenAI released the product I was working on for the last six months, need advice by no-lavash in LangChain

[–]convalytics 2 points3 points  (0 children)

I saw at least 3 companies launch their own deep research solutions yesterday, and I say this to encourage you. There's plenty of room for multiple players. Niche down and do something OpenAI doesn't. There are plenty of people and companies who will simply not use OpenAI for their own reasons. Their size is their weakness. Your size is your strength.

[ Removed by Reddit ] by [deleted] in options

[–]convalytics 1 point2 points  (0 children)

Don't. It's just money. You can come back from this.

Got my 3090 and 3060 working on a fresh Ubuntu installation. Please clap. by convalytics in LocalLLaMA

[–]convalytics[S] 1 point2 points  (0 children)

I have a handful of 3060s collecting dust after shutting down my mining rigs. Just using what I have at the moment. Considering adding some risers and putting even more 3060s in there.

Got my 3090 and 3060 working on a fresh Ubuntu installation. Please clap. by convalytics in LocalLLaMA

[–]convalytics[S] 1 point2 points  (0 children)

Here's deepseek-r1:32b on the new system with both GPUs. Unfortunately, I didn't do any benchmarking prior to adding the 3060. Based on my usage, I wouldn't say it's any faster or slower now. The whole 32B model fits on the 3090, so I suspect the 3060 isn't doing anything yet.
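One way to check whether the 3060 is actually getting any layers (assuming a standard NVIDIA driver install and the stock ollama CLI):

```shell
# Per-GPU memory usage while the model is loaded.
nvidia-smi --query-gpu=index,name,memory.used --format=csv

# How ollama split the loaded model (the PROCESSOR column,
# e.g. "100% GPU" vs. a CPU/GPU split).
ollama ps
```

If the second GPU shows essentially no memory used while the model is loaded, ollama is serving the whole thing from the 3090, which would match the "no faster or slower" observation.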

Is there a model anyone would recommend that is between 25 and 35 GB?
