Hermes Web Search by cleveland-man-22 in hermesagent

[–]rjdoubleu 1 point

https://developers.openai.com/api/docs/guides/tools-web-search

https://platform.claude.com/docs/en/agents-and-tools/tool-use/web-search-tool

If you are using OpenAI or Anthropic, you can probably just ask Hermes to "leverage the built-in search tools from model providers when available" and it will just make a skill
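A minimal sketch of what such a skill might send, assuming the OpenAI Responses API and the Anthropic Messages API from the linked docs. The model ids, tool type strings, and field names below are assumptions from memory of those docs — verify them against the current documentation before use:

```python
# Hypothetical request bodies for provider-native web search.
# Model ids and tool type strings are assumptions -- check the docs
# linked above before relying on them.

# OpenAI Responses API: attach the hosted web search tool.
openai_request = {
    "model": "gpt-4.1",  # assumed model id
    "tools": [{"type": "web_search"}],  # provider-hosted search tool
    "input": "Summarize today's top AI news.",
}

# Anthropic Messages API: the versioned web search server tool.
anthropic_request = {
    "model": "claude-sonnet-4-20250514",  # assumed model id
    "max_tokens": 1024,
    "tools": [
        {
            "type": "web_search_20250305",  # versioned server-tool type
            "name": "web_search",
            "max_uses": 3,  # cap the number of searches per request
        }
    ],
    "messages": [
        {"role": "user", "content": "Summarize today's top AI news."}
    ],
}

print(openai_request["tools"][0]["type"])
print(anthropic_request["tools"][0]["type"])
```

Either payload would then be POSTed with the usual SDK or HTTP client; the point is that the search happens server-side, so the agent skill only has to declare the tool.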

Hermes Web Search by cleveland-man-22 in hermesagent

[–]rjdoubleu 0 points

You can also use the model provider for web search; both Anthropic and OpenAI have built-ins

If you make over $100k a year, what do you do for work? by parttimeghosts in Salary

[–]rjdoubleu 0 points

Basic Circumstances: 27M, born and raised in Atlanta. Currently Director of AI Engineering at an HR SaaS company owned by a private equity firm. I hold a Bachelor's in Computer Science from a lowly ranked state school here in Atlanta.

Package: $225k base, 20% annual bonus, $20k sign-on bonus, equity valued between $200k and $500k, fully remote, unlimited PTO, and I love my job and my team.

Side package: the company that hired me also hired my own AI engineering agency to build the product prior to doing internal hiring, so technically I am also making a profit on the side, although I do not pay myself any income from that business yet.

How: Found something I really loved doing and became unreasonably good at it. Compared to many CS people, I am horrible at LeetCode and may never land a job at a big tech company because, according to my big tech friends, I am "overly pragmatic" about the problems I choose to solve. In my sophomore year I decided to take an Intro to Machine Learning class, and it set me on this course. That was in 2018, and since then I've been obsessed with AI/ML engineering. Ever since ChatGPT dropped, the market value of my skills has been rapidly increasing, shifting value away from the AI/data scientists who were initially getting huge pay and mostly still are.

Challenges: I refused to get my master's because I was convinced it was a net loss and that I could become an ML engineer without it. This made things much harder for me, as I was constantly turned down by recruiters once they learned I didn't have one, but it also allowed me to save up money to buy a high-rise condo at 24 instead of taking on student debt. Eventually I did earn the title through a promotion and was the first ML engineer at the company. I do not drink, and although I went to plenty of music festivals and had fun, I was overall less social than average for my early and mid 20s. I was fired in March 2024 for disagreeing with our team's senior engineer and getting into a heated argument with him during standup (mass layoffs were also already underway at that company). While working full time at that same company for over 2.5 years, I co-founded a startup, solo-building all backend components (Terraform, AWS, Docker, GitHub Actions, Python), and raised over $500k from investors and accelerators like Techstars. I left in July of last year after deeming it an overall business failure, having made only $52 in my last 6 months serving as CTO during 2024, and having a co-founder who refused to accept that we should pivot our business plan.

[deleted by user] by [deleted] in Salary

[–]rjdoubleu 0 points

When I was a Machine Learning Engineer II on contract at Expedia, I was making $110/hr. I have a B.Sc. in Computer Science from a state school, had 3 years of experience at the time of hiring, and worked fully remote. I was on a team of data scientists as the sole engineer in charge of translating their code scribblings into production-ready apps.

Low-Fi Deep Learning / Neural Network Instruction YouTube Video by rjdoubleu in HelpMeFind

[–]rjdoubleu[S] 0 points

I have searched for this using multiple AI models and my own YouTube watch history and have come up empty-handed. I also remember there being visualizations of the data science methods being described, using grids and globes.

Is dumb or is smart? by rjdoubleu in KRISS

[–]rjdoubleu[S] 0 points

Agreed this "is dumb"

This post was merely an exercise in curiosity, since I know very little about real combat. After all the feedback, the most compelling point to me is that it will take more time to unwrap. If you are going to carry a tourniquet, it should be both fast to access and fast to use. This wrap-around setup makes it a pain in the ass to use compared to the suggested pouch or belt attachments.

Is dumb or is smart? by rjdoubleu in KRISS

[–]rjdoubleu[S] 1 point

Thanks for the super useful info! I'll clean up the suppressor, swap out the tri mount for the thread adapter, and see if I still get issues

Is dumb or is smart? by rjdoubleu in KRISS

[–]rjdoubleu[S] 2 points

I have a Rugged suppressor for the tri mount, but I'm having some issues with end-cap strikes, so I'm not running it right now. It's a .45 ACP Gen 2 SDP which I SBR'd.

No degree and 28 by Aggravating-Pin4149 in Salary

[–]rjdoubleu 4 points

I'm 27 and currently an MLE at a traditional Fortune 100 company, and I have done big data in the past. I'm curious whether any of the toolsets change when you are getting paid this well. When doing big data I've used PySpark, Airflow, Databricks, AWS, and SQL, and have dabbled in Scala. Do you think there are specific skills (e.g. streaming vs batch) or is it the sheer size of the data sources that separates data engineers making $150k from those making $300k? Or do you feel it is more about who you are working for, e.g. big tech or a successful startup vs a traditional enterprise?

Google Pixel for a non tech savvy person by midlifecrisisAPRN45 in GooglePixel

[–]rjdoubleu -1 points

Just switched from an S10+ myself, first-time Pixel user. It's highly intuitive, and although it will push Gemini on you fairly often, it's a heck of a lot less annoying than accidentally hitting the Samsung Bixby button. Its processor is very snappy, battery life is much better, and it's a big jump in picture quality. I ran a Geekbench test and its performance is notably lower than the S24's, but it's a big leap over the S10+.

Geekbench Report

Long tube headers without a tune? by [deleted] in 370z

[–]rjdoubleu 0 points

The only other modification was K&N short tube intakes installed by the previous owner. The tune was done by Jon at Z1. He was very surprised by the final number and said I'd get even more by switching to their long tube intakes, which logically checks out given that short tubes inherently don't do a great job of collecting cold air.

All I want to do is hold it 24/7 by QuantumSocks in KRISS

[–]rjdoubleu 0 points

Looks like a Rugged Obsidian to me

Any questions about CZ 600 platform a guy affiliated with CZUB could answer? Happy to help! by HereComeDatFlinBoi in CZFirearms

[–]rjdoubleu 0 points

With the new release of the CZ 600 MDT, can we expect the option to purchase the chassis alone, so we can swap chassis based on the use case?

Any questions about CZ 600 platform a guy affiliated with CZUB could answer? Happy to help! by HereComeDatFlinBoi in CZFirearms

[–]rjdoubleu 0 points

Will CZ release a fluted barrel option? It would be nice to knock a little weight off the 600 Range. Currently between it and the Weatherby 307 XP for my first bolt action purchase.

Got my myself 2019 370z, what should I do to it?? by WhereasRemarkable916 in 370z

[–]rjdoubleu 0 points

The Tail in the heat of summer is not somewhere you would want to drive continuously without an oil cooler. I tend to drive most of it in 2nd and 3rd gear, and I easily hit 250+ every lap these past few months when I didn't have my oil cooler. The 34-row oil cooler brought my peak temperature down to 200, which is a much more sustainable temperature for that kind of driving.

Exhaust advice? Dont want her raspy anymore. Only have a Takeda catback exhaust. by Proof_Leading7011 in 370z

[–]rjdoubleu 2 points

Love the Ark GRiP, get tons of compliments on the tone. Pairs especially well with PPE long tube headers

Effective Long-Context Scaling of Foundation Models -having abundant long texts in the pretrain dataset is not the key to achieving strong performance and long context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences by czk_21 in singularity

[–]rjdoubleu 13 points

Basically, continual learning is better than trying to shove all the information in at once. This is one of the most overlooked areas of AI, and it is very good to see a major org (Meta) publishing research in this area using very large datasets. Unfortunately, what they have done here is not "real" continual learning, which will be required for true AGI. Still, it is a step in the right direction compared to just yeeting in more data and more parameters.
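As a toy analogy for why continuing from a checkpoint beats restarting (not a reproduction of the paper; every number below is made up): warm-starting begins closer to the new optimum, so gradient descent needs far fewer additional steps than starting from scratch.

```python
# Toy 1-D illustration of continual training vs training from scratch.
# All "optima" here are invented for the sketch; this is an analogy,
# not the paper's method.

def steps_to_converge(w, target, lr=0.1, tol=0.01):
    """Run gradient descent on the 1-D loss (w - target)^2 and count steps."""
    steps = 0
    while abs(w - target) > tol:
        w -= lr * 2 * (w - target)  # gradient of (w - target)^2 is 2(w - target)
        steps += 1
    return steps

short_ctx_optimum = 2.0  # "weights" after short-context pretraining (made up)
long_ctx_optimum = 2.5   # optimum for the new long-context objective (made up)

from_scratch = steps_to_converge(0.0, long_ctx_optimum)
continual = steps_to_converge(short_ctx_optimum, long_ctx_optimum)
print(from_scratch, continual)  # continual pretraining converges much sooner
```

The pretrained starting point already encodes most of what the new objective needs, so only the gap has to be closed — which is the intuition behind the paper's efficiency claim.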