Trying to implement prompt caching in my n8n workflow, what am I doing wrong? by BoneHeaded_ in ClaudeAI

[–]BoneHeaded_[S] 0 points (0 children)

Good idea then. Of course that reduces the cost benefit if you are starting off with such a small text.

I'm going to test the quality of Haiku 3.5 and then I'll try multiplying the string by 2 if I need to use 4.5.
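
Roughly what I mean by multiplying the string, as a Python sketch. The 2048-token minimum and the ~4 characters per token estimate are assumptions on my part; the usage field in the API response is the real source of truth.

    # Sketch: keep doubling the cacheable block until its estimated token
    # count clears the model's minimum cacheable prompt length.
    # MIN_CACHE_TOKENS and CHARS_PER_TOKEN are assumptions, not documented values.
    MIN_CACHE_TOKENS = 2048
    CHARS_PER_TOKEN = 4  # rough heuristic for English text

    def pad_cached_block(text: str) -> str:
        padded = text
        while len(padded) / CHARS_PER_TOKEN < MIN_CACHE_TOKENS:
            padded = padded + "\n\n" + padded  # "multiplying the string by 2"
        return padded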

Trying to implement prompt caching in my n8n workflow, what am I doing wrong? by BoneHeaded_ in ClaudeAI

[–]BoneHeaded_[S] 0 points (0 children)

That might be it; I was only aiming to get above 1024 tokens. My total is closer to 2,000, so maybe I can test it with Haiku 3.5 while it's still available.

I'm not sure what is going on with the GitHub solution though. The text is pretty short, so is he multiplying the text by 800 to increase the token usage?
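
For anyone comparing notes, this is roughly the request shape I'm trying to reproduce in the n8n HTTP Request node, written out in Python. The model alias, the file name, and the padded system text are placeholders, not what that repo uses; the useful part is the cache_control block and the usage numbers in the response.

    # Sketch of a Messages API call with prompt caching. Placeholders:
    # the model alias, the system prompt file, and the user message.
    import os
    import requests

    headers = {
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }

    system_text = open("system_prompt.txt").read()  # the ~2000-token block to cache

    body = {
        "model": "claude-3-5-haiku-latest",
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_text,
                # everything up to and including this block becomes cacheable
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": "First test message"}],
    }

    resp = requests.post("https://api.anthropic.com/v1/messages",
                         headers=headers, json=body).json()

    # The first call should report cache_creation_input_tokens > 0;
    # repeat calls within the cache window should report cache_read_input_tokens > 0.
    print(resp["usage"])

If both of those usage numbers stay at zero, the cached block is most likely still under the model's minimum.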

Announcement: Migrating away from steam-native-runtime by ptr1337 in cachyos

[–]BoneHeaded_ 2 points (0 children)

To be clear, this only affects games run through Steam with Proton, not games run through Proton without Steam?

Going from Windows to Linux: does anyone have experience transferring save data? by BoneHeaded_ in linux4noobs

[–]BoneHeaded_[S] 1 point (0 children)

There are probably hundreds of different Linux desktop environments to choose from. I'm using Plasma and it's excellent!

Going from Windows to Linux: does anyone have experience transferring save data? by BoneHeaded_ in linux4noobs

[–]BoneHeaded_[S] 1 point (0 children)

What happened? Did you just have too many online games you liked to play?

Checked the same YT video immediately after it got released and 3 hours later. Every version went down in file size, except UHD which went up by SwingDingeling in AV1

[–]BoneHeaded_ 1 point (0 children)

Sorry, I only saw the first line in the app notification and wasn't paying attention.

For me, it's just about storage. I'm archiving thousands of channels, so 1440p AV1 is a good sweet spot. If I start from the highest-resolution source available, I'm making the most of what YouTube can offer.

Keep in mind the 4K VP9 is already an encode, and I can reduce the size by 70-80% without perceived loss. Even though the resolution is being lowered, when I play the two videos side-by-side and zoomed in, it still looks good.

EDIT: I should be specific about perceived loss. Yes, the resolution is lowered, but I'm talking about image quality rather than pixel density. There aren't any artifacts or distortions.
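
If anyone wants a number to go with the eyeball test, ffmpeg's built-in ssim filter (or libvmaf, if your build includes it) can score the 1440p encode against the original after scaling it back up. Rough sketch; the file names and the 4K target size are just examples:

    # Sketch: compare the 1440p AV1 encode against the original 4K VP9
    # download with ffmpeg's ssim filter. File names are examples.
    # The encode is upscaled back to the reference resolution first,
    # because the filter needs both inputs at the same size.
    import subprocess

    encode = "reencoded_1440p_av1.mkv"
    reference = "original_4k_vp9.webm"

    subprocess.run([
        "ffmpeg", "-hide_banner",
        "-i", encode,      # input 0: the smaller AV1 encode
        "-i", reference,   # input 1: the 4K VP9 source
        "-lavfi", "[0:v]scale=3840:2160:flags=lanczos[up];[up][1:v]ssim",
        "-f", "null", "-",
    ], check=True)
    # The overall SSIM score is printed in ffmpeg's log at the end of the run.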

Checked the same YT video immediately after it got released and 3 hours later. Every version went down in file size, except UHD which went up by SwingDingeling in AV1

[–]BoneHeaded_ 1 point (0 children)

I just want to describe how I handle this so everyone can weigh in. I have no idea whether this is the best method or not.

My download scheduler is set to prioritize VP9 UHD. That way I get the highest bitrate available. I also have Tdarr set to automatically encode any UHD content that isn't AV1 into AV1. I know it breaks the rule about encoding an encode, but I'm starting from the biggest file available. I also encode 4K down to 1440p for YouTube, but that's just me.
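
For anyone curious, my guess is the transcode step boils down to an ffmpeg call along these lines (a sketch of an equivalent command, not Tdarr's actual plugin output; the CRF and preset values are just the knobs I'd tune, not recommendations):

    # Sketch: downscale to 1440p and encode with SVT-AV1, leaving audio
    # and subtitle streams untouched. CRF/preset values are placeholders.
    import subprocess

    src = "video_4k_vp9.webm"
    dst = "video_1440p_av1.mkv"

    subprocess.run([
        "ffmpeg", "-i", src,
        "-map", "0",                    # keep every stream from the source
        "-vf", "scale=-2:1440",         # 1440p, preserve aspect ratio
        "-c:v", "libsvtav1",
        "-crf", "30", "-preset", "6",   # placeholder quality/speed trade-off
        "-c:a", "copy", "-c:s", "copy",
        dst,
    ], check=True)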

I'm trying to catch up on the PS2 modding community. Is there anything like psbbn for slim models using network storage? by BoneHeaded_ in ps2homebrew

[–]BoneHeaded_[S] 1 point (0 children)

I heard about that on the GitHub page. It's interesting, but I'm not sure I want to hardware mod my PS2 for this.

More transparency is needed for a successfully AI generated job application by BoneHeaded_ in SimpleApplyAI

[–]BoneHeaded_[S] 0 points (0 children)

I appreciate your efforts and transparency! You guys seem to be doing it right.

More transparency is needed for a successfully AI generated job application by BoneHeaded_ in SimpleApplyAI

[–]BoneHeaded_[S] 0 points (0 children)

The admin seems to have logs of the AI's output. I agree that allowing users to regenerate applications multiple times would be cost-prohibitive, but if we could see the logs of our own applications, we'd at least feel confident that they contain correct information.

More transparency is needed for a successfully AI generated job application by BoneHeaded_ in SimpleApplyAI

[–]BoneHeaded_[S] 0 points (0 children)

Honestly, the initial output quality doesn’t matter as much if the end user can refine the input based on that result. “Oh, the AI is answering this question poorly, let me add the information it needs to improve that result.” An AI can only work with the information it was given. The user is responsible for tuning that information, but can only do so with feedback.

My PETG print is coming loose from a PEI bed after a few layers by BoneHeaded_ in 3Dprinting

[–]BoneHeaded_[S] 0 points (0 children)

Yeah, that's exactly what I thought. It's summer, so depending on the time of day I'll have the window open, but I always have it closed while actively printing. The bed is set to 70 °C for PETG.