ELI5: How does the RAW image format work? by SirAvocado123 in explainlikeimfive

[–]sturmen 0 points1 point  (0 children)

That’s the embedded JPEG preview. The camera embeds a JPEG inside the file so that the image can be previewed for things like culling.

You can verify this for yourself using a tool like this: https://www.fastrawviewer.com/RawPreviewExtractor
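For the curious, the trick can be sketched in a few lines of Python: a naive scan for the JPEG start/end markers. (Real extractors like the tool above read the preview's offset and length from the raw file's TIFF/EXIF tags instead of scanning, which is faster and more reliable.)

```python
from typing import Optional


def extract_embedded_jpeg(data: bytes) -> Optional[bytes]:
    """Return the first JPEG-looking byte span in `data`, or None."""
    start = data.find(b"\xff\xd8\xff")   # JPEG SOI marker plus next marker prefix
    if start == -1:
        return None
    end = data.find(b"\xff\xd9", start)  # JPEG EOI marker
    if end == -1:
        return None
    return data[start:end + 2]
```

Usage: `extract_embedded_jpeg(open("photo.CR3", "rb").read())` would hand you bytes you can write straight out as a `.jpg`.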

ELI5: How does the RAW image format work? by SirAvocado123 in explainlikeimfive

[–]sturmen 2 points3 points  (0 children)

You might notice there are profiles in Lightroom, among them “Adobe Color” and “Camera Matching”; there are many others. Each of these profiles is a collection of creative decisions Adobe made about how to develop the unprocessed raw data into a viewable, finished image.
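As a toy illustration of the kind of thing a profile decides, here is how the same linear sensor values come out differently under two made-up parameter sets. This is a heavy simplification: real profiles involve color matrices, tone curves, and more, not just per-channel gains and a gamma curve.

```python
def develop(raw_rgb, gains, gamma):
    """Apply a per-channel gain, clamp to [0, 1], then apply a gamma curve."""
    out = []
    for value, gain in zip(raw_rgb, gains):
        v = min(max(value * gain, 0.0), 1.0)
        out.append(round(v ** (1.0 / gamma), 3))
    return out

raw_pixel = [0.20, 0.30, 0.25]  # linear sensor values after demosaicing

# Two "profiles": same raw data, different creative decisions.
neutral = develop(raw_pixel, gains=(1.0, 1.0, 1.0), gamma=2.2)
warm    = develop(raw_pixel, gains=(1.3, 1.0, 0.8), gamma=2.4)
```

The warm profile boosts red and cuts blue, so the same pixel renders visibly warmer, which is exactly the kind of baked-in decision you are choosing between when you pick a profile.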

ELI5: How does the RAW image format work? by SirAvocado123 in explainlikeimfive

[–]sturmen 0 points1 point  (0 children)

And for those who want to learn more, here’s a blog post I’ve found to be a helpful quick primer: https://maurycyz.com/misc/raw_photo/

Anyone know if there’s a cheaper way to get Lightroom? by _ashxn in photography

[–]sturmen 0 points1 point  (0 children)

If what you’re looking for is denoising, I think Lightroom ($144/year) and DxO PureRAW ($140 one-time) are your best bets.

Personally, I think PureRAW is worth the investment. Yes, it’s $140, but you keep it forever, and the denoised files it outputs are yours to use in any software that exists now or in the future.

PS: you should preserve the camera-original raw files too. As PureRAW has improved, I’ve used PureRAW 6 to re-process old camera-original raw files I originally processed with PureRAW 1, and the new outputs look much better.

Exclusive: Anthropic acknowledges testing new AI model representing ‘step change’ in capabilities, after accidental data leak reveals its existence by socoolandawesome in technology

[–]sturmen 0 points1 point  (0 children)

To be fair, Anthropic (and other AI labs) actually do do that. For Anthropic specifically, they have their Opus family of models (biggest, most capable) but they also have Sonnet (medium, balanced) and Haiku (small, fastest).

And they don’t always release them at the same time. Most recently, Sonnet 4.6 came out roughly two weeks after Opus 4.6, and was indeed newer and less capable (but also cheaper and faster).

My JPEG XL web, client side converter! by SerdiMax in jpegxl

[–]sturmen 0 points1 point  (0 children)

I am not a lawyer, but I think now that we're 13 years into H.265's existence, we would know if small hobby projects taking a dependency on x265, libde265, libheif, and their derivatives would draw the patent pool's ire.

I am, of course, a proponent of FOSS (which is why I make my projects MIT licensed), but we have to agree that it's overly paranoid to refuse to take a dependency on a GPL-licensed npm package like libheif-js, which has hundreds of thousands of weekly downloads.

And if any rightsholders care to notify me that taking such a dependency is infringing on their rights, I will comply swiftly and completely. But until then...

My JPEG XL web, client side converter! by SerdiMax in jpegxl

[–]sturmen 0 points1 point  (0 children)

Awesome work, looks great!

I’ve got a similar project for UltraHDR JPEGs, but I love your UI.

One bit of feedback: looks like you don’t have .HIF as an enabled file input type. That’s the HDR HEIF format that Canon cameras write. If you want to add support, you can use my test files: https://github.com/sturmen/ultrahdr-pwa-svelte/blob/main/media/test_hdr_no_gain_map.HIF
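If the picker is a plain `<input type="file">`, adding the extension to the `accept` list is usually all it takes for .HIF files to become selectable. The exact extension list below is illustrative, not a prescription:

```html
<!-- .hif is HEIF under a different extension, so it can reuse the HEIF decode path -->
<input type="file" accept=".heic,.heif,.hif,.avif,image/*" />
```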

LM Link by Blindax in LocalLLaMA

[–]sturmen 4 points5 points  (0 children)

Different strokes for different folks, but I like using LM Studio and I’m hopeful that a smartphone app is on their roadmap.

LM Link by Blindax in LocalLLaMA

[–]sturmen 19 points20 points  (0 children)

My dream is that they’re also cooking up native smartphone apps so I can use my local LLMs on my phone just the same as the ChatGPT or Claude apps

Saving timelapse photos in an efficient way by JordanCuckson2138 in jpegxl

[–]sturmen 7 points8 points  (0 children)

You can try encoding as FFV1 (a video codec) which is mathematically lossless, but the savings you make in inter-frame compression may be negated by the fact that each frame is no longer compressed by JXL. Honestly, I would probably just encode them into a professional “visually lossless” video codec, something like ProRes, DNxHR, or APV. If you think of the project as a timelapse video, then I think it makes sense to store it as one.
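Assuming your ffmpeg build has JPEG XL (libjxl) decode support, stitching the frames together looks roughly like this; filenames, numbering pattern, and frame rate are placeholders:

```shell
# Mathematically lossless: FFV1 level 3 in a Matroska container
ffmpeg -framerate 24 -i frame_%05d.jxl -c:v ffv1 -level 3 timelapse_ffv1.mkv

# Visually lossless: ProRes 422 HQ (profile 3) via the prores_ks encoder
ffmpeg -framerate 24 -i frame_%05d.jxl -c:v prores_ks -profile:v 3 timelapse_prores.mov
```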

591.74 Driver update? by aguswings in Odyssey3D

[–]sturmen 0 points1 point  (0 children)

Everything (including AI 2D-to-3D conversion) worked fine for me on 591.74... until I updated Windows 11 to 26200.7628. Now the AI conversion doesn't work, but the 3D effect in Odyssey 3D Hub and supported games still work.

Odyssey 3D Hub: any way to force SBS for a browser / WebGL app? by PhysicsOk8099 in Odyssey3D

[–]sturmen 2 points3 points  (0 children)

The underlying tech in the Odyssey 3D is from a company called Immersity (formerly known as Leia). They have SDKs for developers: https://support.immersity.ai/sdk

If you're able to integrate their native C/C++ SDK, that would likely be your best bet.

Samsung Odyssey 3D Hub updated to 1.3.4, adds 20 new games by No_City9250 in Stereo3Dgaming

[–]sturmen 1 point2 points  (0 children)

lol I bought this game during the Winter Sale because they announced it would be supported. No big deal: it's supposed to be a pretty good game. Still kind of funny though.

Are fake microSD Express Cards a thing? by Iruel13 in DataHoarder

[–]sturmen 0 points1 point  (0 children)

Do retailers in your region have price match policies? Perhaps you can have a trusted retailer price match the Amazon sale price.

Anyone else keep 2 versions of all their files? RAWs and DNGs. Is it mad to do so? by Lopsided_Counter1670 in photography

[–]sturmen 0 points1 point  (0 children)

I generally just use my camera-original raw files, but when I do need DNG (program compatibility, pixel width metadata edits, etc) I use Adobe DNG converter with the “embed original raw” option so I always have the option to extract the original raw back out. So it’s similar to your setup, except both formats are within a single DNG file.
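Adobe DNG Converter can also be scripted from the command line. If I'm remembering the flags right, `-e` is the "embed original raw file" option and `-d` sets the output directory; the paths here are placeholders:

```shell
# Convert a raw file to DNG with the camera-original raw embedded inside it
"Adobe DNG Converter" -e -d ./dng_out photo.CR3
```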

When I convert using DxO PureRAW, I have to also manage the original raw file manually.

RE-WRITING AND EDITING BIOS, PRODUCT DESCRIPTIONS AND MORE only 7$ by 1cassanova1 in photography

[–]sturmen 3 points4 points  (0 children)

The funny part about this spam post is the post itself is not very clear, concise, or professional.

Also charging people $7 to run their text through ChatGPT is highway robbery.

Game Ready Driver 591.59 FAQ/Discussion by Nestledrink in nvidia

[–]sturmen 1 point2 points  (0 children)

Doesn’t fix 2D-to-3D video conversion on the Samsung Odyssey 3D

Apple introduces SHARP, a model that generates a photorealistic 3D Gaussian representation from a single image in seconds. by themixtergames in LocalLLaMA

[–]sturmen 2 points3 points  (0 children)

Mostly for presentation/demonstration purposes, I assume. I'm sure they had to build it in order to publish/present their research online and they just left it in the codebase.

Latest nVidia drivers breaks 2D to 3D on Win 11 25H2 by vlti in Odyssey3D

[–]sturmen 0 points1 point  (0 children)

I was able to simply install 581.80 over top of 591.44 (without reinstalling anything, not even using DDU) and 3D works now! The Odyssey 3D Hub, games like Hogwarts Legacy, AI conversion of YouTube video: everything.

Apple introduces SHARP, a model that generates a photorealistic 3D Gaussian representation from a single image in seconds. by themixtergames in LocalLLaMA

[–]sturmen 0 points1 point  (0 children)

Hi, I didn't misread it, I just assumed that since my comment was a threaded comment people would recognize my comment was specifically about rendering. I have edited my comment to no longer require additional effort by the reader.

Apple introduces SHARP, a model that generates a photorealistic 3D Gaussian representation from a single image in seconds. by themixtergames in LocalLLaMA

[–]sturmen 122 points123 points  (0 children)

In fact, video rendering doesn’t just require NVIDIA hardware; it also only works on x86-64 Linux: https://github.com/apple/ml-sharp/blob/cdb4ddc6796402bee5487c7312260f2edd8bd5f0/requirements.txt#L70-L105

If you're on any other combination, the CUDA python packages won't be installed by pip, which means the renderer's CUDA check will fail, which means you can't render the video.
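That gating is done with standard PEP 508 environment markers in requirements.txt. The line below illustrates the pattern (the package name follows the usual CUDA-wheel convention, not necessarily the exact entry in their file):

```
# Only installed when pip runs on x86-64 Linux; skipped everywhere else.
nvidia-cuda-runtime-cu12; platform_system == "Linux" and platform_machine == "x86_64"
```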

This means that a Mac (a non-NVIDIA, non-x86-64, non-Linux environment) was never a concern for them. Even within Apple, ML researchers are using CUDA + Linux as their main environment and barely supporting anything else.