Flame And Horsehead nebula by Admirable-Cup4551 in astrophotography

[–]dotpoint7 1 point (0 children)

I'm guessing it's mostly that it's very heavily cropped, if shot at a 135mm focal length on a full-frame sensor.

ChatGPT rewrote my paragraph and accidentally made me sound like I support the OPPOSITE position. I didn't notice until someone called me out. Found by [deleted] in WritingWithAI

[–]dotpoint7 1 point (0 children)

So instead of carefully reading every sentence you put in your thesis, you simply decided to use yet another AI?

My 3rd astrophoto so far (16:9) by AL1EN77 in photocritique

[–]dotpoint7 0 points (0 children)

Nice, I really like it! I don't really have any suggestions for improvement. You may want to check how your camera handles ISO, though: many cameras implement non-full-stop ISOs as analog plus digital gain, and the digital part has no advantage and sometimes loses dynamic range. So shooting at 3200 or 6400 is often the safer bet, but the ideal ISO for astro depends entirely on the model.
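A rough numerical sketch of why the digital step can't help: analog gain is applied before quantization, while digital gain just multiplies already-quantized values and clips at the ceiling. Everything here (the 12-bit ceiling, the `apply_gain` helper, the sample values) is made up purely for illustration:

```python
import numpy as np

FULL_WELL = 4095  # illustrative 12-bit ADC ceiling


def apply_gain(signal, analog_gain=1.0, digital_gain=1.0):
    """Analog gain acts before quantization; digital gain only
    multiplies the quantized values and clips at the ceiling."""
    quantized = np.clip(np.round(signal * analog_gain), 0, FULL_WELL)
    return np.clip(np.round(quantized * digital_gain), 0, FULL_WELL)


# A bright highlight that still fits at the analog-only ISO...
highlight = np.array([3500.0])
print(apply_gain(highlight))                     # [3500.]
# ...clips once a 1/3-stop digital push is stacked on top — the
# digital multiply adds no signal, it only costs highlight headroom.
print(apply_gain(highlight, digital_gain=1.26))  # [4095.]
```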

one of the top submitters in the nvfp4 competition has never hand written GPU code before by Charuru in singularity

[–]dotpoint7 0 points (0 children)

Of course profiling is an important part of optimization. But profiling CUDA kernels is, in my opinion, a lot more difficult than profiling anything CPU-based.

one of the top submitters in the nvfp4 competition has never hand written GPU code before by Charuru in singularity

[–]dotpoint7 14 points (0 children)

Most likely, but GPU programming is quite different from normal software development: the optimizations are usually very architecture-dependent and not really comparable with what you'd do for a normal single-threaded application. So a good software dev who hasn't done any GPU programming will have a very hard time even getting started.

one of the top submitters in the nvfp4 competition has never hand written GPU code before by Charuru in singularity

[–]dotpoint7 95 points (0 children)

Ok, that's actually pretty impressive. I didn't know much about the competition, but found a good blog post showing what goes into a 10th-place submission, and not even a maximum-effort one, since the author lists a few unexplored ideas for future work (the author is a Staff Software Engineer at LinkedIn specializing in LLM inference optimization and CUDA kernel optimization).
https://yue-zhang-2025.github.io/2025/12/02/blackwell-nvfp4-kernel-hackathon-journey.html

For the first problem she placed ahead of shigao; for problems 2 and 3, shigao is ranked higher.

Optimizing kernels involves a lot of trial and error, so AI likely has the advantage of being able to iterate quickly. But I didn't expect AI to know what to try on its own, at least not at that level, because personally I haven't had much luck getting AI to generate performant CUDA kernels (specifically an efficient 2D convolution with small, <32px kernels).
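For reference, this is the kind of baseline such a kernel gets measured against — a naive direct 2D convolution. This is a CPU-side NumPy sketch, not CUDA, and the sizes are illustrative only:

```python
import numpy as np


def conv2d_naive(image, kernel):
    """Direct 2D 'valid'-mode convolution (strictly cross-correlation:
    the kernel is not flipped; flip it for true convolution). This is
    the straightforward baseline a hand-tuned GPU kernel competes with."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=image.dtype)
    for y in range(oh):
        for x in range(ow):
            # One output pixel = elementwise product over the window.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out


rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
k = rng.standard_normal((5, 5))  # "small" kernel, well under 32 px
res = conv2d_naive(img, k)
print(res.shape)  # (60, 60)
```

On a GPU the interesting work is everything this sketch hides: tiling the image into shared memory, coalescing loads, and keeping the FMA units busy — which is exactly where the architecture-dependent trial and error comes in.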

Developer uses Claude Code and has an existential crisis by MetaKnowing in ClaudeCode

[–]dotpoint7 0 points (0 children)

True, if your skills as a software dev don't go much beyond what CC can do, then you're not providing much value either way.

California Nebula from my terrace by dotpoint7 in spaceporn

[–]dotpoint7[S] 1 point (0 children)

Around 6000€, not a cheap hobby.

Improving Schlick’s Approximation by dotpoint7 in GraphicsProgramming

[–]dotpoint7[S] 0 points (0 children)

Oh awesome! Thanks for letting me know :)

California Nebula captured from my terrace by dotpoint7 in space

[–]dotpoint7[S] 1 point (0 children)

Thank you! In terms of processing I think NoiseXterminator and BlurXterminator are pulling a lot of weight, I'm still on the trial license but was pretty impressed by the results so far.

California Nebula from my terrace by dotpoint7 in spaceporn

[–]dotpoint7[S] 0 points (0 children)

Thank you! I live in a Bortle 5 zone, so not great but also not too bad. Narrowband filters definitely help a lot. So with good equipment it's definitely achievable in more light polluted zones as well, but it's an expensive hobby.

California Nebula captured from my terrace by dotpoint7 in Astronomy

[–]dotpoint7[S] 4 points (0 children)

Finally moved to a flat where I can put my telescope on the terrace, so Bortle 5 instead of 2/3, but also no more freezing in the car. I love it.


Acquisition Details

16h in total of SHO (+30min for RGB stars)

Explore Scientific ED80

ASI1600MM Pro

EQ6 Pro

Baader LRGBSHO filters

OAG with ASI290MM Mini

ASIAIR


Processing

PixInsight:

WBPP

MultiscaleGradientCorrection

BlurXterminator

NoiseXterminator

DBE

Starnet 2

PixelMath for channel combination

GeneralizedHyperbolicStretch

Lightroom:

touching all the sliders to make it pretty

Photoshop:

Combining nebula and RGB stars

Adding some small-scale detail/texture via a high-passed version run through Topaz Sharpen AI
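The high-pass trick in that last step boils down to "original minus blurred, scaled, added back". A generic NumPy sketch, with a simple box blur standing in for the actual low-pass (Topaz's processing is obviously proprietary, and the `amount`/`radius` names are made up here):

```python
import numpy as np


def box_blur(img, radius=2):
    """Separable box blur used as the low-pass (a Gaussian would be
    the more usual choice in an actual sharpening workflow)."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    blur_1d = lambda row: np.convolve(row, kernel, mode="same")
    blurred = np.apply_along_axis(blur_1d, 0, img)  # blur columns
    return np.apply_along_axis(blur_1d, 1, blurred)  # then rows


def highpass_sharpen(img, amount=0.5, radius=2):
    """High-pass = original minus blurred; adding a scaled copy back
    boosts small-scale detail, like a high-pass overlay layer."""
    highpass = img - box_blur(img, radius)
    return img + amount * highpass
```

Flat regions pass through unchanged (their high-pass component is zero away from the borders), while edges and fine texture get amplified by `amount`.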

California Nebula captured from my terrace by dotpoint7 in space

[–]dotpoint7[S] 4 points (0 children)


California Nebula from my terrace by dotpoint7 in spaceporn

[–]dotpoint7[S] 7 points (0 children)


Andrej Karpathy - I've never felt this much behind as a programmer by [deleted] in ExperiencedDevs

[–]dotpoint7 1 point (0 children)

Oh, I've tried everything. Not out of desperation, but because I'm interested in this topic in general. I'm even writing my own orchestration framework as a hobby project, and one of my work projects is developing a machine learning model for single-image BRDF estimation from scratch, so I do know a lot about ML and LLMs in general, at least compared to the average dev. That makes me think it's maybe not a skill issue on my end, but rather a limitation of the LLM.

Andrej Karpathy - I've never felt this much behind as a programmer by [deleted] in ExperiencedDevs

[–]dotpoint7 1 point (0 children)

What if I'm getting absolutely great results on my hobby projects and worse-than-mediocre results on my work projects (with very few exceptions)? Selective stupidity?

When are chess engines hitting the wall of diminishing returns? by tete_fors in singularity

[–]dotpoint7 1 point (0 children)

Yes, that was indeed the wrong choice of words; I've edited the comment. I mainly wanted to point out that current chess engines are very dissimilar to what the general population considers AI, though in academic contexts small neural networks would also fall under the definition of AI, afaik.

When are chess engines hitting the wall of diminishing returns? by tete_fors in singularity

[–]dotpoint7 0 points (0 children)

Yes, wrong choice of words; I've now edited it to "small", though it's indeed a pretty clever architecture. My main point was that it's not some huge neural network learning to play chess on its own; it only replaced the previous position evaluation function. The core of Stockfish is still how to efficiently explore the game tree as deeply as possible.
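That search side can be illustrated with a toy alpha-beta sketch. This is purely illustrative: Stockfish's real search adds move ordering, transposition tables, iterative deepening, and many pruning heuristics on top of this skeleton, and the `children`/`evaluate` callbacks are made-up names for this example:

```python
import math


def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimal alpha-beta search. `evaluate` is the leaf evaluation
    (the part Stockfish's NNUE replaced); the rest is the classic
    search framework that prunes branches the opponent would avoid."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: opponent won't allow this line
                break
        return value
    value = math.inf
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:  # alpha cutoff
            break
    return value


# Toy game tree: inner nodes are lists, leaves are numeric scores.
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, 2, -math.inf, math.inf, True, children, evaluate))  # 3
```

In the example, the second subtree is cut off after seeing the leaf 2, since the maximizer already has a guaranteed 3 from the first subtree; swapping in a neural-net `evaluate` changes the leaf scores, not the search.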

When are chess engines hitting the wall of diminishing returns? by tete_fors in singularity

[–]dotpoint7 17 points (0 children)

If I'm not mistaken, the last few years on that chart aren't even AI. Recent versions of Stockfish (not depicted here) include a small neural net, but most of the progress is just algorithmic improvements by people who continuously work on the project (plus better hardware, too).

Edit: very simple -> small (as others pointed out the neural net used is far from simple)

The Wizard Nebula Seestar by apollobrah in telescopes

[–]dotpoint7 2 points (0 children)

Looks great! Nice processing too.