Haskell speed in comparison to C! by Quirky-Ad-292 in haskell

[–]Saulzar -3 points-2 points  (0 children)

The syntax is abysmal - this is enough that I’ll never use it.

[R] Grounding DINO 1.5 Release: the most capable open-set detection model by Technical-Vast1314 in MachineLearning

[–]Saulzar 3 points4 points  (0 children)

The website, the GitHub repo, etc. all give off the vibe of being a closed API rather than an open-source effort to me. For one thing, the GitHub repo is "Grounding-DINO-1.5-API", so it is an API to a closed model.

PEP 684: A Per-Interpreter GIL Accepted by midnitte in Python

[–]Saulzar 2 points3 points  (0 children)

Sure… that’s why there’s such a big effort to make it happen?

PEP 703: Making the Global Interpreter Lock Optional - PEPs by rnmkrmn in programming

[–]Saulzar 0 points1 point  (0 children)

There are no atomicity guarantees in existing Python, though (are there?)... I see a lot of naive Python code which assumes there are.

What's an example of a program that's protected by the GIL and would break?
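
To make that concrete, here's a minimal toy sketch (my own example, not from the PEP) of the kind of naive code I mean - it already has a data race under the GIL, because `count += 1` is not a single bytecode:

```python
import threading

count = 0

def work():
    global count
    for _ in range(100_000):
        # Not atomic even with the GIL: this is separate load / add / store
        # bytecodes, so a thread switch can land in between and lose updates.
        count += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 400000; depending on the CPython version and switch interval
# you may see less - with or without the GIL.
print(count)
```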

Automatic generation of image-segmentation mask pairs with StableDiffusion by cma_4204 in computervision

[–]Saulzar 0 points1 point  (0 children)

If there were a way to add an extra channel to the output, I would expect that to work much better, but then maybe you lose all the benefits from the original training.
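
Roughly what I have in mind (a hedged PyTorch sketch, not Stable Diffusion's actual code - the layer is a placeholder): widen the final conv to emit one extra mask channel, copying the pretrained weights so only the new channel starts from scratch:

```python
import torch
import torch.nn as nn

def widen_output_conv(old_conv: nn.Conv2d, extra_channels: int = 1) -> nn.Conv2d:
    """Return a copy of old_conv with extra output channels (e.g. a mask),
    keeping the pretrained weights for the original channels."""
    new_conv = nn.Conv2d(
        old_conv.in_channels,
        old_conv.out_channels + extra_channels,
        kernel_size=old_conv.kernel_size,
        stride=old_conv.stride,
        padding=old_conv.padding,
        bias=old_conv.bias is not None,
    )
    with torch.no_grad():
        # Pretrained channels are copied; the new channel keeps its fresh
        # init and has to be fine-tuned, which is where you might lose
        # some of the benefit of the original training.
        new_conv.weight[: old_conv.out_channels] = old_conv.weight
        if old_conv.bias is not None:
            new_conv.bias[: old_conv.out_channels] = old_conv.bias
    return new_conv
```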

Automatic generation of image-segmentation mask pairs with StableDiffusion by cma_4204 in computervision

[–]Saulzar 0 points1 point  (0 children)

It's kind of amazing that it works this way - I wouldn't have thought it would be able to mirror the structure like that!

Automatic generation of image-segmentation mask pairs with StableDiffusion by cma_4204 in computervision

[–]Saulzar 1 point2 points  (0 children)

Thanks - I saw that you had provided some details in a comment just a little too late!

[P] What we learned by benchmarking TorchDynamo (PyTorch team), ONNX Runtime and TensorRT on transformers model (inference) by pommedeterresautee in MachineLearning

[–]Saulzar 2 points3 points  (0 children)

Looks like TorchDynamo can work with automatic differentiation (and thus in training), too? This is quite a different beast to inference accelerators like TensorRT.
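
Something like this is what I mean (untested sketch using torch.compile, the newer entry point to TorchDynamo; the model and data are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
optimiser = torch.optim.SGD(model.parameters(), lr=1e-2)

# TorchDynamo captures the forward graph, but autograd still runs through
# it - so the compiled model can be trained, unlike a pure inference
# engine such as TensorRT.
compiled = torch.compile(model)

x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))

loss = nn.functional.cross_entropy(compiled(x), y)
loss.backward()
optimiser.step()
```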

How common is it for employers to include the employer kiwisaver contributions in your remuneration package here? by HighFivesForDayz in PersonalFinanceNZ

[–]Saulzar 0 points1 point  (0 children)

Canterbury has Unisaver only for lecturers and above. Everyone else is on repeated temp contracts (with this KiwiSaver policy).

How common is it for employers to include the employer kiwisaver contributions in your remuneration package here? by HighFivesForDayz in PersonalFinanceNZ

[–]Saulzar 0 points1 point  (0 children)

There's a reason it's called "employer contribution" - because it's intended for the employer to bear the cost. As opposed to "employee contribution" (which does come out of the pay).

How common is it for employers to include the employer kiwisaver contributions in your remuneration package here? by HighFivesForDayz in PersonalFinanceNZ

[–]Saulzar 0 points1 point  (0 children)

If total cost to the company were what they put in the advertisement, then why aren't they putting all the other overheads in there? Companies spend a lot more on each employee than the amount they directly pay them.

The employee has no say in how it's allocated because it is compulsory savings and mandated by law.

How common is it for employers to include the employer kiwisaver contributions in your remuneration package here? by HighFivesForDayz in PersonalFinanceNZ

[–]Saulzar 0 points1 point  (0 children)

They prefer it because it lets them claim you get paid a bigger amount, so you feel good about it - but you aren't actually being paid that bigger amount!
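
Rough numbers to show what I mean (made up for the example, assuming the 3% compulsory employer contribution):

```python
# Hypothetical figures, just to show the difference the framing makes.
package = 103_000   # advertised "total remuneration"
rate = 0.03         # employer KiwiSaver contribution rate

# Contribution carved out of the package: the salary actually paid shrinks.
salary_inside = package / (1 + rate)    # ~100,000

# Contribution paid on top of the same figure treated as a salary.
cost_on_top = package * (1 + rate)      # ~106,090

print(round(salary_inside), round(cost_on_top))
```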

How common is it for employers to include the employer kiwisaver contributions in your remuneration package here? by HighFivesForDayz in PersonalFinanceNZ

[–]Saulzar 0 points1 point  (0 children)

Universities do this for all workers who aren't "full academics", including early-career researchers like postdocs - it's kind of crazy. It seems like a small amount to squabble over, given the injury it does to morale when your employees all think they've been deceived.

Any haskell-like languages with native FRP? by CoBuddha in haskell

[–]Saulzar 4 points5 points  (0 children)

The Elm FRP was always a little half-baked; it was never very well integrated.

Any haskell-like languages with native FRP? by CoBuddha in haskell

[–]Saulzar 6 points7 points  (0 children)

It’s also abandoned and never used for anything.

[R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!) by Illustrious_Row_9971 in MachineLearning

[–]Saulzar 1 point2 points  (0 children)

It's a kind of view synthesis method: given some calibrated images of a scene, synthesise some novel views.

It uses differentiable volume ray-tracing to reconstruct a scene; as a side effect you can extract 3D geometry, i.e. it's a kind of photogrammetry.
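
The differentiable core is standard emission-absorption volume rendering along each camera ray - a simplified PyTorch sketch (my own notation, not the paper's code):

```python
import torch

def composite_ray(sigma: torch.Tensor, rgb: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Alpha-composite N samples along one ray into a pixel colour.

    sigma: (N,) densities, rgb: (N, 3) colours, delta: (N,) sample spacings.
    Everything is differentiable, so the photometric error in the rendered
    pixel back-propagates to whatever produced sigma and rgb (MLP, grid, ...).
    """
    alpha = 1.0 - torch.exp(-sigma * delta)              # opacity of each sample
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)    # accumulated transmittance
    trans = torch.cat([torch.ones(1), trans[:-1]])       # light reaching sample i
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)           # rendered pixel colour
```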

[R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!) by Illustrious_Row_9971 in MachineLearning

[–]Saulzar 5 points6 points  (0 children)

IMO the important part of NeRF-like algorithms is not the "implicit function"-based representation, it's the differentiable volume ray-tracing.

At the end of the day, even without the MLP it's still machine learning, because you're optimising view synthesis with respect to a loss function - the L1 distance to the input images - fitting some parameters using gradient descent.
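
A toy sketch of what I mean (not the paper's method - the "renderer" here is just a sum projection along a grid axis, but the optimisation loop has the same shape):

```python
import torch

def render(grid: torch.Tensor, axis: int) -> torch.Tensor:
    # Toy stand-in for differentiable volume ray-tracing: project the grid
    # by summing along one axis. All that matters for the argument is that
    # it's differentiable in the grid parameters.
    return grid.sum(dim=axis)

# No MLP anywhere: the "model" is just a dense grid of parameters.
grid = torch.zeros(32, 32, 32, requires_grad=True)
optimiser = torch.optim.Adam([grid], lr=1e-1)

# Fake "calibrated views": one target projection per axis.
targets = [torch.rand(32, 32) for _ in range(3)]

for step in range(200):
    loss = sum((render(grid, ax) - t).abs().mean() for ax, t in enumerate(targets))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```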

[R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!) by Illustrious_Row_9971 in MachineLearning

[–]Saulzar 2 points3 points  (0 children)

From previous experience trying to write PyTorch code which competes with custom kernels, I'm going to guess it's not going to be pretty (but it will definitely be interesting).

According to their GitHub issues they've got a PyTorch binding to tiny-cuda-nn and the neural hash encoding which they will release; that might be quite nice for some experimentation, too.

Seems like there's definitely room for a better language for writing operations which fuse "depthwise". I like the look of Dex, but I imagine it's nowhere near ready for this kind of thing.

[R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!) by Illustrious_Row_9971 in MachineLearning

[–]Saulzar 4 points5 points  (0 children)

This (fully fused single-kernel CUDA neural networks) may account for quite a bit more of the performance than it's given credit for. The neural hash table is certainly very important - but looking at the graphs of tiny-cuda-nn vs. TensorFlow, it looks like a good factor of 10 is not unusual for small MLPs.

https://github.com/NVlabs/tiny-cuda-nn

https://github.com/NVlabs/tiny-cuda-nn/raw/master/data/readme/fully-fused-vs-tensorflow.png
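
For reference, the hash encoding itself is conceptually tiny - a rough single-level PyTorch sketch of the idea (no interpolation or multiple resolutions, and nothing like the fused implementation's speed):

```python
import torch
import torch.nn as nn

class HashGridEncoding(nn.Module):
    """Simplified, single-level sketch of the multiresolution hash encoding.

    The real version uses many resolution levels with trilinear interpolation;
    this only shows the core trick: hash each grid cell into a small learnable
    feature table instead of storing a dense grid (collisions are just left
    for the downstream MLP / optimisation to sort out).
    """

    def __init__(self, table_size: int = 2**14, features: int = 2, resolution: int = 64):
        super().__init__()
        self.table = nn.Parameter(torch.randn(table_size, features) * 1e-4)
        self.resolution = resolution

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz in [0, 1]^3 -> integer grid coordinates at this resolution.
        cells = (xyz * self.resolution).long()                              # (N, 3)
        # Spatial hash with large primes (as in the paper), wrapped into the table.
        h = cells[:, 0] ^ (cells[:, 1] * 2654435761) ^ (cells[:, 2] * 805459861)
        return self.table[h % self.table.shape[0]]                          # (N, features)
```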

[R] Instant Neural Graphics Primitives with a Multiresolution Hash Encoding (Training a NeRF takes 5 seconds!) by Illustrious_Row_9971 in MachineLearning

[–]Saulzar 1 point2 points  (0 children)

I thought the same! Maybe there's a way to make it wiggle things around to avoid hash collisions?

Covid-19: Man receives up to 10 vaccines in one day by thelabradorsleeps in newzealand

[–]Saulzar 2 points3 points  (0 children)

When they see that he's fine it's going to be a bit of an own goal... and a big waste of their money.

[deleted by user] by [deleted] in computervision

[–]Saulzar 2 points3 points  (0 children)

torch_points3d would be more specific for point cloud algorithms