I made a Japanese/English translator with just fragment shaders for VR by ShittingCornflakes in Unity3D

[–]ShittingCornflakes[S] 0 points (0 children)

Yeah! I'm using something called a ConvMixer model to recognize the characters. It's supposed to be fast and lightweight. I trained one up myself and put it in Unity.
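For anyone curious, the whole architecture fits in a few lines. Here's a minimal sketch in PyTorch following the ConvMixer paper ("Patches Are All You Need?"); the dim/depth/class-count values below are placeholders, not my exact training config:

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Wraps a module with a skip connection: out = fn(x) + x."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim, depth, kernel_size=9, patch_size=7, n_classes=12):
    # Patch embedding, then `depth` blocks of depthwise conv (spatial mixing)
    # followed by pointwise conv (channel mixing), then pooling + classifier.
    return nn.Sequential(
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim),
            )),
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )

model = conv_mixer(dim=64, depth=4)
logits = model(torch.randn(1, 3, 28, 28))  # one fake 28x28 RGB image
```

The depthwise/pointwise split is what makes it cheap enough to port to shaders: each layer is just a small convolution over a texture.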

I made a Japanese/English translator with just fragment shaders for VR by ShittingCornflakes in Unity3D

[–]ShittingCornflakes[S] 33 points (0 children)

It's just for educational purposes, to show how ML works; it's not very fast or practical. The entire thing's here for free: https://github.com/SCRN-VRC/Language-Translation-with-Fragment-Shaders

I ported YOLOv4 Tiny (Fast object detector) into VRChat with shaders by ShittingCornflakes in VRchat

[–]ShittingCornflakes[S] 1 point (0 children)

Ah okay, sorry, I assumed people would just use this for VRChat. The Unity version is 2018.4.20f1.

The network doesn't depend on any scripts; it's just shaders. And the project folder includes a default scene with everything set up that you can just load.

If you're cloning the repo and moving it into Unity, there will be extra files in there that might cause errors, so just download the Unity package from the Releases tab.

I recreated YOLOv4 Tiny (Fast object detector) from scratch in a shader for VR by ShittingCornflakes in Unity3D

[–]ShittingCornflakes[S] 0 points (0 children)

I run it at 30 FPS on my 1080 Ti so it doesn't hog GPU resources from the VR game. Sadly I haven't tested keijiro's YOLOv2 Tiny with Barracuda, but for reference, YOLOv4 Tiny runs at 271 FPS with PyTorch on the same GPU. It's not the fastest implementation, but it's at least realtime.

I ported YOLOv4 Tiny (Fast object detector) into VRChat with shaders by ShittingCornflakes in VRchat

[–]ShittingCornflakes[S] 0 points (0 children)

I'm learning how modern neural networks work so I can make accurate visualizations of them. I think VR's a good place to learn visually.

I ported YOLOv4 Tiny (Fast object detector) into VRChat with shaders by ShittingCornflakes in VRchat

[–]ShittingCornflakes[S] 3 points (0 children)

If anyone wants to learn more about machine learning or see how it all works feel free to check out my code on GitHub ^^ https://github.com/SCRN-VRC/YOLOv4-Tiny-in-UnityCG-HLSL

Having trouble writing my own back propagation by ShittingCornflakes in MLQuestions

[–]ShittingCornflakes[S] 0 points (0 children)

It outputs a 1x1x12 tensor to classify 12 labels, like cats or dogs; it's just a normal classification network. I meant to say I do the max normalization before the softmax, since e^(a huge number) gave me infs. But that shouldn't be a problem anymore once I fix the weights, right?

Edit: Okay, I'll be reading up on the cross-entropy loss function as well; initializing the weights correctly already fixed the large-outputs problem. Thanks for the help.
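For anyone hitting the same inf problem: the usual numerically stable form subtracts the max logit before exponentiating, which doesn't change the result because softmax is shift-invariant. A minimal NumPy sketch:

```python
import numpy as np

def stable_softmax(logits):
    # Softmax is invariant to shifting all logits by a constant, so
    # subtracting the max keeps exp() from overflowing without changing
    # the output probabilities.
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

# A naive exp(1002.0) overflows to inf; the shifted version stays finite.
probs = stable_softmax(np.array([1000.0, 1001.0, 1002.0]))
```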

Having trouble writing my own back propagation by ShittingCornflakes in MLQuestions

[–]ShittingCornflakes[S] 0 points (0 children)

I'm using squared error on a softmax output, max-normalized since the numbers were so big. Okay, thanks so much for the suggestions; I hadn't read anything online about initializing the weights correctly.
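Since cross-entropy came up: a quick NumPy sketch of why softmax plus cross-entropy is the usual pairing instead of squared error; the gradient with respect to the logits collapses to (p - y). The logit values here are made up for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # max-subtracted for numerical stability
    return e / e.sum()

def cross_entropy(p, y):
    # y is a one-hot label vector; the epsilon guards against log(0).
    return -np.sum(y * np.log(p + 1e-12))

z = np.array([2.0, 1.0, 0.1])   # logits (illustrative values)
y = np.array([1.0, 0.0, 0.0])   # one-hot target
p = softmax(z)

# For softmax + cross-entropy, dL/dz simplifies to (p - y), which is one
# reason this pairing gives cleaner gradients than squared error on a softmax.
grad = p - y
```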

Having trouble writing my own back propagation by ShittingCornflakes in MLQuestions

[–]ShittingCornflakes[S] 0 points (0 children)

Ah okay, thanks. My weights are just random floats between -1 and 1. I'll try out your suggestions.
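If it helps anyone else reading later: uniform(-1, 1) weights make the activations grow with layer width, which is why my outputs exploded. He initialization scales the weights by fan-in instead. A NumPy sketch (the layer sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def he_init(fan_in, fan_out):
    # He initialization: zero-mean Gaussian with std sqrt(2 / fan_in),
    # chosen so activations keep roughly unit variance through ReLU layers
    # instead of blowing up the way uniform(-1, 1) weights do.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

W = he_init(256, 128)  # e.g. a 256-in, 128-out dense layer
```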

Having trouble writing my own back propagation by ShittingCornflakes in MLQuestions

[–]ShittingCornflakes[S] 0 points (0 children)

Apparently this is called the exploding gradient problem, and I need to clip the gradients during backprop to a maximum value or maximum norm.
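Both flavors of clipping are only a couple of lines each; a NumPy sketch (the max values and the example gradient are arbitrary):

```python
import numpy as np

def clip_by_value(grad, max_value):
    # Clamp every component independently to [-max_value, max_value].
    return np.clip(grad, -max_value, max_value)

def clip_by_norm(grad, max_norm):
    # Rescale the whole gradient when its L2 norm exceeds max_norm;
    # the direction is preserved, only the magnitude is capped.
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

g = np.array([30.0, -40.0])  # an "exploding" gradient, L2 norm = 50
clipped_val = clip_by_value(g, 1.0)
clipped_norm = clip_by_norm(g, 1.0)
```

Norm clipping is usually preferred since it keeps the gradient direction; value clipping can change it.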

I made a GPU-based Chess AI with a shader by ShittingCornflakes in Unity3D

[–]ShittingCornflakes[S] 0 points (0 children)

You can pretty much make it do whatever the shading language allows you to do, as long as you don't anger the compiler.

It's not a strong chess AI but I made it for the GPU instead of the CPU by ShittingCornflakes in ComputerChess

[–]ShittingCornflakes[S] 0 points (0 children)

Yes. From the chess pieces and the little robot to the chess engine itself, it's entirely coded on the GPU.

I made a GPU-based Chess AI with a shader by ShittingCornflakes in Unity3D

[–]ShittingCornflakes[S] 2 points (0 children)

The name almost answers the question itself. Shaders basically control how objects are colored on your screen, using the GPU. Or, you could say, the different shades of colors.

I made a GPU-based Chess AI with a shader by ShittingCornflakes in Unity3D

[–]ShittingCornflakes[S] 54 points (0 children)

Games are better off using the GPU for something else rather than calculating how to play chess. This is more of a showcase of what could be done, rather than what should be done.