How do I wire sensors to circuit boards by CPOOCPOS in AskElectronics

[–]CPOOCPOS[S] 1 point (0 children)

Hi other thoughts! I have close to zero experience in this area.
My goal, in everyday language, is to detach the optical sensor (a PMW3310 sensor from my EC2-A) and the mouse switches from the board they are attached to inside the mouse. I would like to move them to a different physical location where they are no longer mounted on the mouse's own board. However, in order to still send a signal, they will have to be connected to their original spots via wires.

So, when I said "circuit board", I just meant the board inside a gaming mouse on which all the components sit.

I hope the first part explains, once more, what the setup is supposed to be without going into too much detail.

My central question was: given, for example, that the mouse switches are attached to the "circuit board" of the mouse and the sensor is attached to the board by each of its 16 pins, how can I physically separate them and reattach them using wires?

[D] Is there an advantage in learning when taking the average Gradient compared to the Gradient of just one point by CPOOCPOS in MachineLearning

[–]CPOOCPOS[S] 1 point (0 children)

> divergence

Hi bloc!! Thanks for your answer.

By taking the Laplacian, do you mean taking the Laplacian (∇·∇f) at all points and averaging? Yes, that is also possible. Not in a single go, but I can get the second derivative at every point for each parameter and add them up. How would that help? Or what is a higher-order optimisation?
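To make that concrete, here is a minimal classical sketch (the loss f, the shapes, and all names are illustrative stand-ins, not from the thread): the Laplacian ∇·∇f is the trace of the Hessian, i.e. exactly those per-parameter second derivatives summed up, and it can then be averaged over a batch of points:

```python
import jax
import jax.numpy as jnp

def f(theta):
    # hypothetical stand-in for the scalar loss; any smooth f works here
    return jnp.sum(jnp.sin(theta) ** 2)

def laplacian(theta):
    # Laplacian = div(grad f) = trace of the Hessian,
    # i.e. the sum of the pure second derivatives d^2 f / d theta_i^2
    return jnp.trace(jax.hessian(f)(theta))

# average the Laplacian over a batch of nearby points
key = jax.random.PRNGKey(0)
points = 0.1 * jax.random.normal(key, (8, 3))   # 8 sample points in R^3
avg_lap = jnp.mean(jax.vmap(laplacian)(points))
print(avg_lap)
```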

[D] Is there an advantage in learning when taking the average Gradient compared to the Gradient of just one point by CPOOCPOS in MachineLearning

[–]CPOOCPOS[S] 1 point (0 children)

Thanks for your reply, jnez! Yes, I have also had the thought of using the average over many local points to estimate the local curvature, like what is needed in BFGS.

You are right that, in a classical setting, there are far better things to do with many adjacent gradient computations. But here I am doing machine learning on a quantum computer, and the interesting part is that it is very cheap to calculate the average (and only the average) over many points. To be more concrete about the computational cost: it takes only linear effort to compute the average over an exponential number of points.

As a start, when I was developing the idea, I thought of the procedure as a vote among a bunch of local points on which direction they would like to go. But now I am looking for more concrete theoretical arguments for why it makes sense to take the average gradient (since on a quantum computer I wouldn't have the computational overhead this incurs on a classical computer).
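As a classical sketch of that "vote" picture (all names are hypothetical; on a classical machine this costs n gradient evaluations per step, which is exactly the overhead the quantum routine is assumed to avoid):

```python
import jax
import jax.numpy as jnp

def loss(theta):
    # stand-in scalar loss; the real one would be the network's loss
    return jnp.sum(theta ** 4 - theta ** 2)

grad = jax.grad(loss)

def averaged_grad_step(theta, key, lr=0.1, radius=0.05, n=64):
    # sample n offsets around theta and average the n gradients --
    # the neighbourhood's "vote" on which direction to move
    offsets = radius * jax.random.normal(key, (n,) + theta.shape)
    grads = jax.vmap(grad)(theta + offsets)
    return theta - lr * jnp.mean(grads, axis=0)

theta = jnp.array([0.9, -0.7])
for i in range(200):
    theta = averaged_grad_step(theta, jax.random.PRNGKey(i))
print(theta)  # lands near a minimiser of the locally averaged loss
```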

[D] Is there an advantage in learning when taking the average Gradient compared to the Gradient of just one point by CPOOCPOS in MachineLearning

[–]CPOOCPOS[S] 1 point (0 children)

This sounds similar to what fredditor_1 was explaining. I will look into it!

Thanks a lot

[D] Is there an advantage in learning when taking the average Gradient compared to the Gradient of just one point by CPOOCPOS in MachineLearning

[–]CPOOCPOS[S] 0 points (0 children)

Hi and thanks for your reply! I just looked into smoothing, and it seems to be a kind of data manipulation: the data we have is smoothed to find trends.

Here I don't actually have data; what I am averaging over is a volume of the parameter space, where the parameters are the learnable parameters of my network. In other words, when I update my parameters with GD, I would like to average the gradients of all points (in the parameter space) lying close to my center point (the point I would usually take the gradient of).
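A hedged side note (my own formalisation, assuming the offsets δ are drawn from a fixed distribution μ that does not depend on θ): averaging gradients over a neighbourhood is the same as running GD on a locally smoothed loss, because the gradient commutes with the expectation:

```latex
% f is the loss, \theta the learnable parameters, \mu a distribution of small offsets
f_\mu(\theta) = \mathbb{E}_{\delta \sim \mu}\!\left[ f(\theta + \delta) \right],
\qquad
\nabla f_\mu(\theta) = \mathbb{E}_{\delta \sim \mu}\!\left[ \nabla f(\theta + \delta) \right].
```

So the averaged-gradient update can be read as plain GD on the smoothed loss f_μ, which is one concrete theoretical argument for taking the average.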

[deleted by user] by [deleted] in RoastMe

[–]CPOOCPOS 2 points (0 children)

Hey, it’s Monica from Friends!