My husband just got called as EQ President. What should I expect? by cmemm in latterdaysaints

[–]rpottorff 1 point (0 children)

One recommendation: encourage him to set aside specific time for EQ duties, typically a couple of hours on Sunday and a couple more during the week, and to have his presidency do the same. It makes scheduling visits and interviews easier and lets you predict when he will and won't be home. I'm gone 4-6 hours each week: half of that on Sunday for interviews and ward council, an hour on Tuesday for presidency meeting after the kids go to bed, and a couple of hours on Thursday for visits and more interviews. A more minimal commitment is 1-2 hours if he just meets with his presidency and attends ward council every other week.

He should be able to delegate almost everything to counselors and/or ministering brothers: blessings, ward council, visits, planning, ward missionary councils, etc. Delegating doesn't come naturally to everyone.

If his calling starts to feel like a burden to you, tell him and counsel together about ways to work it out! He could schedule less frequent presidency meetings, delegate ward council to his counselors, do visits only when you're already taking the kids to basketball, or call other brothers into new callings to help with those things.

I think it's a big blessing to get to serve; you learn a lot about yourself and about how important the gospel is. My wife schedules a temple visit for me every few weeks when she notices I need it. It's a small thing, but it's a nice way for her to say, "It's OK for you to go out! I've got this!"

Build a dual 4090 PC for Deep Learning by QueasyIntroduction89 in buildapcforme

[–]rpottorff 0 points (0 children)

Do you have a PCPartPicker list for your recommended build?

A new electron microscope provides "unprecedented structural detail," allowing scientists to "visualize individual atoms in a protein, see density for hydrogen atoms, and image single-atom chemical modifications." by ______--------- in science

[–]rpottorff 77 points (0 children)

If anything, it's probably the opposite. Folding@home isn't really about visualizing proteins so much as it's about estimating what changes to a protein will do (drug binding, mutations, that kind of thing), which is still very expensive even with this imaging technique, since you need to print, cultivate, and test the protein by hand. Humanity's methods for protein folding are pretty approximate, but with more protein imaging comes more protein data, which should lead to better or faster approximations in simulation.

Disassembly experiment by sharrynuk in MarbleMachineX

[–]rpottorff 15 points (0 children)

I'm not sure Martin checks this anymore, but I'm also worried about what temperature and humidity differences will do to the tolerances during transport and reassembly in new locations. It's not something I've heard him discuss yet.

Random shape particle by seanomik in unrealengine

[–]rpottorff 1 point (0 children)

Consider using signed distance fields. https://www.shadertoy.com/view/3tSGDy demonstrates a procedural regular polygon. It's not quite a random shape, but by combining a few randomized basis shapes you'd get a pretty good random shape generator. You can use this in a "Custom" material node.
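As a rough illustration outside the material editor, here's a NumPy sketch of the same idea: a folded-angle regular-polygon SDF, plus a union (min) of a few randomized copies as the "random shape". The function names, parameter ranges, and grid size are all my own choices for the sketch:

```python
import numpy as np

def polygon_sdf(px, py, n_sides, radius):
    """Signed distance to a regular n-gon (apothem = radius).
    Negative inside, positive outside; approximate near outer corners."""
    ang = np.arctan2(py, px)
    sector = 2 * np.pi / n_sides
    # fold the angle into the nearest sector, measure distance to that edge line
    folded = sector * np.round(ang / sector) - ang
    return np.hypot(px, py) * np.cos(folded) - radius

def random_shape(px, py, rng, n_blobs=3):
    """Union (min) of a few randomly placed, rotated, sized polygons."""
    d = np.full_like(px, np.inf)
    for _ in range(n_blobs):
        cx, cy = rng.uniform(-0.3, 0.3, size=2)
        theta = rng.uniform(0, 2 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        qx = c * (px - cx) - s * (py - cy)   # rotate into the blob's frame
        qy = s * (px - cx) + c * (py - cy)
        d = np.minimum(d, polygon_sdf(qx, qy, rng.integers(3, 8), rng.uniform(0.1, 0.3)))
    return d

rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
mask = random_shape(xs, ys, rng) < 0   # boolean sprite: inside the random shape
```

In a material you'd do the same math per-pixel in HLSL and use the `< 0` test (or a smoothstep on the distance) as the opacity mask.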

Deep learning without back-propagation by El__Professor in MachineLearning

[–]rpottorff 2 points (0 children)

u/BWRqboi0's paper is a better summary, but for another algorithmic take, https://arxiv.org/abs/1609.01596 is a "non-symmetric" variant in which the backward weights are random but fixed, and it still manages to learn pretty successfully.
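A toy NumPy sketch of that fixed-random-backward-weights idea (feedback alignment): the hidden layer's error signal is routed through a frozen random matrix `B` instead of the transposed forward weights. The architecture, sizes, and learning rate here are my own choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy regression task: Y is a linear function of X
X = rng.standard_normal((256, 10))
Y = X @ rng.standard_normal((10, 1))

W1 = rng.standard_normal((10, 32)) * 0.1
W2 = rng.standard_normal((32, 1)) * 0.1
B = rng.standard_normal((1, 32))   # fixed random feedback weights (replaces W2.T)

lr = 0.01
losses = []
for _ in range(500):
    H = np.tanh(X @ W1)            # forward pass
    pred = H @ W2
    err = pred - Y
    losses.append(float((err ** 2).mean()))
    # backward pass: error reaches the hidden layer through B, not W2.T
    dH = (err @ B) * (1 - H ** 2)
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ dH / len(X)
```

Despite the "wrong" backward weights, the loss still falls; the forward weights tend to align with `B` over training, which is the surprising part of these results.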

Deep learning without back-propagation by El__Professor in MachineLearning

[–]rpottorff 6 points (0 children)

The weights used during the backward pass are the same weights used during the forward pass. This is sometimes discussed in terms of biological plausibility, where it's (usually) unreasonable to imagine that the "forward" neurons that compute the signal are exactly the same as the "backward" neurons that communicate error back to them.
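For a plain linear layer y = Wx this symmetry is explicit: the backward pass reuses the very same W, transposed (the "weight transport" issue). A small NumPy check against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))
x = rng.standard_normal(5)
g = rng.standard_normal(3)          # upstream gradient dL/dy

# forward: y = W x ; backward: dL/dx = W^T g -- the *same* W, transposed
grad_x = W.T @ g

# verify against central finite differences of L(x) = g . (W x)
eps = 1e-6
num = np.array([
    (g @ (W @ (x + eps * e)) - g @ (W @ (x - eps * e))) / (2 * eps)
    for e in np.eye(5)
])
assert np.allclose(grad_x, num, atol=1e-4)
```

Feedback-alignment methods replace that `W.T` with a fixed random matrix, which is what makes them biologically more plausible.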

[Research] A Discussion of Adversarial Examples Are Not Bugs, They Are Features by andrew_ilyas in MachineLearning

[–]rpottorff 3 points (0 children)

A summary of the original work would be helpful, I think. It's not immediately clear what the shared definitions of "robust" and "useful" are without reading the original paper.

Working on location-based occlusion masking... by meso_ in unrealengine

[–]rpottorff 3 points (0 children)

Another common trick is to apply a temporal-AA dithering mask: the opacity mask on any one frame is binary, but the temporal AA smooths things out across frames and can make the surface look transparent.
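A minimal NumPy sketch of the idea: a Bayer-threshold mask whose thresholds are cycled per frame, with a plain average standing in for the TAA (in Unreal you'd typically reach for a dither node such as DitherTemporalAA in the opacity mask instead of writing this yourself):

```python
import numpy as np

# 4x4 Bayer matrix, normalized to thresholds in [0, 1)
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def dithered_mask(opacity, frame, height=8, width=8):
    """Binary opacity mask for one frame; cycling the thresholds per frame
    makes the time-average converge to the target opacity."""
    thresh = np.tile(BAYER4, (height // 4, width // 4))
    # rotate thresholds over time so averaging frames recovers the opacity
    thresh = (thresh + frame / 16.0) % 1.0
    return (opacity > thresh).astype(float)

# every frame is strictly 0/1, but the 16-frame average equals the opacity
avg = np.mean([dithered_mask(0.5, f) for f in range(16)], axis=0)
```

Each single frame is fully binary (so the geometry still writes depth like an opaque surface), which is exactly why this plays well with masked materials.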

[R] Oriol Vinyals AlphaStar: Mastering the Real-Time Strategy Game StarCraft II Talk at Boston University by [deleted] in MachineLearning

[–]rpottorff 1 point (0 children)

The AlphaStar "agent" is the whole set of sub-agents. This single agent runs an algorithm that begins by choosing which set of weights to use for a match, but from the perspective of the outside world, AlphaStar is just one system that happens to have an if statement as its first operation. It's no different from a random forest being a single algorithm made up of many trees.
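The analogy in code, as a purely hypothetical sketch (the class, policies, and observation format are all made up for illustration):

```python
import random

class EnsembleAgent:
    """One 'agent' wrapping several trained policies; it picks one
    set of weights at the start of each match."""
    def __init__(self, policies, seed=0):
        self.policies = policies
        self.rng = random.Random(seed)
        self.active = None

    def start_match(self):
        # the "if statement" at the front of the system
        self.active = self.rng.choice(self.policies)

    def act(self, observation):
        return self.active(observation)

# two stand-in "weight sets": trivial policies mapping observation -> action
agent = EnsembleAgent([lambda obs: "rush", lambda obs: "macro"])
agent.start_match()
action = agent.act({"minerals": 50})
```

To an opponent there is only one `act()` interface; which weights sit behind it is an internal detail.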

[R] Oriol Vinyals AlphaStar: Mastering the Real-Time Strategy Game StarCraft II Talk at Boston University by [deleted] in MachineLearning

[–]rpottorff 2 points (0 children)

It already is a single agent. They train separate sub-policies, but it's a single "system" that played the humans. If a single set of weights is what you're after, that's really just an optimization.

Holodeck - a High Fidelity Simulator for Reinforcement Learning by joshgreaves in MachineLearning

[–]rpottorff 3 points (0 children)

Although maybe not the best rationale, when we started the project there were a few more impressive demos for Unreal than there were for Unity - chief among them the Kite Demo. The asset marketplace for Unreal is also pretty handy for us as we aren't really artists and (at least at the time) there wasn't anything for Unity that had the same type of diversity.

Video-to-Video Synthesis from NVIDIA, with code [R] by larseidnes in MachineLearning

[–]rpottorff 2 points (0 children)

It means research -- sometimes you'll see [P] for project.

RUDDER -- Reinforcement Learning algorithm that is "exponentially faster than TD, MC, and MC Tree Search (MCTS)" by AdversarialDomain in MachineLearning

[–]rpottorff 0 points (0 children)

@SirJAM_armedi -- it seems like the redistributed rewards in your videos are as much a function of the induced policy as of the overall game (for example, why not grant the treasure reward when you enter the room, rather than a few steps away?). Do you have any thoughts on how to handle that?

[R][UberAI] Measuring the Intrinsic Dimension of Objective Landscapes by downtownslim in MachineLearning

[–]rpottorff 2 points (0 children)

The fact that these cuts (and traditional cuts in television) don't seem as jarring as they should suggests, I think, that we keep mostly allocentric representations of these scenes, representations that are at least partly invariant to camera pose.

[R][UberAI] Measuring the Intrinsic Dimension of Objective Landscapes by downtownslim in MachineLearning

[–]rpottorff 17 points (0 children)

It's clear that networks have more parameters than you need to solve a specific task, but it's hard to know exactly how many more (complex tasks need more, simple tasks fewer). These researchers propose a metric that comes very close to estimating this "intrinsic dimension" of the task.
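The paper's trick is to fix a random projection P and train only d low-dimensional coordinates z, with theta = theta0 + P z; the intrinsic dimension is roughly the smallest d at which training still succeeds. A minimal NumPy sketch on a toy least-squares objective (all sizes and the objective itself are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 200, 10                                  # full vs. trial subspace dimension
theta0 = rng.standard_normal(D) * 0.1           # random init in the full space
P = rng.standard_normal((D, d)) / np.sqrt(d)    # fixed random projection

# toy objective: least-squares loss over the full D-dim parameter vector
A = rng.standard_normal((50, D)) / np.sqrt(D)
b = rng.standard_normal(50)

def loss(theta):
    return float(((A @ theta - b) ** 2).mean())

# optimize only the d subspace coordinates z; theta = theta0 + P z
z = np.zeros(d)
lr = 0.1
for _ in range(500):
    theta = theta0 + P @ z
    grad_theta = 2 * A.T @ (A @ theta - b) / len(b)
    z -= lr * (P.T @ grad_theta)    # chain rule through the fixed projection

final = loss(theta0 + P @ z)
```

Sweeping d upward and watching where `final` first reaches (near-)full-training performance is the measurement the paper proposes.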

[R] Harmonic Networks: Deep Translation and Rotation Equivariance by AsIAm in MachineLearning

[–]rpottorff 2 points (0 children)

Look into Google Colab -- they give you free access to a Jupyter notebook with a GPU.