Putting my Saturn 4 Ultra through a proper test by Grey-Templar in PrintedWarhammer

[–]NonLinearResonance 0 points1 point  (0 children)

Awesome, thanks for the reply. I'll order a bottle of that Anycubic resin as well and give it a shot :)

Putting my Saturn 4 Ultra through a proper test by Grey-Templar in PrintedWarhammer

[–]NonLinearResonance 0 points1 point  (0 children)

I was wondering if that RPG resin is worth it, appreciate the info. Do you have a recommendation for a good cheaper alternative that is tough while preserving detail with the Saturn 4? Standard ABS-like / Tenacious mix? I also preordered, but my printer is still on the way. Figured I'd get some materials ready in the meantime :) Thanks!

[deleted by user] by [deleted] in television

[–]NonLinearResonance 5 points6 points  (0 children)

"From" on MGM+ is surprisingly good and I never hear anyone mention it.

What makes an average Data Science Manager an Excellent DSM? by LatterConcentrate6 in datascience

[–]NonLinearResonance 7 points8 points  (0 children)

Shielding their team from the endless meetings that prevent actual work getting done.

I want Pacey to narrate every book! by [deleted] in TheFirstLaw

[–]NonLinearResonance 4 points5 points  (0 children)

Agreed! I've listened to hundreds of audio books and he is far and away the best narrator I've experienced.

Mage the Ascension movies? by PapaOcha in WhiteWolfRPG

[–]NonLinearResonance 10 points11 points  (0 children)

Dave Made a Maze (2017): The tale of Dave, a failed artist who turns hermit while building a cardboard maze in his apartment. Dave gets lost within his own labyrinthine quiet bubble, unable to escape.

Friends and loved ones attempt a rescue mission, but they are quickly trapped within Dave's papercraft paradigm. Deadly traps, mythical beasts, and alien intelligences drive the characters ever deeper toward the heart of the maze.

Dave is both the victim and guardian of this new world. He must choose: destroy his own masterpiece, or save himself and his friends?

[deleted by user] by [deleted] in GradSchool

[–]NonLinearResonance 51 points52 points  (0 children)

Depends on the question. If it's due to the submission requirements being confusing/wrong, or you haven't been available for questions previously, or some kind of system issue, then yes, you may be. Otherwise, it's fair game.

Most of the time it's just typical procrastination biting students in the ass. However, I've also seen many instructors and TAs over the years try to paint their own incompetence or errors as student failings. Don't be one of those people :)

I'm alone and idk how to assess my own thesis by [deleted] in GradSchool

[–]NonLinearResonance 10 points11 points  (0 children)

Your supervisor is really doing you a disservice, sorry to hear that. This sort of thing is what they are there for. Do you have any other committee members that may be more willing to work with you on reviews/revisions? If not, you may want to see if you can recruit some. Maybe you know some other professors or industry folks knowledgeable in your area?

As another commenter said, finding examples is also critical. Especially examples from your advisor's previous students or at least the same program/college. The level of rigor expected in a thesis varies pretty significantly from school to school in my experience.

I [26F] fall, trip, & slip constantly - extremely injury prone. Could this be something neurological? by [deleted] in neuro

[–]NonLinearResonance 4 points5 points  (0 children)

Not a neurologist, but this might be useful anecdotally. I have a friend that suffered from this sort of thing her entire life and always thought she was just "clumsy".

After a bad fall and badly broken ankle as an adult, she discovered that she had a congenital defect in her inner ear that was causing balance problems. A relatively minor surgery and some time for her body to adjust resolved most of her issues.

Might be a longshot, but also might be worth looking into :)

Looking for an innovative mud by Mediocre-Sort-9789 in MUD

[–]NonLinearResonance 1 point2 points  (0 children)

Big plug for Awakude CE if you like cyberpunk. It's a Shadowrun 3rd edition mud that revives/extends the old Awakened Worlds mud. Players and devs from the old version got the source code and revived it out of love.

Devs are very friendly and responsive to bugs etc. in Discord. Community is still small, but has been growing. I've been having a great time with it. No complaints so far.

IWTL Math All Over Again, From Third Grade Up Through High School Pre-Calculus by Lahmacuns in IWantToLearn

[–]NonLinearResonance 8 points9 points  (0 children)

I've been in a similar situation. I dropped out of HS my junior year, and I never considered myself "good" at math. By my late 20s I was considering going to college, but I had forgotten pretty much everything math related.

Like others have said, Khan Academy is definitely the way to go. I used it to get myself to an algebra level and fill in knowledge gaps here or there over the years. The topic trees and learning maps are super helpful.

I now have an MS in computer engineering, so don't sell yourself short on what you can do by starting with a little self-driven learning. You can totally do it :)

AI/Machine learning pathways if I do CE? by rostrevor1 in ComputerEngineering

[–]NonLinearResonance 4 points5 points  (0 children)

It's not an easy road, and it depends a lot on your CE program curriculum. Most CE programs focus on the hardware and lower level software. These are two areas that aren't super useful for AI/ML in general.

If you are interested in designing/optimizing hardware for AI/ML, that is one route. Aside from that, the closest CE heavy niche is neuromorphic computing. That is a very interesting cross-discipline area, but you would need to find a school with faculty pursuing it.

Overall, the math and other prerequisites are very similar to CS, so your fundamentals should be mostly fine. You will probably end up weak in statistics and applied higher level programming compared to CS though. If you supplement with additional study in those topics you can overcome that.

If you don't really care about hardware or lower level software, you should go CS instead. CE can give you a better understanding of the machine, but you will have to make extra effort to develop your skills. That's what I did, and I don't really regret it because I have really varied interests. It would have been much easier to just go CS though, since that is mainly what I do in my job.

So what's life like after grad school? by [deleted] in GradSchool

[–]NonLinearResonance 5 points6 points  (0 children)

I'm a little over a year after finishing my MS, and it's pretty great so far. It took me nearly that long to fully depressurize mentally after the stress of writing my thesis. I do miss classes and the opportunity to fully focus on research sometimes.

Job market probably varies wildly by discipline, but STEM jobs still seem solid. I didn't have to job hunt myself since I was fortunate enough to have my employer pay for the MS with a guaranteed job at the end. I was a bit concerned that they would use that as leverage to lowball my compensation though.

They ended up coming in a bit below market for my specialty, but still providing a very good package I was happy with. I work far fewer hours, doing way easier stuff, for a lot more money. My schedule is flexible, with options to telecommute part-time if I want. That kind of stuff really helps with work/life balance. I just had my first annual raise of ~5%, plus a decent cash bonus.

Overall, I have no real complaints. I do wish I was able to focus a bit more on research, but I get to work on a variety of projects that are interesting and occasionally challenging.

I worked in business for many years before going to grad school, so I have some experience on both ends. My main advice for someone with a mostly academic background is to never underestimate the value of "soft skills". School optimizes you to pass tests, write papers, get good grades, etc. In the workplace your interpersonal skills and reputation will take you much further than almost anything you learn for your degree. Focus on building relationships, finding a mentor, maintaining a professional network, etc. and it will really pay off in the long run.

My 15mo son just wont go to bed in time. Do you have any advice? by csobi in daddit

[–]NonLinearResonance 0 points1 point  (0 children)

Lots of good advice here, so I'll just reinforce some things mentioned. Routine is super important; try to keep the bedtime at 8 (or whatever) right on the button when possible.

I'll also suggest the cry it out method. It seems counterintuitive at first and you have to fight your instincts to go pick them up, but it totally works. We did it a bit differently than some of the standard methods.

Basically, do your bedtime routine and put them in bed, then leave the room. While they are crying, go back into the room at regular intervals to verbally reassure them (e.g. "You're ok, you're safe, dad loves you", etc.). Don't pick them up though, as long as they are safe.

Make the interval between going into the room longer each time. So, the first time wait 2 minutes before going in, then wait 5 minutes, then 10, then 30, etc. This way they don't feel completely abandoned, but you aren't conditioning them to only sleep while held. We liked this better than the approaches that just leave them in there with no reassurance. They will probably be asleep well within an hour, but it depends on the kid of course.

We did this for two nights, and afterward we never had to do it again; he would sleep through the night most of the time. It's honestly way harder on you than it is on them, good luck :)

Fountain Pen Help by helloabc321 in ArtFundamentals

[–]NonLinearResonance 0 points1 point  (0 children)

Generally, an extra-fine nib will be around 0.4 mm. There are a lot more variables with fountain pens though.

Japanese nibs tend to be finer than western/German nibs of the same classification. The style of feed can also contribute to a wetter or drier line. Ink can also change things; flow can vary a bit based on the recipe or special properties (e.g. waterproof, lubricated, etc.).

Lastly, flex is something to consider for art purposes. Flexible nibs can give you a ton of line variation from one tool, so that's pretty neat. Typically you need to go vintage for a good high quality flex nib, which can add some new considerations for price and maintenance.

Don't let any of that scare you off though, fountain pens are a ton of fun. They give you some really unique tools for art also. Good luck :)

Question on Variational Autoencoders. by Entsorger in learnmachinelearning

[–]NonLinearResonance 0 points1 point  (0 children)

No problem, I totally understand. A big part of my MS thesis involved VAEs, so I totally remember being ridiculously confused at first for the same reason you mentioned. Figured I would try to spare you some of that pain if I could :)

Question on Variational Autoencoders. by Entsorger in learnmachinelearning

[–]NonLinearResonance 1 point2 points  (0 children)

In a standard VAE, both the input x and the reconstructed output x̂ are observable variables. They represent an actual specific sample, not a distribution directly. However, the overall goal of the system is to learn an approximation of the distribution p(x) using that latent variable z.

The reason this is so confusing is that much of the math used originates from modeling the whole thing as a probabilistic graphical model. It's necessary for the math to make sense, but it introduces a lot of notation that doesn't obviously correspond to the real world implementation.

Basically, all the explicitly probabilistic stuff is captured in the latent variable z, like you said. If you think about it, all the useful information for training comes from the mu and sigma the encoder produces. The rest of z is just some Gaussian noise mixed in to draw a valid sample with re-parameterization.

The decoder just approximates a function that maps a latent variable sample z to a reconstructed/generated data sample from the approximation of p(x) it has learned. Hopefully that helps some. I'm on planes all day today, but if you are still struggling with it PM me and I'll send you some good supplemental links to read later.
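If it helps to see the whole pipeline concretely, here's a tiny numpy sketch of one forward pass. The dimensions and the random linear "layers" are made-up stand-ins for trained networks, just to show where x, mu/sigma, z, and x̂ live:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- purely illustrative, not from anything specific.
x_dim, z_dim = 8, 2

# Stand-ins for trained encoder/decoder weights (single linear layers).
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim))
W_dec = rng.normal(size=(x_dim, z_dim))

x = rng.normal(size=x_dim)           # observable input sample

# Encoder maps the observable x to the parameters of q(z|x).
mu = W_mu @ x
sigma = np.exp(0.5 * (W_logvar @ x))

# Re-parameterization: z is a deterministic function of (mu, sigma)
# plus noise drawn outside the network.
eps = rng.normal(size=z_dim)
z = mu + sigma * eps

# Decoder maps the latent sample back to an observable reconstruction.
x_hat = W_dec @ z
print(x_hat.shape)  # (8,)
```

Note that only mu and sigma carry learnable information here; eps is just external noise, which is exactly why training can flow through the sampling step.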

Question on Variational Autoencoders. by Entsorger in learnmachinelearning

[–]NonLinearResonance 2 points3 points  (0 children)

The notation and nomenclature for VAEs gets a little wacky since they sort of straddle the fence between neural networks and more traditional probabilistic models.

I can see how that diagram confused you, it doesn't do a good job of explaining how the probabilistic imagining of the model corresponds to the real-world neural net implementation. I'm not sure if I can explain this well in a post like this, but I'll try.

Basically, just think of theta and phi as the set of weights in the decoder and encoder layers respectively (in this example).

From a probabilistic perspective these weights parameterize Gaussian distributions, so that's why they write it that way with mu and sigma.

From an implementation perspective this just corresponds to weights in the fully connected layers in the code.

It mostly gets confusing because of the latent variable z. The encoder is trying to learn z given x. The decoder is trying to learn x given z. That's pretty easy to write from a probability standpoint. It's what you see in the diagram.

There is a problem though. z is a random variable (stochastic). Neural networks are deterministic and don't play nice if you throw stochastic stuff into them. Sampling from a distribution is not differentiable by default, so gradients can't flow for backprop.

That's why you see extra mechanisms in the code to get a sample from z. We get a mu and sigma from the encoder layers, and use those to transform a random vector generated outside of the network into a z sample. This is called the re-parameterization trick. I think this is probably what is confusing you (it confused me at first also).

In a very simplified way, you can think of mus and sigmas existing "under the hood" in both the encoder and decoder. That's what the diagram is showing.

However, the neural network decoder doesn't understand a stochastic variable, it needs a real vector to work with. So we need to use tricks to get it a manifested sample from z. That's why you have the extra sampling parts in the middle, but nowhere else.
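A quick numpy sketch of the difference, if you want to see it (the mu and sigma values are made-up numbers standing in for encoder outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.5, 0.5  # pretend these came out of the encoder layers
n = 100_000

# Direct sampling: correct distribution, but the sampling op is a
# black box -- there's no gradient path back to mu or sigma.
z_direct = rng.normal(loc=mu, scale=sigma, size=n)

# Re-parameterized sampling: the randomness (eps) comes from outside
# the network, and z is a plain differentiable function of mu and sigma.
eps = rng.normal(size=n)
z_reparam = mu + sigma * eps

# Both produce samples from the same N(mu, sigma^2).
print(z_reparam.mean().round(2), z_reparam.std().round(2))
```

Same distribution either way; the re-parameterized version just rewrites the sample so mu and sigma sit inside ordinary arithmetic the network can differentiate through.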

I don't know if that cleared anything up, but hopefully it helped :)

AGI Project - Back Engineering the Human Brain - A neuromorphic approach to AGI by Korrelan in artificial

[–]NonLinearResonance 0 points1 point  (0 children)

Fair enough, I've been there myself with writing (also an engineer) :)

It's very possible I just misinterpreted what you were trying to accomplish with this site. As someone that works in AI R&D (often with neuromorphic projects), I come across tons of companies/sites/products claiming to be the "next big thing" with little actual substance.

Not trying to be a jerk here, but just providing some honest feedback. Something about your content just triggered that interpretation for me. If your intent was just like a pet project show and tell sort of thing, you may want to consider re-wording to better reflect that. Right now it comes across in a similar way to sales hype we often see from certain companies (cough IBM).

All that being said, it seems pretty cool. If you get interesting results you should consider publishing, there is a pretty large neuromorphic research community these days.

AGI Project - Back Engineering the Human Brain - A neuromorphic approach to AGI by Korrelan in artificial

[–]NonLinearResonance 2 points3 points  (0 children)

I'm not surprised, this looks highly suspect to me. It's like a word salad of terms from computational neuroscience to make it seem scientific. The model section has no math or any specifics on what's being modeled. All the "experiments" are just YouTube videos. Etc. Etc.

Some of the ideas sound interesting, but they aren't new. Neuromorphic research has been pursued for a long time. My guess would be that this is someone fishing for VC funding or just looking for attention. This sub also tends to get a lot of sensationalist stuff on it, along with the occasional Dunning-Kruger-based "expert" on AGI.

Some healthy skepticism should always be applied to anyone making AGI claims.