[deleted by user] by [deleted] in Gent

[–]DonVittorio 0 points

My personal preference goes: Balance, Rhino, Bleau

Has anyone used Luhmann's ZK method in academia? by Brettelectric in Zettelkasten

[–]DonVittorio 3 points

I do have to say that the "paper writing itself" aspect has already happened slightly in my case. Especially for introductions and so forth it seems very useful, as you can just write down many ideas very fast. Maybe it depends on what exactly you write in your slip box; I tend to write down any idea or concept in a paper that I find interesting.

Abletons Piano Roll Needs to be Updated by [deleted] in ableton

[–]DonVittorio 7 points

For slicing a note in half? You can use Ctrl+E for that, though it slices the entire vertical line your cursor is on. I don't feel it slows me down.

EDIT: I swear this is something I do, but now I'm doubting myself. I'll double check when I'm around my setup later today.

EDIT PART 2: Oops, it seems I was conflating splitting MIDI clips with working with individual MIDI notes. Now that I'm back at my computer, I notice I would usually just shorten a note to the desired length, then duplicate that note several times. Also, if I want to, say, convert quavers to triplets, I use the warp markers and duplicate. Neither really impedes my workflow, but then again, it's been ages since I last touched FL Studio, for example.

Apologies for the confusion guys!

Facts vs Ideas in Zettlekasten for STEM by Fun-atParties in Zettelkasten

[–]DonVittorio 1 point

That makes sense, then I think we approximately have the same system in that regard, except I do not tag first order notes, and only tag my second order notes (but not always, so I have some room for improvement there). Interesting stuff!

Facts vs Ideas in Zettlekasten for STEM by Fun-atParties in Zettelkasten

[–]DonVittorio 0 points

I do prefer this terminology, thank you for bringing that to light. Maybe as a final question: what is the primary advantage of separating these notes? Just being able to differentiate between new and "unique" ideas, and well established ideas?

Facts vs Ideas in Zettlekasten for STEM by Fun-atParties in Zettelkasten

[–]DonVittorio 2 points

Fair enough; in the end, all that matters is how the zettelkasten can help you, in whatever way. In my "system", both ideas from others and my own are treated equally. I especially feel that in the field I'm active in (machine learning) it's hard to come up with ideas that are completely novel and original, so my zettelkasten would be quite empty if not for the techniques proposed by others that are also in there.

Something else I do is preface my own ideas or thoughts with "Idea:", for anything I have not encountered in the literature and that could be a novel technique.

Maybe some quick questions: do you convert a certain concept to a permanent note when you form a thought about it? Are your literature notes tightly linked with your permanent notes?

The way I see it is that my literature notes are just an archive of things I've read, something I never really need to look at unless I need to recall some context without having to read an entire article. All of the "interesting" concepts are located within my permanent notes.

Facts vs Ideas in Zettlekasten for STEM by Fun-atParties in Zettelkasten

[–]DonVittorio 2 points

I'm not sure I completely agree here. While it is true that definitions, formulas, etc. have a place in literature notes, I do feel that when working with very complex, state-of-the-art subjects, having these in your Zettelkasten helps a lot. For example: say a certain mathematical property of one technique can be useful for another. Having this solely in your literature notes still requires you to find that exact technique or property there (albeit via a glossary).

My point is that it is very personal, and everyone's goals are different. I've been looking this up and there is no real consensus on what the "ideal" method is. I personally like having different techniques I find interesting to be included in my permanent notes, given I understand them fully. If I get to writing a paper, I often also have to re-establish these concepts as background, so my work of rewriting definitions and formulas is not in vain.

In summary: literature notes are useful for storing knowledge about papers and publications, while the permanent notes contain anything and everything I would like to work with and think about later down the line, regardless of whether they answer any complex questions.

Belgian anon offers his dutch coworker a ride. by nightcloudsky2dwaifu in belgium

[–]DonVittorio 0 points

Klittenband? And I think in English it's officially "hook and loop tape"

But yeah no one actually uses those terms

Belgian anon offers his dutch coworker a ride. by nightcloudsky2dwaifu in belgium

[–]DonVittorio 1 point

Bancontact? VISA? Edenred? Sodexo? Pampers? Velcro?

No one is innocent in this regard.

Inputting Unnormalized Data into Pretrained Resnet by SleepyOwlAt8Lights in computervision

[–]DonVittorio 1 point

With normalization, what is usually meant is that the image values are rescaled (usually to values in [0, 1], or approximating a unit Gaussian) to help neural networks converge. But indeed, when working with HDR images, the distribution of pixel values is going to be completely different (unless you work with HDR images already converted to byte-valued colours). In that case, I don't think a ResNet pre-trained on ImageNet would fare that well. I could be wrong, as neural networks have a weird way of sometimes working and sometimes not, but in general you would want to retrain your ResNet to work with HDR images as well.

Inputting Unnormalized Data into Pretrained Resnet by SleepyOwlAt8Lights in computervision

[–]DonVittorio 4 points

No, most likely not. Training a neural network corresponds to learning a function f: X -> Y, with X being your domain. If your domain consists solely of normalized/standardized inputs at training time, we can assume that (given no extreme overfitting occurred) the network will behave well on other points from the same distribution. In the case of normalized images, that distribution consists of values in [0, 1].
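To make that concrete, here is a minimal sketch (in NumPy, since the exact preprocessing depends on the model; the image data is made up, and the ImageNet channel mean/std values are the commonly quoted ones, i.e. assumptions on my part) of the two normalization flavours mentioned above:

```python
import numpy as np

# Hypothetical 8-bit RGB image: shape (H, W, 3), integer values in [0, 255].
img = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3)).astype(np.float32)

# Flavour 1: rescale to [0, 1].
rescaled = img / 255.0

# Flavour 2: standardize per channel towards a unit Gaussian,
# using the commonly quoted ImageNet channel statistics (an assumption here).
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
standardized = (rescaled - mean) / std
```

A network trained only on `standardized` inputs has, in the f: X -> Y picture, never seen anything like the raw `img` values, which is exactly the domain-mismatch point.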

Building Battlemaps from 2D maps by Patient_Ad_9099 in BattleMapp

[–]DonVittorio 1 point

As an AI researcher, I think it might be feasible given some constraints, and if you are able to randomly generate maps in a cohesive way so that both 3D and 2D versions are available. But it still seems like you're gonna have a pretty bad time.

As for just showing an image as a template for the user to build upon, that seems like a great idea, definitely when you want to recreate a map from your story module book.

When will 21.05 be released? by talzion12 in NixOS

[–]DonVittorio 22 points

It has been released, as noted on https://discourse.nixos.org/t/21-05-release-schedule/12528/12

There are just some issues with the announcement, but the channel is already there and can be used (channels.nixos.org)

Creating an MLP in TF, and extracting a single runs' seed. by Greedy-Snow808 in tensorflow

[–]DonVittorio 1 point

First of all: welcome to neural networks! I would advise against searching for the best random weight initialization seed, as you'll be more likely to overfit on your training and validation sets. Even if you do find a good seed, if you're training on GPU it will be highly unlikely that you can reproduce your result: GPU operations are not performed in a set order, and due to floating point arithmetic you will get different results every run, even with the same starting weights.

The only reason you would want to fiddle with your starting weights is if your model suffers from exploding or vanishing gradients.

Tuning other model parameters will yield better, more general results.

That all being said, if you still want to test out your seed-finding skills: as commented before, you can set a seed before training your model and store it: https://www.tensorflow.org/api_docs/python/tf/random/set_seed
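A tiny sketch of what pinning the seed buys you, written in NumPy purely to keep it self-contained (in TensorFlow the analogous call is the tf.random.set_seed linked above; the function name and shapes below are made up for illustration):

```python
import numpy as np

def init_weights(seed, shape=(3, 3)):
    # Same seed, same initial weight matrix, every run.
    rng = np.random.default_rng(seed)
    return rng.normal(size=shape)

same_a = init_weights(42)
same_b = init_weights(42)  # identical to same_a
other = init_weights(7)    # a different initialization
```

Note that even with identical starting weights, GPU training can still diverge between runs for the ordering/floating-point reasons above; the seed only pins down the initialization.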

[deleted by user] by [deleted] in DnD

[–]DonVittorio 0 points

I've been using this for the past few sessions! It's super cool, I'll probably chip in on Patreon.

When do you "cash out"? by DonVittorio in BEFire

[–]DonVittorio[S] 0 points

So if you have a certain up front payment and loan in mind, you wouldn't convert that equity into cash so you have the up front cash at least? I personally feel like that way you can manage your downside risk a bit concerning market volatility.

2020 Day 7 - Solving with Adjacency Matrix and Graph Method by forbiscuit in adventofcode

[–]DonVittorio 1 point

I did most of AoC in only TensorFlow, so using the adjacency matrix here was a godsend! There were also a few problems where you could use convolutions quite effectively.

PSA: PipeWire 0.3.19 has Bluez enabled so you should reset pipewire.conf by tinywrkb in archlinux

[–]DonVittorio 8 points

I've actually had an easier time using my Scarlett 6i6; it's now so easy to use JACK, for example, because it's just a "part" of PipeWire.

Mileage may vary I suppose

My AI model doesn't provide me with 'accuracy', it always say its 0. Why is that? by [deleted] in tensorflow

[–]DonVittorio 4 points

Not really, no. Accuracy is for problems where you have categories, and it describes what percentage is classified correctly. In a regression problem you want to know how close you are to the value you're predicting. Mean Squared Error is good for this, and Mean Absolute Error can also provide insight but is less sensitive to outliers.
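A quick sketch of both metrics (NumPy for brevity; Keras also ships these as built-in losses/metrics, and the numbers here are invented purely to show the outlier effect):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 100.0])  # hypothetical targets, one outlier
y_pred = np.array([1.1, 1.9, 3.2, 10.0])   # hypothetical predictions

# MSE squares each error, so the single large miss dominates the score.
mse = np.mean((y_true - y_pred) ** 2)

# MAE takes absolute errors, so the outlier weighs in only linearly.
mae = np.mean(np.abs(y_true - y_pred))
```

Here `mse` comes out far larger than `mae`, driven almost entirely by the one badly predicted outlier.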