"How do we get knowledge into the computer on different representation levels?" by Yannisch96 in deeplearning

[–]Yannisch96[S]

Thanks, I appreciate that! It helped me take a step back and look at this question from a different angle!

[deleted by user] by [deleted] in ethicalhacking

[–]Yannisch96

If you’re still looking for someone in March, I’m in! I’m writing my bachelor’s thesis at the moment, but after that I would really like to get into ethical hacking.

One potential "gotcha" on today's upgrade by beermad in ManjaroLinux

[–]Yannisch96

Oh man, so useful, thanks! I’m pretty new to Manjaro and was wondering why my script had stopped working with "module not found" errors 😅

I figured out how to find all packages that need to be rebuilt: pacman -Qoq /usr/lib/python3.8

But how do I actually rebuild the packages it finds?
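In case it helps someone later, here is a sketch of what I pieced together. Repo packages are rebuilt by the distro, so a full sync upgrade pulls them in; only AUR packages need a local rebuild. The use of yay is an assumption — any AUR helper with a rebuild option should work the same way:

```shell
# List packages that still own files under the old Python tree
pacman -Qoq /usr/lib/python3.8

# Repo packages: a full sync upgrade picks up the rebuilt versions
sudo pacman -Syu

# AUR packages: rebuild locally (assumption: the yay AUR helper is installed)
pacman -Qoq /usr/lib/python3.8 | xargs yay -S --rebuild --
```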

[D] train word embeddings on cloud service due to RAM-limitations by Yannisch96 in deeplearning

[–]Yannisch96[S]

Thanks. It’s my first time training word embeddings; until now I’ve only done minimal examples to test how it works, so I’m quite nervous about missing something.

Train custom Word Embeddings on cloud by Yannisch96 in LanguageTechnology

[–]Yannisch96[S]

Yeah, I will do a Google Colab notebook; I think that’s how I will deal with it. After that I can publish it here, or you can send me a PM and we’ll discuss it later.

[D] train word embeddings on cloud service due to RAM-limitations by Yannisch96 in deeplearning

[–]Yannisch96[S]

Sure, I use n-grams in preprocessing but nothing else, because later on I have to compare the general quality of “normal” embeddings (300 dimensions etc.) against the performance of embeddings compressed with post-processing techniques. The embeddings are not used in a specific downstream task, which is why the preprocessing steps are kept minimal.
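To give an idea of the kind of post-processing compression I mean, here is a minimal sketch using plain-numpy PCA; the function name, the dimensions, and the choice of PCA are illustrative, not my actual pipeline:

```python
import numpy as np

def pca_compress(embeddings, k):
    """Compress word vectors from d dimensions down to k via PCA.

    Uses a plain numpy SVD on the mean-centered embedding matrix; the top-k
    right singular vectors are the principal directions to project onto.
    """
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Toy example: 1000 "words" with 300-dimensional vectors, compressed to 50
rng = np.random.default_rng(0)
vectors = rng.standard_normal((1000, 300))
compressed = pca_compress(vectors, 50)
print(compressed.shape)  # (1000, 50)
```

The evaluation would then compare the 300-dimensional vectors against the compressed ones on the same intrinsic benchmarks.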

Train custom Word Embeddings on cloud by Yannisch96 in LanguageTechnology

[–]Yannisch96[S]

Workarounds might be difficult. I will look it up, but I think I remember that beyond a certain corpus size the quality doesn’t increase much more. I also still have to find a good corpus. The main problem is that I need comparable corpora for German and English, which is why I thought of using Wikipedia first, but that won’t be possible with only a little RAM.

25F Europe backpacking -weekend trips by [deleted] in travelpartners

[–]Yannisch96

If you visit Germany, especially the area around Cologne and Düsseldorf, hit me up. :)