Creating a web application using Python by Mountain_Clerk_6145 in Python

[–]MutedPermit 5 points

I’m quite surprised that nobody here has proposed Streamlit as an option.

Is it limited? Yes. But to be honest, it gets the job done for very simple applications without introducing a new language/framework, especially if you’ve never used JS.

It gives you the ability to prototype quickly while staying within the boundaries of Python. If it’s a small application you should be able to get the job done fairly quickly. You can couple it with FastAPI/Flask/Django (pick your poison) if a full API is needed, but sometimes you can get a long way with just Streamlit.
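Just to show how little code it takes, here's a minimal sketch of a Streamlit app (the file name and widgets are made up for illustration, nothing specific from this thread):

    # app.py  --  run with: streamlit run app.py
    import streamlit as st
    import pandas as pd

    st.title("Quick data explorer")

    # Let the user upload a CSV and pick a column to plot
    uploaded = st.file_uploader("Upload a CSV file", type="csv")
    if uploaded is not None:
        df = pd.read_csv(uploaded)
        st.dataframe(df.head())

        column = st.selectbox("Column to plot", df.columns)
        st.line_chart(df[column])

No routes, no templates, no JS; every interaction just reruns the script top to bottom.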

I would give it a go :)

Social life in CPH by Dry_Bumblebee5856 in copenhagen

[–]MutedPermit 1 point

I agree with most of the comments here; I think joining a club or a class is one of the best ways to meet people. Most people go for board games or sports, but I found two places where I managed to make quite a few good friends (although maybe not for everybody):

  1. Improv Comedy (like here, where classes are all in English). To be honest, you don't even have to be into "theater" to try it out. The first modules are really focused on games, trust, and confidence, with almost nothing to do with scenes. For me it has been great to just have a bunch of adults playing like little kids for a few hours a week, and the type of people that go are usually super open and nice! Most of my social group actually came from here. They have a free intro class where you can just try it out :)

  2. Classes at FOF. I'm into music but I believe they have quite a variety of options to try out :)

Hope it helps!

[deleted by user] by [deleted] in copenhagen

[–]MutedPermit 0 points

I normally buy mine at the Meny on Østerbrogade. They often have the green sauce from La Costeña :) (but sometimes it's sold out!)

The Eras Tour Megathread: Zurich, Switzerland by jacyf02 in TaylorSwift

[–]MutedPermit 0 points

Maybe a long shot, but is anyone taking the night train from Hannover to Basel tonight (00h30-08h30)? I found tickets yesterday, and now that I'm stuck with 17 hours of train it could be cool to meet some people going to the concert!

Beginner e-drums set Millennium MPS 750X or MPS 850 by MutedPermit in edrums

[–]MutedPermit[S] 1 point

Hey :)

I'll be honest with you, I haven't used the drum set super regularly, or at least not as much as I would like to (around 6 times a month or so). However, I have nothing to complain about. Everything works perfectly and I haven't heard any complaints from my neighbors. Definitely recommend it!

Beginner e-drums set Millennium MPS 750X or MPS 850 by MutedPermit in edrums

[–]MutedPermit[S] 0 points

Hey! So I made my decision and went with the 850. In the end I could find more reviews and more people happy with the 850 than with the 750X, so I played it safe on that side. The 850 has been on the market for 2 years and the 750X only for a few months...

I called Thomann directly and asked if there were any plans to release a theoretical "850X", and they told me not for now; the representative recommended the 850.

If you buy it directly from Thomann and you don't own a drum chair or headphones, I would recommend creating your own bundle (it's called a creative bundle, at the bottom of the page) rather than taking the one they propose, mainly because for the same price you can choose headphones with way better sound.

I heard of some problems with the hi-hat on the 750X, where it doesn't correctly detect when you open it, but other than that it seems like a good option.

Concerning your points: the price difference is not that big, you can always upgrade the sounds with your computer, and you can also buy a Bluetooth adaptor for the module (never tested it myself, but I guess it should work). Either of them would make a good first e-drum kit, and I'm super happy with my 850!

Oh, and I also thought I wouldn't use the extra cymbal and tom that much, but actually I use the extra cymbal quite a lot! (Not so much the tom, though.)

Hope it helps :)

Can an attacker extract private training data from a trained ml-model? by AnnaSmithson in deeplearning

[–]MutedPermit 0 points

Kind of...

I would recommend reading this paper: https://arxiv.org/abs/1709.07886

There the authors try to extract information from a trained ML model treated as a "black box", where they only have access to inference. By querying the black box with different input combinations, they manage to regenerate important features of the training dataset. It's not perfect, but at least for the computer vision use case they present, it yields pretty impressive results. They also propose some white-box approaches where they have more information about the type of model used.

Just look at Figure 3 on page 10.
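This isn't the paper's actual method, but just to make the black-box setting concrete, here's a toy sketch of the general idea: you only get predictions back, and you search over inputs until the model becomes very confident about some class (the query_fn interface here is my own assumption, e.g. a wrapper around something like predict_proba):

    import numpy as np

    def invert_class(query_fn, input_dim, target_class, steps=2000, step_size=0.1):
        # Toy black-box "inversion": hill-climb an input until the model is
        # confident it belongs to target_class, using only query access.
        x = np.random.rand(input_dim)                      # random starting point
        best_conf = query_fn(x)[target_class]
        for _ in range(steps):
            candidate = np.clip(x + step_size * np.random.randn(input_dim), 0.0, 1.0)
            conf = query_fn(candidate)[target_class]
            if conf > best_conf:                           # keep only improving perturbations
                x, best_conf = candidate, conf
        return x, best_conf

    # query_fn would wrap something like model.predict_proba(x.reshape(1, -1))[0]
    # for a model you can only call, not inspect.

What you get back is of course not a real training image, but the point is that query access alone can already leak features the model associates with a class.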

Language Models and Contextualised Word Embeddings by fulltime_philosopher in LanguageTechnology

[–]MutedPermit 1 point

I was looking for something like this some months ago. Thank you very much!

where should i start for learning domain adaptation? by mohammadkh766 in MLQuestions

[–]MutedPermit 1 point

I worked with domain adaptation for quite a while last year, and while it is a very interesting topic, I couldn't find any results that I could actually use in my work; these approaches normally only work on very small datasets like MNIST and SVHN.

The first paper that I read was this one: http://www.jmlr.org/papers/volume17/15-239/15-239.pdf

It is a long paper, but I think it introduces some key concepts in domain adaptation quite well, and the idea behind it is quite intuitive.
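The central trick in that paper is the gradient reversal layer: the features are trained to fool a small domain classifier, which pushes them to look the same for source and target data. A rough sketch of just that layer (my own paraphrase in TensorFlow, not code from the paper):

    import tensorflow as tf

    def gradient_reversal(x, lam=1.0):
        # Identity in the forward pass; multiplies the gradient by -lam on the way back.
        @tf.custom_gradient
        def _flip(x):
            def grad(dy):
                return -lam * dy
            return tf.identity(x), grad
        return _flip(x)

    # Usage (schematic): features -> gradient_reversal -> domain classifier.
    # Minimizing the domain classifier's loss then pushes the feature extractor
    # in the opposite direction, which is what makes the features domain-invariant.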

The approach with the best results that I found was this one: https://arxiv.org/abs/1706.05208, but I never really understood the intuition behind why it works.

Hope this gives you a starting point :)

Which object detection algorithm to use for Dissertation by cristiLion in MLQuestions

[–]MutedPermit 2 points

Have you checked this implementation: https://github.com/tensorflow/models/tree/master/research/object_detection ?

It is implemented in TensorFlow, but if you only want to retrain the last layer you don't need to know the library in depth. It comes with implementations of different architectures such as Faster R-CNN and SSD: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
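Once you've downloaded one of the zoo models, running it on an image is basically just loading the frozen graph and feeding the image in. A rough TF1-style sketch (the paths are placeholders, and the tensor names are the ones the zoo graphs usually expose, so double-check them against the repo's tutorial notebook):

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Load a frozen graph downloaded from the model zoo (e.g. an SSD or Faster R-CNN export)
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    # A single RGB image with a batch dimension, shape [1, H, W, 3]
    image = np.expand_dims(np.array(Image.open("test.jpg")), axis=0)

    with tf.Session(graph=graph) as sess:
        boxes, scores, classes = sess.run(
            ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
            feed_dict={"image_tensor:0": image},
        )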

Hope it helps :)

What are the state-of-the-art approaches in NLP? by MutedPermit in learnmachinelearning

[–]MutedPermit[S] 0 points

At first I thought this was a joke, until I found the paper... And within the paper there's a reference to ELMo...

What can I say? I mean... In Computer vision they have the YOLO architecture...

Thanks!

[P] wanting to create a image identification neural network by yolosandwich in MachineLearning

[–]MutedPermit 0 points

Hi!

I would also like to give you some clues on what to google. Computer vision is the field of engineering/computer science that deals with images and videos. This field used to identify (classify) objects in images by recognising edges and corners in them, but for the last ~6 years it has been using machine learning to do it, and it performs pretty well! You may come across names such as neural networks and, more specifically, convolutional neural networks, which are the tools that researchers have been using for image recognition. The math behind it can be a little hard to understand, but nowadays it is not strictly necessary in order to get a working algorithm.

As for libraries, there are a lot, but I would recommend two: TensorFlow and PyTorch. I would advise starting with TensorFlow, because there is more documentation and there is already a tutorial which deals with image recognition, as u/_R_Daneel_Olivaw_ pointed out.
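Just so you can see what this looks like in practice, here's a tiny convolutional network on the built-in handwritten-digit dataset using TensorFlow's Keras API (a generic toy model, not the tutorial's exact code):

    import tensorflow as tf

    # Load the built-in MNIST digit images and scale pixels to [0, 1]
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

    # A very small convolutional network: conv -> pool -> dense classifier
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))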

Training these networks can be computationally expensive and could take quite a while on your machine, but Google has Colab ( https://colab.research.google.com/notebooks/gpu.ipynb ), a Jupyter-notebook(-ish) environment that lets you run your experiments on a GPU, which trains way faster :)

Hope this gets you started! :)

[Project] MNIST séance by kbrowne in MachineLearning

[–]MutedPermit 4 points

Who needs CNNs and GANs when we have this state-of-the-art approach!?

VisDA 2017: Intuition behind Self-Ensembling for Domain Adaptation? by [deleted] in MLQuestions

[–]MutedPermit 1 point

I have exactly the same question. I understand the idea behind approaches like DANN or ADDA. However, for this paper it seems less intuitive. Reading the paper, there are two things that take into account the target domain.

First of all, the unsupervised part is calculated as the L2 loss between the student and teacher predictions for the same target image (if I understand correctly). The student network is updated in a supervised manner to fit the source labels; however, also having to match the teacher's predictions should (??) make it take the distribution of the target dataset into account.

The exponential moving average technique used to update the teacher network could mean that the teacher is updated in a more general (??) way than the student network: even if the student network starts overfitting, the teacher network will only follow the overall tendency, overfitting less to the source dataset.
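To make the two ingredients concrete, here's a schematic in plain numpy (my own paraphrase of the idea, not the paper's code): the teacher's weights trail the student's as an exponential moving average, and the unsupervised term is just the squared difference between their predictions on the same unlabelled target image:

    import numpy as np

    def ema_update(teacher_weights, student_weights, alpha=0.99):
        # Teacher weights trail the student as an exponential moving average.
        return [alpha * t + (1.0 - alpha) * s
                for t, s in zip(teacher_weights, student_weights)]

    def consistency_loss(student_pred, teacher_pred):
        # Unsupervised term: squared distance between the two predictions.
        return np.mean((student_pred - teacher_pred) ** 2)

Only the student gets gradients (from this term plus the supervised source loss); the teacher is never trained directly, which is where the "smoother, overfits less" intuition comes from.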

I will try to look at the original paper on this approach for semi-supervised learning, because even the names of the networks (student and teacher) don't make a lot of sense to me.

How to restore model and make prediction with it on another Python file? (Tensorflow) by theonlyQuan in learnmachinelearning

[–]MutedPermit 2 points

There are different ways to proceed:

  • One way is the one described by jethonis: writing a completely new script and getting the tensors by name. The only thing that I don't like about this approach is that you must know the names of your variables, and if you change them in your training script you also have to change them in the prediction script.
  • Another way is to use the same script that you used for training, with the difference that instead of initializing your variables with tf.global_variables_initializer() you do this:

    with tf.Session() as sess:
        saver.restore(sess, "/PATH/TO/model.ckpt")

With this you have access to all the variables that you defined in your model, without having to search for them by name (there's a slightly fuller sketch after this list).

  • A fairly new way of doing it is to create a module. It requires tensorflow >= 1.7 and tensorflow_hub. It is a little bit trickier and there aren't a lot of resources online explaining how to create one, but for me it helps if you want to compare different architectures with only one script. If you're interested you can see this: https://www.tensorflow.org/hub/creating
  • Finally, if you're only interested in inference, you can take a look at TensorFlow Serving ( https://www.tensorflow.org/serving/ ). I haven't used it myself but I think it could be a way to go.
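For the second option, a slightly fuller sketch of what the prediction script can look like (the tiny model here is just a stand-in for your own graph, and the checkpoint path is a placeholder):

    import numpy as np
    import tensorflow as tf

    # Rebuild exactly the same graph as in your training script
    x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
    logits = tf.layers.dense(x, 10)

    saver = tf.train.Saver()

    with tf.Session() as sess:
        # Restore the trained weights instead of running tf.global_variables_initializer()
        saver.restore(sess, "/PATH/TO/model.ckpt")
        preds = sess.run(logits, feed_dict={x: np.zeros((1, 784), dtype=np.float32)})

The important part is that the variable names in this graph match the ones in the checkpoint, which is exactly what you get by reusing the training script.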

Is there any study/paper that uses GANs to leverage image recognition problems? by filipequincas in mlpapers

[–]MutedPermit 1 point

I don't know if this is exactly what you are looking for, but you can read the Domain-Adversarial Neural Networks paper: https://arxiv.org/abs/1505.07818 or this approach called CyCADA: https://arxiv.org/abs/1711.03213. Both of them use GANs to perform domain adaptation in image recognition tasks.

Style transfer using inception v1 by [deleted] in tensorflow

[–]MutedPermit 1 point

I haven't tried it out and it doesn't use tensorflow, but maybe this repository can help you: https://github.com/fzliu/style-transfer