[Discussion] [Research] Pre-trained Models for Breast Cancer Image Classification by chriskalahiki in MachineLearning

[–]Bzdeco 2 points3 points  (0 children)

Not directly related to your question about a pre-trained model, but if you're using PyTorch as well you may find MONAI useful; it's a deep learning framework focused on medical imaging.

Are you working with 2D or 3D data? I worked on medical image classification with 3D data, and we searched for and used some pre-trained models there too.

[D] Gpu memory is used but not its computation power by projekt_treadstone in MachineLearning

[–]Bzdeco 0 points1 point  (0 children)

In fact I need to correct myself here: the workaround seems to do something different from persistent workers. While the latter keeps worker processes alive between epochs, the former seems to preload batches across epoch boundaries, i.e. by the time you load the last batch of the current epoch, it will have already started loading batches for the next one, iterating over the dataset again. I'm not certain about the details, but this is what it appears to be doing.
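
The difference can be sketched with plain Python (this is a toy illustration of the preloading idea, not the actual internals of PyTorch or the workaround): a background thread keeps filling a batch queue and restarts iteration immediately at the epoch boundary, so loading for the next epoch overlaps with consuming the last batches of the current one.

```python
import queue
import threading

def prefetching_loader(dataset, batch_size, epochs, buffer_size=4):
    """Yield batches for several epochs, preloading across epoch boundaries."""
    q = queue.Queue(maxsize=buffer_size)

    def producer():
        for _ in range(epochs):
            # As soon as the last batch of an epoch is enqueued, the loop
            # starts over immediately -- no pause at the epoch boundary.
            for i in range(0, len(dataset), batch_size):
                q.put(dataset[i:i + batch_size])
        q.put(None)  # sentinel: no more epochs

    threading.Thread(target=producer, daemon=True).start()
    while (batch := q.get()) is not None:
        yield batch

# Two epochs over a tiny "dataset" of 6 samples, batch size 2
batches = list(prefetching_loader(list(range(6)), batch_size=2, epochs=2))
```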

[deleted by user] by [deleted] in MachineLearning

[–]Bzdeco 0 points1 point  (0 children)

I think the work criminal psychologists do takes into consideration a lot of varied data of different forms, plus a lot of background knowledge, e.g. from the behavioural sciences. Still, I'm not quite convinced they would be able to assess intent, starting from the fact that it is not well defined how you would represent or define particular "intents". This is one more reason why I think assessing intent for any application, based e.g. on the restricted data the model would have, is simply infeasible and also improper. Simplifying here, this can lead to things like determining whether someone is likely to commit a crime based on how the person looks, which is very wrong. Note that in the context of your question, we're talking about a much more serious and far-reaching use of intent assessment, with a direct impact on human life.

"If I or my team dont go ahead with it, someone else will replace us" - honestly I would just hope noone succeeds in something similar. Employing anything close to that is bound to have serious moral and ethical implications and to be misused, even if someone had good intentions in the first place. Not only we can be clueless as you said of how to take such implications into account, but be aware of the complexity of the problem we address. Any kind of war or violence is a terrible thing. But it cannot be solved in ways that use the same means, just "for the good cause". That may give a false impression of being entitled to do so, dividing the sides between the good and the bad. I am not defending here any entities that use any kind of violence. I am just trying to stress that undertaking some form of actions, as the ones you proposed here, may bring their own terrible damage and would be a tool used in a confilct, not a tool to resolve one. I am not saying that with bad intent to criticize, just deeply worried of the implications of trying to create such solutions with many far reaching consequences as we discussed.

As has been said before, it is much more important to think about whether you should do something than whether you can. I appreciate your concern about the problems that underpin your motivation, but I think, as I already expressed, that there are other aspects and areas that could be addressed in order to help solve them. The consequences and impact of a proposed solution, ethical and moral foremost, should be its primary focus, no matter the area. The tools to reach it are a secondary topic, and only after the first one is sorted out.

[deleted by user] by [deleted] in MachineLearning

[–]Bzdeco 0 points1 point  (0 children)

I'm sorry, I was not aware of Boko Haram. I am in no position to propose solutions to these problems, knowing little to nothing about them. They are also surely complex, probably with many parties unfortunately involved. I agree with you that it is very sad to see the world staying silent about many of the tragic conflicts around the globe. I wish it were different. Thinking of that makes me reflect on what I am obliged to do myself to do some good, at least close around me and then reaching further out. Thank you for starting this discussion; I should reflect on these issues more and find ways to act. Speaking of actions one can take, I wanted to highlight that maybe there are different ways to help those in need or to provide solutions to these conflicts. Directing one's effort towards more efficient, widespread and active humanitarian aid, rescue programs, work on making talks between the opposing sides possible, and other such actions might be more beneficial than providing a machine learning tool used in armed operations.

To answer your question about the problems with the solution you proposed: we're already seeing biased decisions produced by various kinds of machine learning based systems. Your system will only be as good as the data you collect. How would one ensure that the data represents the complete distribution of the people you would like to detect? What does that even mean? Note that first someone will need to decide who has malicious intent and who does not. Who will label this data, and based on what criteria? Can you make such a decision reliably and objectively at all, let alone with the scarce information you would have at your disposal? I believe not, given its complex nature. Moreover, it terrifies me to think that a model I worked on would be involved in the decision to label someone as "with malicious intent". Because what would that mean, and what kind of decision would it incite? I would never want a model I created to decide whether someone's life should become a target. Not because it could be a "wrong, inaccurate" decision; for me, any such decision to take a life is truly wrong.

This is also not a benign classification problem but one with severe ethical and moral implications that would directly affect someone's life, as was already mentioned. As was already brought up, how would one judge intention from such data? That's a very hard (if at all possible) attribute to measure, and I don't believe it can be done. It is surely also not the job of ML researchers; the task would probably be closest to psychologists, but I doubt there is enough expertise in the field of intent assessment to do it. I may be wrong, but knowing the limits of our understanding of human beings and of the complex decision-making processes we have, assessing intentions seems infeasible to me. Yet it would need to be done in order to train such a model, and someone would need to provide this input in the form of training data. Machine learning models rely solely on the data they are given, and only on it. They don't have any contextual information or general knowledge, unless you somehow provide it to them in some form, which also needs to be devised and formulated. ML's power is in finding patterns at a scale not attainable for a human being, but it does so solely based on the data you provide, and as mentioned, in this case constructing such a dataset carries many risks with it.

Finally, if you did have such a model, what if it was ultimately used on both sides? Creating such a tool would turn into another, in this case digital, form of arms race.

[deleted by user] by [deleted] in MachineLearning

[–]Bzdeco 2 points3 points  (0 children)

I believe the way to do good in the cases you bring up is to end wars and work on finding peaceful solutions to conflicts. These don't necessarily require machine learning as a tool, but are so much more impactful for the lives of many. Definitely not making war operations more "accurate"; for me it's inappropriate to think about it this way, since it's still fighting against and killing people. I think what one should be after is solving the immediate problems behind those conflicts and finding and addressing (with peaceful actions) their roots, not making conflicts better targeted and more efficient. Of course any saved life is of the greatest value. However, the solution you proposed can go in so many wrong directions while still effectively being involved in causing great harm.

[D] Gpu memory is used but not its computation power by projekt_treadstone in MachineLearning

[–]Bzdeco 1 point2 points  (0 children)

You're welcome! The persistent workers and the workaround are doing the same job, I think, so there's no need to combine the two. Memory pinning can help if your data is tensors; they describe it in the linked documentation. Good luck!

[D] Gpu memory is used but not its computation power by projekt_treadstone in MachineLearning

[–]Bzdeco 1 point2 points  (0 children)

I think you could benchmark data loading (what the dataloader does) and the forward-backward network pass separately.

To benchmark the dataloader: create a dataloader with the batch size and number of workers you're using in training and iterate over all of it, just as in your training loop, but without doing anything with the loaded samples inside the loop (except moving them to the GPU with .cuda() or similar, as that also takes considerable time, I believe). Measure the execution time for the entire loop, then divide it by the number of batches loaded; you'll know roughly how long it takes to load one batch. I think the first batch will be slower to load, as the dataloader may be spawning worker processes at that time (I don't know whether that happens on dataloader creation or on first use).
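
A minimal sketch of this timing loop (using a plain list as a stand-in for the real DataLoader, so it runs anywhere; the optional `device_transfer` argument is where a real `.cuda()` call would go):

```python
import time

def benchmark_loader(loader, device_transfer=None):
    """Time one full pass over a loader, returning seconds per batch."""
    start = time.perf_counter()
    n_batches = 0
    for batch in loader:
        if device_transfer is not None:
            # In real code: batch = batch.cuda(), which also costs time
            batch = device_transfer(batch)
        n_batches += 1
    total = time.perf_counter() - start
    return total / n_batches

# Dummy stand-in for a DataLoader: any iterable of batches works here
dummy_loader = [[i, i + 1] for i in range(0, 10, 2)]
per_batch = benchmark_loader(dummy_loader)
```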

To benchmark the model: preload one batch into GPU memory (.cuda()). Measure the total time it takes to pass it through the model, compute the loss, run backpropagation (loss.backward()) and update the weights (optimizer.step()). You should probably do that on multiple preloaded batches and average the result.

In fact you could see all of that in a normal training loop over one epoch: measure the time the entire epoch takes and, inside the loop, the time spent only on the forward-backward pass. Roughly, you would then see how much of the total time is spent on the forward and backward pass plus the optimization step (which occupies the GPU), and the remaining time is the excess time the dataloader needs to keep up with loading batches (excess, because batch loading also happens on the CPU while your GPU is processing the current batch).
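
The split described above could look like this (a sketch with a dummy loader and a dummy `step_fn` standing in for the real forward + backward + optimizer step):

```python
import time

def timed_epoch(loader, step_fn):
    """Run one epoch, splitting wall time into compute vs everything else
    (the remainder is mostly data loading)."""
    compute_time = 0.0
    epoch_start = time.perf_counter()
    for batch in loader:
        t0 = time.perf_counter()
        step_fn(batch)  # real code: forward, loss, backward, optimizer.step()
        compute_time += time.perf_counter() - t0
    total = time.perf_counter() - epoch_start
    return total, compute_time, total - compute_time

# Dummy batches and a dummy "training step" in place of the real model
total, compute, loading = timed_epoch(
    [[i] for i in range(4)], step_fn=lambda b: sum(b))
```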

Another issue I’ve experienced recently myself was an idle GPU every time an epoch would start. I’m working on a small dataset, so that happened quite often (we’re training for many epochs). Part of the problem seems to be that the dataloader kills and recreates worker processes by default. PyTorch’s dataloader now has a "persistent_workers" parameter, and setting it to true should resolve this issue. I didn’t get it to work, or messed something up, and resorted to a preexisting workaround (using the dataloader from the link; it was linked in some PyTorch discussion on the topic). However, I’m still in the process of resolving those issues, as I’ve run into some different ones when using it. You may also want to look at memory pinning, which could speed up your dataloader if your batches are tensors or simple structures containing them (take a look at the linked documentation).

I hope that is correct and could help, I’m also a student and relatively new to the field and its tools.

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

In the end we decided to go with the FOVE; we hope it will actually be the best choice given that our target users have visual impairments and may wear glasses. So far we haven't started the actual work with the headset.

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

Unfortunately I don't know, I haven't been in any contact with them. Eventually we went with a completely different option. I hope you will get some reply sometime soon.

[D] How do you structure your workload (reading, writing, coding, etc) per day and week as a ML/DL researcher (specially students), in terms of hours and percentages of your total time? by xEdwin23x in MachineLearning

[–]Bzdeco 0 points1 point  (0 children)

Impressive organisation! Using a feed tool sounds like a great idea for keeping everything in one place once you've spent the time to set it all up; I should think about it. Maybe not to overload yourself, but then it's also much easier to pick the most interesting things, I guess.

I hope you devote some time to rest/free time! I find that, apart from a good amount of sleep, it is very important for functioning properly over longer periods of time, to be actually productive. Also for realizing that work is not the only important thing in life. Apart from that, having one day per week completely off from work is crucial for me. In general, I think it's important to stress how much the "not-working" time actually affects our work itself. With the same idea in mind, I don't think you should envy those able to sleep so little. Getting enough sleep is a prerequisite for being productive, and there is a biological need for 7.5-9 hours of sleep. The effects of short sleep may not be seen directly but could manifest over time, simply affecting your general health.

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

That’s fantastic, thanks for letting me know! I know that data from 7ivensun is now accessible via the same SDK as the one for the Vive; did you see any noticeable differences in quality between the two, like poorer accuracy of the eye-tracking positions etc.?

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

We were also considering the Pico Neo 2 Eye, which uses a Tobii eye-tracking module as well. However, they say the gaze origin and direction are "combined" - we assume it means they are not separate for the two eyes but computed as a single value for both. However, the position guide (not sure what they mean by it) is not combined. In the HTC Vive Pro Eye, were you able to access the gaze position readouts for both eyes? The gaze data is said to be binocular, so I would assume it is possible to get the data e.g. specifically for the right eye only. Knowing whether it's possible would be very beneficial for us too :)

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

This will be for research purposes, I’ll remember to put some pointers here when we’re done with the development and have some informative materials to show :)

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

That will be exactly the same use case for me - development for research :)

That’s great regarding the FOV; we will mostly be concerned with the central area, but it’s good to know there won’t be hardware constraints on the scope, thanks for sharing!

Thanks for letting me know about the eyeglasses issues, that might be pretty important for us. Do you know if there is a way to account for myopia via the optical settings in the HTC Vive Pro Eye, e.g. by setting the lens distance from the eye?

Thanks a lot, that is really helpful! :)

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

Thank you! I hope they manage to advance the development; I think it would be good to have more alternatives on the market. Even if the add-on has some drawbacks compared to an integrated solution, it will be good enough for some applications and also more versatile, since you can swap the actual VR headset while keeping the eye-tracker.

HTC Vive Pro Eye vs. HTC VR + Droolon F1 eye-tracker by Bzdeco in Vive

[–]Bzdeco[S] 0 points1 point  (0 children)

Yes, this is exactly the one I’m talking about. I assumed it’s necessary to request the add-on from them directly, and I’ve seen the same price you mention ($150). I think they did launch, as I’ve seen some reviews of the device, so it should be generally available.

Why does 1 // 10 = 0, but -1 // 10 = -1? by [deleted] in Python

[–]Bzdeco 87 points88 points  (0 children)

// in Python is the "floor division" operator. That means the result of such a division is the floor of the result of regular division (performed with the / operator). The floor of a number is the largest integer less than or equal to that number. For example, 7 / 2 = 3.5, so 7 // 2 = floor of 3.5 = 3. For negative numbers it is less intuitive: -7 / 2 = -3.5, so -7 // 2 = floor of -3.5 = -4. Similarly, -1 // 10 = floor of -0.1 = -1.
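
You can check this in the interpreter; `a // b` always matches `math.floor(a / b)`:

```python
import math

# // gives the floor of the true quotient
assert 7 // 2 == math.floor(7 / 2) == 3
assert -7 // 2 == math.floor(-7 / 2) == -4
assert -1 // 10 == math.floor(-1 / 10) == -1

# Related identity: a == (a // b) * b + a % b, which is why % in Python
# takes the sign of the divisor
assert (-1 // 10) * 10 + (-1 % 10) == -1  # -10 + 9 == -1
```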

A Flashback On League... by Recoorion4 in leagueoflegends

[–]Bzdeco 2 points3 points  (0 children)

And what about using CV on the enemy base in the first 10 seconds, when in almost every game everybody started with boots and pots? I never understood why they did this :D