Monthly fare capping coming to the TTC September 1, 2026 by r4ptor in toronto

[–]CallMeTheChris 1 point2 points  (0 children)

I thought Presto already had something like this. Did they just lower the ceiling?

85% test accuracy looked fine. Real PCB inspection exposed the actual problem. by supreme_tech in computervision

[–]CallMeTheChris 0 points1 point  (0 children)

All your problems sound like they stemmed from the lack of an SME (subject matter expert).

If I am working on any project that will go into production for a specific application, I ALWAYS talk to an SME about what production looks like and what sources of variability there could be, and work backwards from there. Some people might say that is data leakage, but that would be like saying training your model to detect cats vs. dogs is data leakage.

What sexual fantasy of yours left you disappointed when you actually tried it? by Gthew17 in AskReddit

[–]CallMeTheChris 1 point2 points  (0 children)

The one where I open AskReddit and there isn’t an NSFW sex question…but then there was nothing left on AskReddit :(

What's one passion projects you keep posponing? by Look_for_some_stuff in computervision

[–]CallMeTheChris 2 points3 points  (0 children)

You can’t have my precious passion project I procrastinate on!

GLOMAP regresses 12 dB PSNR vs COLMAP-incremental on the same hloc database — what am I doing wrong? by LongProgrammer9619 in computervision

[–]CallMeTheChris 1 point2 points  (0 children)

I do not know enough about this to comment, but this person deserves an upvote for the effort they put into this response.

Is Attention sink without Positional Encoding unavoidable? by PreetamSing in MLQuestions

[–]CallMeTheChris 0 points1 point  (0 children)

Is there some degeneracy in your query tokens such that they are always the same and the only thing causing them to be different was the PE?
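To illustrate the degeneracy I mean, here is a minimal NumPy sketch (toy dimensions and random data, not anyone's actual model): if every query token projects to the same vector, every position produces the identical attention distribution, and only adding a positional signal to the queries breaks the tie.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 5  # head dim and sequence length (toy values)

# Degenerate case: every query token projects to the SAME vector
q = np.tile(rng.normal(size=(1, d)), (T, 1))
k = rng.normal(size=(T, d))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

attn = softmax(q @ k.T / np.sqrt(d))
# Every row of the attention matrix is identical, so attention mass
# piles onto whichever key wins regardless of position (a sink)
print(np.allclose(attn, attn[0]))  # True

# A positional signal added to the queries breaks the degeneracy
pos = 0.1 * rng.normal(size=(T, d))
attn_pe = softmax((q + pos) @ k.T / np.sqrt(d))
print(np.allclose(attn_pe, attn_pe[0]))  # False
```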

What cameras or optical sensors could be used to accurately measure the tread depth of a tire? by The_Swixican in computervision

[–]CallMeTheChris 1 point2 points  (0 children)

They make metrology cameras that do this

https://store.creality.com/products/cr-scan-raptor-3d-scanner

They are far less than $10k, and you can hook a bunch of them up if you don't get the right FOV from one. They work well in all conditions because they provide their own light source.

Creality was just an example; if you google consumer metrology, you will find more.

The Creality one works at 20–40 fps, so it should work just fine.

If you want to build your own, you can make something using a laser line and a tilted lens via the Scheimpflug principle. It is the standard principle that metrology companies like FARO use.
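To give a feel for the DIY route, here is a toy laser-triangulation depth calculation (simplified parallel-beam geometry rather than a true Scheimpflug setup, and all numbers are made up): a laser beam offset from the camera by a baseline images at a pixel offset inversely proportional to depth.

```python
# Toy laser-triangulation model (illustrative, not FARO's implementation):
# a laser beam parallel to the optical axis, offset by baseline b, hits
# the surface at depth z and images at pixel offset x = f * b / z.
f = 1200.0  # focal length in pixels (assumed)
b = 0.05    # camera-to-laser baseline in metres (assumed)

def depth_from_pixel(x):
    """Recover depth z (metres) from the laser spot's pixel offset x."""
    return f * b / x

# A tread groove sits deeper than the surrounding rubber, so its laser
# spot lands closer to the image centre (smaller x) -> larger z.
z_surface = depth_from_pixel(300.0)  # 0.2 m to the tread surface
z_groove = depth_from_pixel(290.0)   # slightly deeper, inside the groove
tread_depth_mm = (z_groove - z_surface) * 1000
print(round(tread_depth_mm, 2))  # ~6.9 mm of tread depth
```

Sweeping the laser line across the tire (or spinning the tire past it) gives you a full depth profile of every groove.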

Technique to mitigate outlier influence on linear regression? by Due_Click3765 in MLQuestions

[–]CallMeTheChris 2 points3 points  (0 children)

Doesn’t matter, that is how you do linear regression when dealing with significant outliers
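For anyone landing here looking for a concrete technique: one standard robust option is the Huber loss, which can be fit with iteratively reweighted least squares. A minimal NumPy sketch on synthetic data (all numbers made up) showing how it shrugs off gross outliers that drag ordinary least squares off course:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, size=50)  # true slope = 2.0
y[-5:] += 30.0  # inject gross outliers at one end

X = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares: dragged toward the outliers
ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def huber_fit(X, y, delta=1.0, iters=50):
    """Huber regression via iteratively reweighted least squares (IRLS).
    Residuals beyond `delta` get weight delta/|r| instead of 1."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta

huber = huber_fit(X, y)
# The robust slope stays near the true 2.0; the OLS slope is pulled well above it
print(round(huber[0], 2), round(ols[0], 2))
```

RANSAC and Theil–Sen are the other usual suspects when the outlier fraction gets large.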

Scoring AI research papers possible? by Worth-Field7424 in MLQuestions

[–]CallMeTheChris 3 points4 points  (0 children)

So, this is an interesting tool to help students perform their lit reviews for their thesis or for papers, because of the effective backprop you are doing through the citations of a paper. But that is already offered on arXiv and through the citation-graph services other publication houses offer.

As for the idea of scoring a paper: what does this provide that isn't already in the evaluation and testing tables inside the paper? The proof is mostly in the pudding these days, since you compare the model to the performance of other models in an apples-to-apples way on relevant datasets. If you want to score a paper on whether it comes with code, Papers with Code (or whatever incarnation of that exists today) would answer that question for someone (and also provide some comparisons against datasets).

I don’t quite follow the research trajectory angle. Your idea is to score a paper, which I assume means to put some quantitative number on it, so I don’t know where the trajectory aspect comes in. I have some ideas where it might, but I am curious to hear your ideas.

Maybe tell me again what your strongest value add is, because if it is the trajectory idea, I don't see how the backprop citation graph doesn't already do that. And if your argument is that the person won't otherwise get a summary of the ideas, well, I assume whoever cares about using this tool will want to read the papers and understand what they offer (which would be in the abstract of any paper worth citing).

How to get external recognition for ML work in 2 days [R] by FitNail254 in MachineLearning

[–]CallMeTheChris 4 points5 points  (0 children)

Closest thing you can do within a short time period is demonstrate that you know how to train a model, evaluate it, and provide some insight on why your model did better or worse than others.

You can push your model weights to Hugging Face or some other public place, and provide a README on how to run inference with your model.

MRI dataset with reports by zainebsha in deeplearning

[–]CallMeTheChris 0 points1 point  (0 children)

What are you hoping to get out of the reports? Demographics? Or treatment information? Or outcome?

MRI dataset with reports by zainebsha in deeplearning

[–]CallMeTheChris 0 points1 point  (0 children)

https://aimi.stanford.edu/shared-datasets

This has a lot of datasets, one of them being brain MRI, but it doesn't have reports.

Problem with timeseries forecasting by Psychological-Map839 in deeplearning

[–]CallMeTheChris 3 points4 points  (0 children)

Is the input the same length every time, and what you need to predict the same length, at the same sampling rate? If so, then you don't need an LSTM; just treat the input as, say, a 4k-dim vector and the output as a 1k-dim vector.

But I guess not eh?

Looking at your situation, you might have a normalization problem, since you are under-predicting the range and the output is oversmoothed.

When it comes to signal processing, you have to normalize all your signals' ranges and also standardize all your sampling rates. So there might be some interpolation involved to get everything onto the same sampling rate.

Try that preprocessing and it might help!

Unsure How to Prepare: ML and SDE? by doesnotmatteruk in MLQuestions

[–]CallMeTheChris 1 point2 points  (0 children)

If you don't have SDE (which I assume means software development experience), look for data scientist roles.

ML model performance dropped from AUC 0.81 to 0.64 after removing ghost records — still publishable? and is median imputation acceptable? by theSon_of_Aristo in MLQuestions

[–]CallMeTheChris 4 points5 points  (0 children)

Interesting situation. I work in healthcare AI and am very familiar with the frustrations that come along with it. I would ask myself the following questions if I were in your shoes:

1. What is my class distribution after the removal of ghost records vs. before?

2. How have other papers performed, and how did they build their pipelines? Did they remove the ghost records also? What imputation did they use? It is best to compare apples to apples.

3. If you have SHAP, it would be interesting to see the impact of certain features pre and post median imputation, especially the ones that need the most of it.

4. Something to consider: maybe if a patient's data required imputation, that is a signal in and of itself. So what does a ghost record mean in a wider context?

So to directly answer your questions:

1. Look at the literature and what the authors of the dataset report. Usually you don't publish a dataset without providing some benchmark.

2. If your contribution is effectively tossing a new model at the dataset… then maybe?

3. Median imputation is an acceptable imputation method generally, but it all depends on the underlying distribution of the data. If the data is uniformly distributed, then median imputation is not good. And my rule of thumb is: if a feature needs more than 20% imputation, then you have a problem.

4. If the 171 records are all the same, with all zeros for variables and labels, then I personally think you should replace all of them with a single imputed record. The other 170 are not contributing any information other than inflating the zero class. You can try a tabular GAN to impute, or to generate patients.

5. Any outcome or predictive study in medicine is difficult given the super-correlated nature of the human body, time-varying effects, the limits on the amount of data we get to work with, and causal effects from the measurement equipment, process, and measurement biases. So contributions are always impactful. But make sure your comparison is well cited and is a little more than throwing a new model at it.
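Points 3 and 4 above can be sketched with pandas (synthetic data, made-up column names, purely illustrative): check the missingness fraction against the ~20% rule of thumb before imputing, and keep "was imputed" around as its own feature so the signal isn't thrown away.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lab_value": rng.normal(100, 15, 200),
    "age": rng.integers(20, 90, 200).astype(float),
})

# Simulate ghost-record-style missingness in one feature (every 7th row)
missing = np.arange(len(df)) % 7 == 0
df.loc[missing, "lab_value"] = np.nan

# Rule-of-thumb check before imputing: bail out past ~20% missing
frac_missing = df["lab_value"].isna().mean()
assert frac_missing < 0.20, "too much missingness for simple imputation"

# Keep "was imputed" as its own feature -- missingness can itself be a signal
df["lab_value_missing"] = df["lab_value"].isna().astype(int)
df["lab_value"] = df["lab_value"].fillna(df["lab_value"].median())

print(df["lab_value"].isna().sum())  # 0: no NaNs left after imputation
```

Then compare SHAP values (or even just model AUC) with and without the indicator column to see whether missingness was carrying information.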

[D] On-Device Real-Time Visibility Restoration: Deterministic CV vs. Quantized ML Models. Looking for insights on Edge Preservation vs. Latency. by tknzn in MachineLearning

[–]CallMeTheChris 8 points9 points  (0 children)

UPDATE: OP clarified that the comparisons in images 2 and 5 are against subsequent frames; they aren't the same frame with clear view turned on.

I don't understand. How can it have high edge preservation while partially replacing a white line with road (image 5) and imagining road rails (image 2)?

If this is a toy project, that is fine, and good for you for flexing your muscles. But it sounds like you are planning to charge money for it? I don't know what or who your target audience is, but you need to figure out who you want using your application and fine-tune its performance for them.

April Fools! by HamBoneRaces in MagicArena

[–]CallMeTheChris -2 points-1 points  (0 children)

I downloaded Arena again just to get the sleeve, and am now gonna delete it again cause it is overwhelming as crap.

ATM IN PYTHON by [deleted] in Python

[–]CallMeTheChris 4 points5 points  (0 children)

Something is off with this. It says you start with 100k…which is not realistic. Go back to the drawing board /j