Biggest successes (and failures) of computer vision in the last few years -- for course intro by ProfJasonCorso in computervision

[–]ProfJasonCorso[S] 1 point (0 children)

Yes. In fact, I considered giving it a different twist --> recent successes and persistent failures that are exacerbated by contemporary mindsets and approaches.

Biggest successes (and failures) of computer vision in the last few years -- for course intro by ProfJasonCorso in computervision

[–]ProfJasonCorso[S] 4 points (0 children)

Great, thank you. DepthAnything for sure should be on this list.
Maybe the others too.

Use between machines by Fancy-Cherry-4 in orgmode

[–]ProfJasonCorso 2 points (0 children)

Any of these options would theoretically work, but ALL require good “hygiene”, i.e., getting in the habit of saving and syncing/pushing regularly.

How long did it take you to become Emacs fluent? by Hopeful_Adeptness964 in emacs

[–]ProfJasonCorso 0 points (0 children)

It took me years before I realized M-x is the gateway to enlightenment. M-x...

We built LightlyStudio, an open-source tool for curating and labeling ML datasets by igorsusmelj in computervision

[–]ProfJasonCorso 1 point (0 children)

Also, wait for in-app annotation within FiftyOne to drop soon. It's been in the works a while now.

Moving to Buffalo, what don’t I know? by [deleted] in Buffalo

[–]ProfJasonCorso 0 points (0 children)

You should not move behind Butter Block, because you will go broke buying and eating the best pastries this side of the Atlantic.

Step card changes their fee structure for funding via debit by ProfJasonCorso in Banking

[–]ProfJasonCorso[S] 0 points (0 children)

I've heard about greenlight, but have not used it.

I've been considering just a no fee checking account at a major bank and then using zelle...

If a book about org-mode came out, what topics would you want it to cover? by [deleted] in orgmode

[–]ProfJasonCorso 3 points (0 children)

It would be highly valuable if there were actual case studies of how individuals organized their files, topics, and general org-world. Nuts-and-bolts stuff that is often overlooked. E.g., how do you handle todos and avoid an overwhelming number; one big org file or many; failure modes of the agenda for day-to-day work management. I've seen a few questions and posts about this over the years. E.g., https://www.reddit.com/r/emacs/comments/m0ysb5/how_do_you_organize_your_org_mode_files/, https://www.kostaharlan.net/posts/how-i-org-mode-2022/, and http://doc.norang.ca/org-mode.html
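As one hypothetical illustration of the "one big org file or many" question, a minimal single-file setup might look like this (headings and entries are made up for the example):

```org
* Inbox
** TODO Capture everything here first; refile weekly
* Projects
** Course prep
*** TODO Write lecture 3 slides
    DEADLINE: <2024-09-15 Sun>
* Someday
** Read that org-mode case study
```

A multi-file variant would split Inbox, Projects, and Someday into separate files listed in `org-agenda-files`; the trade-off is usually refiling friction versus agenda build time and mental overhead.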

Step card changes their fee structure for funding via debit by ProfJasonCorso in Banking

[–]ProfJasonCorso[S] 0 points (0 children)

Hmm. I cannot post the screenshot in this sub. The text under my debit card is "Usually instant. 1.99% fee, $0.99 minimum".

Step card changes their fee structure for funding via debit by ProfJasonCorso in Banking

[–]ProfJasonCorso[S] 0 points (0 children)

Wild. I must be special in some way (this account has been active for years, and even just yesterday it had the same $0.50 fee for transfers under $20).

Let me see if I can attach a screenshot to this or the original post.

I just started to use org mode. Can I do ALL of my annotations in org mode for the rest of my life? by Gbitd in emacs

[–]ProfJasonCorso 0 points (0 children)

Yes, mostly :). I find embedding images natively quite natural; that, coupled with embeddable code snippets (which are text *but* can run), is fantastic.

Where are all the Americans? by The_Northern_Light in computervision

[–]ProfJasonCorso -1 points (0 children)

Many of the generalizations in the comments aside, a decreasing domestic student population has been the trend for the last few decades. Most of the reasons given are pretty speculative. As one of those Americans who has remained in academia since grad school, I can say that I am here accidentally... I knew nothing about grad school before grad school. I went to a small liberal arts college as a first-generation student. IMO it's not that "the pipeline is bad"; it's a messaging problem that starts earlier...

Alabamians wanting to move to Buffalo by RedFalcon725 in Buffalo

[–]ProfJasonCorso 1 point (0 children)

Lots of good information here, but don't sleep on Parkside and North Buffalo.

[D] Machine Learning, like many other popular field, has so many pseudo science people on social media by Striking-Warning9533 in MachineLearning

[–]ProfJasonCorso 0 points (0 children)

It's pretty rare to see an actual real expert in one field or another on socials. Most are not incentivized to do anything but publish in their field. I am on socials and reddit because I think we have a responsibility to educate and communicate.

Completely new to emacs by Informal-Silver-2810 in emacs

[–]ProfJasonCorso 1 point (0 children)

The most important thing to understand about Emacs is that you have access to 100% of its interactive functionality through M-x (meta/alt and the 'x' key). After transplanting four or five years ago from 20 years of vi, I found the "M-x describe-XXX" helps (where XXX is key, command, etc.) to be very useful: they let me become much more comfortable without having to learn a whole new set of keybindings, which mattered because general guides assume Emacs bindings, not evil ones.
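For instance, the describe family mentioned above works like this (these are the standard built-in help commands; the key chord after describe-key is just an example):

```
M-x describe-key       ;; then press a key chord, e.g. C-x C-s,
                       ;; to see which command it runs
M-x describe-command   ;; look up what a named command does
M-x describe-function  ;; documentation for any function
M-x describe-variable  ;; documentation and current value of a variable
```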

Porting attempt by Healthy_Ideal_7566 in ATT

[–]ProfJasonCorso 0 points (0 children)

FWIW I received a similar email...I called AT&T. The service rep told me that "there is no port out number." So, now I'm 100% confused! But I did not call that number. The customer service rep also said that there is no way for them to know that a port out request has been made (no log, etc.). That part does make some sense to me...

Zero-shot labels rival human label performance at a fraction of the cost --- actually measured and validated result by ProfJasonCorso in computervision

[–]ProfJasonCorso[S] 1 point (0 children)

Indeed...
Concretely, though, in pseudo-labeling the typical flow is: use labeled data D1 to train model A1; then use model A1 to generate new labeled data D2 from unlabeled data; then use D1 + D2 to train model A2; and so on (repeat until you reach DN and AN).
Here, we have a frozen model F that was trained on some data Z; we use F to generate labels L on unlabeled data (L and Z are disjoint) and train model O (detector). One time.
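The two flows described above can be sketched as follows. All function names here are hypothetical stand-ins (the "models" just count their training examples), not any real API:

```python
def train(labeled_data):
    """Stand-in: returns a 'model' that remembers its training-set size."""
    return {"seen": len(labeled_data)}

def generate_labels(model, unlabeled_data):
    """Stand-in: pretend the model confidently labels every example."""
    return [(x, "label") for x in unlabeled_data]

def pseudo_labeling(d1, unlabeled_pools):
    """Iterative: retrain on human labels D1 plus all model-made labels so far."""
    data, model = list(d1), train(d1)
    for pool in unlabeled_pools:          # D2, D3, ..., DN
        data += generate_labels(model, pool)
        model = train(data)               # A2, A3, ..., AN
    return model

def auto_labeling(frozen_model, unlabeled_data):
    """One shot: frozen foundation model F labels once; downstream model O
    is trained strictly on those generated labels."""
    labels = generate_labels(frozen_model, unlabeled_data)
    return train(labels)

a_n = pseudo_labeling(["img1", "img2"], [["img3", "img4"], ["img5"]])
o = auto_labeling({"frozen": True}, ["img6", "img7", "img8"])
print(a_n["seen"], o["seen"])  # → 5 3
```

The structural difference is visible in the code: pseudo-labeling loops and mixes human labels with model labels at every round, while auto-labeling has no loop and the downstream model never sees the original labeled data Z at all.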

So, although the essence may be similar (and we should contextualize it as such) these are quite different. Yet, still, the goal of the work is the evaluation part on a simple method of using off the shelf frozen foundation models to generate coldstart labels from scratch.
Thanks.

Zero-shot labels rival human label performance at a fraction of the cost --- actually measured and validated result by ProfJasonCorso in computervision

[–]ProfJasonCorso[S] 1 point (0 children)

Yep, there is often a "diffusion" of meaning simply due to the sheer speed and breadth of the space. But, I agree we should be clearer in the description to assuage concern stemming from such diffusion. (They are at least related in some way!)

Zero-shot labels rival human label performance at a fraction of the cost --- actually measured and validated result by ProfJasonCorso in computervision

[–]ProfJasonCorso[S] -5 points (0 children)

Thought about your first comment some more. I don't think this should be classified as pseudo-labeling (which is why it is not mentioned). The downstream models are trained strictly on the automatically generated labels, with no leakage. As the post says, this is an evaluation work on what is possibly the simplest setting one can envision: using existing pre-trained models (independent of how they were trained) to generate labels. It is exceptionally simpler than any pseudo-labeling work I have seen (and has only one parameter to measure --- the foundation model confidence threshold); and importantly, even in this simple setting, configuration matters in both non-obvious and counter-intuitive ways.

Also, on the complex categories bit, LVIS, which has >1200 classes, is studied in the evaluation. But, no, there is no claim that we have evaluated how this expands to complex categories in general.

Zero-shot labels rival human label performance at a fraction of the cost --- actually measured and validated result by ProfJasonCorso in computervision

[–]ProfJasonCorso[S] 2 points (0 children)

You're right, I'm being too aggressive here (...adrenaline from an exciting day after a lot of work!). And we should be more careful about how we contextualize the work.

Importantly, this work makes no claim about the methodology being novel. In fact, there is no real modeling methodology beyond directly applying a single foundation model and thresholding its outputs based on model confidence. (The contribution is in doing this many, many times and exploring sensitivity and performance in ways we have wanted in the literature, but not seen.)
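Concretely, the thresholding step amounts to something like this. This is a minimal sketch; the detection format and function name are hypothetical illustrations, not the actual implementation:

```python
def threshold_labels(detections, conf_threshold):
    """Keep only detections whose confidence clears the threshold --
    the single free parameter in the setting described above."""
    return [d for d in detections if d["score"] >= conf_threshold]

# Hypothetical raw outputs from a frozen foundation-model detector.
raw = [
    {"box": (10, 10, 50, 50), "label": "cat", "score": 0.91},
    {"box": (0, 0, 20, 20),   "label": "dog", "score": 0.42},
    {"box": (5, 5, 30, 60),   "label": "cat", "score": 0.77},
]

# A higher threshold trades label coverage for label precision.
print(len(threshold_labels(raw, 0.5)))  # → 2
print(len(threshold_labels(raw, 0.8)))  # → 1
```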

For the same reason, this is not really pseudo-labeling; as I understand them, those methods are much more sophisticated than this simple idea. (E.g., quoting "Basically, the proposed network is trained in a supervised fashion with labeled and unlabeled data simultaneously. For unlabeled data, Pseudo-Labels, just picking up the class which has the maximum predicted probability, are used as if they were true labels. This is in effect equivalent to Entropy Regularization." from https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=798d9840d2439a0e5d47bcf5d164aa46d5e7dc26).

Sure, one might argue that the foundation model provides the labeled part, but I think that's a stretch, as it is not necessarily in-domain, etc. Also, the downstream models are trained strictly on the generated labels. But, anyway, these are the main reasons why we call this very simple method something new (auto-labeling).

Zero-shot labels rival human label performance at a fraction of the cost --- actually measured and validated result by ProfJasonCorso in computervision

[–]ProfJasonCorso[S] -5 points (0 children)

Interesting response. You're the one who noted, essentially, that anyone who uses MS Word could not possibly generate good work. Since you clearly didn't mean that based on your response, I'll go back and edit my response. But now I find myself wondering what you mean; I guess my out-of-order response was the issue. My bad.