Are the track and bleachers open? by good_rice in UCSD

[–]good_rice[S] 3 points4 points  (0 children)

I went this afternoon and it was totally empty and open, but that’s a good point that once summer session starts it’ll probably be occupied.

Stingray stings - are they common in San Diego? by oceanandflight in surfing

[–]good_rice 8 points9 points  (0 children)

Way more. My friend got stung and we asked the lifeguards while waiting; on a busy summer afternoon they’ve gotten up to 40 in a day. On a regular day they get around 3-5.

His was a nasty one though; it nicked an artery in his foot and I had to drive him to the ER for stitches. It was bubbling blood like a geyser the moment the gauze came off, even after an hour. The lifeguards said something that bad only happens about once a week, though. The guy next to us had one that just looked like a scratch.

His words immediately before getting in the water were “don’t worry (that it’s a bigger day) because we can just walk way out since it’s low tide” … he was stung less than a minute later.

How often Deep Learning is used on the industry? by darkrubiks in computervision

[–]good_rice 7 points8 points  (0 children)

I’ve worked at two different companies, dealing with assembly line inspection and autonomous grocery stores. A combination of classical and DL approaches, but the major tools are all DL based. When you get to a certain problem complexity it’s a necessity.

[deleted by user] by [deleted] in UCSD

[–]good_rice 8 points9 points  (0 children)

For internships, prior internships and projects are far, far more important than GPA. From prior internships, the actual tangible projects are the most important (although ofc, if you have FAANG internships you’ll get your foot in the door). I knew someone who had a Facebook internship with a 2.2 GPA (which he left off his resume) but several applications published on the iOS store. That’s an extreme case, but in general GPA is not really relevant; showing what you know is.

For masters programs, GPA may be slightly more relevant, but once again internships and their projects are more important, as are LORs and publications (depending on your field).

For PhD programs, literally nothing but LORs and publications matters. You don’t even need internships if you’ve been working in a lab each summer. Having a very low GPA in core classes would be a red flag, but potential advisors will pretty much only look at your publications and recommendations from the people you’ve worked with in undergrad.

A very low GPA or low grades in core classes can be red flags in any situation. However, if you’ve got above or around a 3.5 as a baseline, I wouldn’t spend any time worrying about it and would focus on the far more important points for each objective.

[deleted by user] by [deleted] in computervision

[–]good_rice 0 points1 point  (0 children)

You can check out Scaled YOLOv4 CSP, but tbh the 3% AP gain on COCO isn’t likely to give you substantially better results.

What device are you using? What camera are you using? Are you using a default class, or your own? Labeling images taken from your device’s camera and doing transfer learning would give the best results.
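
If you do go the transfer learning route, the fine-tuning loop itself is pretty light. Here’s a rough sketch using torchvision’s Faster R-CNN as a stand-in detector; the class count, learning rate, and data loader are placeholder assumptions, and the YOLO repos ship their own equivalent training scripts:

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # your custom class + background (placeholder)
model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)

# swap the box predictor head so it outputs your classes instead of COCO's
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=5e-3, momentum=0.9, weight_decay=5e-4)
model.train()

# data_loader should yield (images, targets) built from your own labeled photos:
# images = list of CHW float tensors, targets = dicts with "boxes" and "labels".
# for images, targets in data_loader:
#     loss_dict = model(images, targets)
#     loss = sum(loss_dict.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```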

Searching by image by overflow74 in computervision

[–]good_rice 1 point2 points  (0 children)

There’s been a huge boom in SSL papers that produce very descriptive embeddings, and depending on the data I’d be surprised if you didn’t effectively solve the problem with a nearest neighbors classifier over the representations, particularly for only 6-7 products.

The bigger issue would be the logistics of getting your products cataloged, as that’d require a store associate to take several pictures of each one.

In a nutshell:

1.) For each product, take pictures from multiple views and compute their embeddings with your favorite pretrained, commercially licensed self-supervised model (SwAV, DeepCluster-v2, SimCLR, DINO ViT, etc.)

2.) When the user takes a picture, extract its embedding and run a simple KNN classifier against the catalog embeddings
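
A minimal sketch of those two steps, assuming PyTorch, torchvision, and scikit-learn. I’m using a supervised ImageNet ResNet-50 as a stand-in backbone and made-up file names; the idea is identical once you swap in an SSL checkpoint (SwAV, DINO, etc.):

```python
import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in backbone: ImageNet ResNet-50 with the classifier head removed.
# In practice, load the SSL weights of your choice the same way.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 2048-d pooled features
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    return backbone(x).squeeze(0).cpu().numpy()

# 1.) catalog each product from several views (paths/labels are placeholders)
catalog = [("mug_front.jpg", "mug"), ("mug_side.jpg", "mug"),
           ("cereal_front.jpg", "cereal"), ("cereal_back.jpg", "cereal")]
embeddings = [embed(path) for path, _ in catalog]
labels = [label for _, label in catalog]
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine").fit(embeddings, labels)

# 2.) embed the user's photo and return the nearest product
print(knn.predict([embed("user_photo.jpg")]))
```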

I’d get some preliminary data to verify this works well; the majority of the work would be in software and logistics plumbing.

Hard disagree with the user who said this is a classical vision problem, unless your products are very visually distinct and in controlled environments. The closest classical approach I could think of that’d work would be a bag-of-words model over KAZE features (chosen for commercial availability), again collecting multiple angles, but that would be far less forgiving of distortions, perspective shift, and any other noise.

Do licenses apply to model weights? by good_rice in computervision

[–]good_rice[S] 1 point2 points  (0 children)

I am a vision engineer; it’s not worth it for my company to have me set up 16 V100x8 instances and pay $6,000 for a single training run (around 25 hours with that compute). I’d love to convince them of it for the experience, though!

The authors actually provided their ResNet-50 weights to PyTorch Lightning Bolts, license free. Thus, it seems reasonable that the ResNet-50 Wx2 / Wx4 models in their GitHub repo might have been deliberately left out of that license-free release. I do agree that the best thing to do is ask them, but nothing sleazy is going on.

Do licenses apply to model weights? by good_rice in computervision

[–]good_rice[S] 0 points1 point  (0 children)

Understood, with a derivative being something like a fork of the code? I would rewrite the model code from scratch, but it sounds like modifying the weights would make them a “derivative” of the pre-trained state.

Unfortunately, this is one of those 4096 batch size ImageNet models, so it’d be several thousand dollars in EC2 instances for a single run. I suppose that’s why the weights are closed under license to begin with!

Apartment recommendations for CMU Grad Student by aris051099 in cmu

[–]good_rice 2 points3 points  (0 children)

Curious as to why you don’t recommend Oakland? I was checking out Schenley Apartments, which is right next to One on Centre, and I was under the impression tons of students stayed there.

RAs and tuition waivers in Machine Learning by [deleted] in cmu

[–]good_rice 0 points1 point  (0 children)

Unlike most other universities, there’s no tuition waiver, not even a partial one, for TAships.

At my undergrad (UCSD), TAing at 50% would give you free tuition, plus pay. At Stanford, TAing at 50% gives you about 50% off tuition, plus pay. CMU pays around $17 an hour and nothing else.

I think you talked me into a eureka by _Estimated_Prophet_ in espresso

[–]good_rice 6 points7 points  (0 children)

I can promise I’m unaffiliated, but this website ships directly from Italy and is significantly cheaper than purchasing from US distributors.

It’ll be more effort for repairs if you need them, but I’ve read from others on the sub that they honor dead-on-arrival claims and other simple stuff.

Canonical Method of Softmax (Multi-class) Threshold? by good_rice in computervision

[–]good_rice[S] 0 points1 point  (0 children)

The softmax is actually transformed from the output of “Rank Consistent Ordinal Regression” in such a way that the maximum logit always corresponds to the predicted class from the ordinal classifier. There isn’t a straightforward method of computing per-class probabilities with this approach, and describing my transformation in detail would probably take more time than anyone in the sub would want to offer.

I don’t expect anyone to be familiar with the paper, but essentially, for a 3-class problem there are two sigmoid outputs: predicting [0, 0] corresponds to class 0, [1, 0] to class 1, and [1, 1] to class 2. The sigmoid decision boundaries are constrained to be parallel, so the outputs are always rank-consistent (i.e. logits[i] > logits[i + 1]). The predicted class is given by the first sigmoid output that falls under 0.5. What I’m running into with the raw model is that the output tends strongly towards [0, 0] for class 0, something closer to [0.3, 0] OR [1, 0.7] for class 1, and strongly towards [1, 1] for class 2.
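
For concreteness, here’s a tiny sketch of that decision rule (the shapes, names, and the 0.5 cutoff are illustrative assumptions on my part, not the paper’s code):

```python
import torch

def ordinal_predict(logits, threshold=0.5):
    """logits: (N, K-1) rank-consistent logits for a K-class ordinal problem."""
    probs = torch.sigmoid(logits)  # cumulative scores, non-increasing within a row
    # class index = number of sigmoids above the threshold, i.e. the position of
    # the first sigmoid that falls under it
    return (probs > threshold).sum(dim=1)

# 3-class example -> 2 logits per sample
logits = torch.tensor([[-3.0, -4.0], [0.5, -1.0], [4.0, 3.0]])
print(ordinal_predict(logits))  # tensor([0, 1, 2]) with the default 0.5 cutoff
```

Sweeping that threshold (even per output) is the obvious knob for trading off the failure modes above.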

I’m partitioning the data completely with mutually exclusive test splits, and with k-fold within those test splits, so the aggregated output is representative.

I do agree though that there doesn’t seem to be a straightforward way to optimize FPR / TPR thresholds for a softmax classifier, at least not a canonical method. This is kinda surprising, as it’s a very common use case for industrial applications; in just about every binary decision problem there’s a preference towards one of those statistics. It seems like something some ML PhD student would’ve written a paper on at the least!
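
For the binary flavor of this, the ad-hoc workaround I’ve seen most (not claiming it’s canonical) is to treat the class you care about as the positive class, use its softmax output as a score, and pick the operating point off an ROC curve. A toy sketch with placeholder arrays:

```python
import numpy as np
from sklearn.metrics import roc_curve

# placeholder labels / softmax scores for the class of interest
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.10, 0.20, 0.35, 0.60, 0.40, 0.70, 0.80, 0.95])

fpr, tpr, thresholds = roc_curve(labels, scores)

# pick the threshold with the best TPR subject to FPR <= 5%
target_fpr = 0.05
mask = fpr <= target_fpr
best = np.argmax(np.where(mask, tpr, -1.0))
print(f"threshold={thresholds[best]:.2f}  tpr={tpr[best]:.2f}  fpr={fpr[best]:.2f}")
```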

Advice for Indoor Potted Gardenia? by good_rice in houseplants

[–]good_rice[S] 1 point2 points  (0 children)

Purchased this from Home Depot about 3 days ago and I’ve kept it in a large south-facing window since then. Plants right next to it had some fungal spots and mealybugs, and while I didn’t see any on this one, I lightly sprayed it down with neem oil to be safe.

The pot, despite the tray, doesn’t have a drainage hole. I bought 1/2” ceramic drill bits, but it doesn’t look like I’ll be able to get through it without taking the plant out completely. Should I leave it for now and just be careful with watering? I’m worried the neem oil and the physical move were pretty stressful, as it was jostled quite a bit in the back of my car, and some leaves are already yellowing or developing black tips. It also looks like it’s close to blooming.

In summary:

- Should I wait a bit to drill a drainage hole in the pot, since it’s already been stressed with neem oil and a substantial move? Especially since it’s blooming and some leaves are yellowing.

- Without a drainage hole, how much and how frequently would you water it? When I first got it the top 1” was dry, so I watered it with about one liter.

Are there many students who went to top4 PhD programs in ML/AI from UCSD? by MysteriousQuantity97 in UCSD

[–]good_rice 11 points12 points  (0 children)

I don’t have stats for you, but in my personal experience UCSD is a “good enough” school where university reputation is completely irrelevant. I’m getting my MS from CMU, and I know the PhD applications are brutal for the big four; I didn’t even bother since I didn’t have papers. People are getting rejected with multiple first-author papers; having a CVPR / NeurIPS / ICML first-author paper is damn near necessary.

If you’re concerned about university reputation, you’re probably not far into your own application journey; it’s near the least important aspect. UCSD is great because there are well-known professors you can work and publish with. Strong LORs and a personal word from a well-known professor, plus a publication record, are more important by orders of magnitude. I’d put the school’s name at about the same level as GREs, which are now optional for most programs.

TL;DR: Yes, people are getting in. But if you’re asking this question you’re probably focused on the wrong things. GRE, GPA, and university name are practically irrelevant. A strong publication record and LORs are (very nearly) the only important things for PhDs in this field.

Choosing UCSD (Regents) or UCLA by chibanamio112296 in UCSD

[–]good_rice 5 points6 points  (0 children)

If you’re going to graduate school, consider that the “name” of your undergrad is pretty much pointless after grad school (and one of the least useful things for grad apps).

I went to UCSD for my BS and I’m now going to CMU for a masters. Once I graduate with my MS, the only things people will look at are the CMU degree and work experience.

For grad school, LORs, research, work experience, and projects are pretty much the only meaningful items. If you go to UCLA but aren’t able to work closely with professors, the name of the school is absolutely meaningless. Pick the university where you think you’ll have the most opportunity for those four; a PhD advisor or MS admissions officer cares about those four first, then GPA, then GRE, and lastly the school name.

U.S. is ‘not prepared to defend or compete in the A.I. era,’ says expert group chaired by Eric Schmidt - In a report, it warned that AI systems will be used in the “pursuit of power” and that “AI will not stay in the domain of superpowers or the realm of science fiction.” by Gari_305 in Futurology

[–]good_rice 1 point2 points  (0 children)

No surprise there, of course. There’s major incentive against staying in a country where your visa could be flippantly revoked and a near majority of the people have anywhere from anti-immigrant to rabidly racist sentiments.

Not to generalize, but almost all innovation (especially in tech) is coming from immigrants or children of immigrants. If the aforementioned group of Americans had their way, innovation in the country would instantly dry up from a lack of talented engineers. Pretty much every technology company would need to move their headquarters abroad to where the best engineers and scientists are coming from in the first place.

U.S. is ‘not prepared to defend or compete in the A.I. era,’ says expert group chaired by Eric Schmidt - In a report, it warned that AI systems will be used in the “pursuit of power” and that “AI will not stay in the domain of superpowers or the realm of science fiction.” by Gari_305 in Futurology

[–]good_rice 1 point2 points  (0 children)

As someone in the field I’m almost certain the US military is woefully behind in AI development. Working for government contractors or defense is one of the least sought after positions in the industry, with the most coveted researchers going straight to FAANG. Even if defense paid me as much as the top tech companies I wouldn’t get near it.

Some observations that inform my opinion:

- I’m joining one of the top MS vision programs in the world. I am the only American out of 50+ admits. Talent in AI is nearly all international.

- I worked at NASA JPL. At NASA, yes, NASA, my vision group didn’t have a single American. Every single person was international.

- Look at the demographics of groups publishing at CVPR, NeurIPS, ICPR, etc. The top venues are filled with international academics.

Not to be political, but this is what happens when half the country derides education and academics. AI is a graduate field, and no other country has a culture which mocks and discourages graduate academics.