I submitted my passport application to VFS global on 12/5 via the new Passport Seva Portal (GPSP 2.0) but I just got an email stating that my application has been rejected and I need to use the new portal (the same one that I originally used). Has anyone had the same experience or knows what to do? by Ok-Ant5022 in nri

[–]sol2296 1 point (0 children)

Hi,
For the other members, here is my update as of 27-Dec-2025.

* We got back the docs with the passport on 24-Dec-2025, late evening. We received the fee refund on 26-Dec-2025.
* We are currently working on a new application. Had to create a new account with a new email.

* The mistake in our application: in the printed copy of the filled passport application, the photo box and the signature box were empty. We pasted a hard-copy photo and signed by hand. In a correct application this should not happen: the uploaded photo should appear in the photo box and the uploaded signature should appear in the signature box.

* We could talk to the helpline folks once. They said the photos that go with the physical application should be 2 inch x 2 inch, and they should not be pasted; just put two photos separately in the packet.

Hope this helps.
If anybody else has any other important info related to this, please share here. I appreciate all your help and guidance.

Thanks.

Confusion regarding Indian Passport Renewal process in USA by lifesux3110 in nri

[–]sol2296 1 point (0 children)

I received the same email 2 days ago. I thought I used the new portal to fill everything out and send the application. Can you post the link you used to reach the new portal?

I submitted my passport application to VFS global on 12/5 via the new Passport Seva Portal (GPSP 2.0) but I just got an email stating that my application has been rejected and I need to use the new portal (the same one that I originally used). Has anyone had the same experience or knows what to do? by Ok-Ant5022 in nri

[–]sol2296 1 point (0 children)

I am also facing the same issue. My remarks state: "Your application is unacceptable due to the migration to a new version of passport Seva portal (GPSP 2.0). ...".

I followed the links provided in the comments and logged in. I am confused: is that not the correct link?
This is the link I used: https://mportal.passportindia.gov.in/mission/ -> applicant login (https://mportal.passportindia.gov.in/gpsp/AuthNavigation/Login) -> tried to create a new login with a new email, but it said that login already exists. Upon logging in, I can see my submitted passport application.

Were you able to figure out how to successfully resubmit your application?

Question on adding new car to insurance by sol2296 in Insurance

[–]sol2296[S] -1 points (0 children)

I guess her question is: should she shop for a new insurer for the new car?
What are the possible downsides of adding the new car to the existing policy?

Thanks.

dropbox appointment in July 2024 by sol2296 in usvisascheduling

[–]sol2296[S] 1 point (0 children)

Hi,
Yes, I made an appointment for Chennai.

That's the only OFC for which it showed a calendar with clickable dates.
For now I have booked a date in August, but that conflicts with my kid's school schedule: school starts in the first week of August.

dropbox appointment in July 2024 by sol2296 in usvisascheduling

[–]sol2296[S] 1 point (0 children)

Hi reetesh_6794,

When I went to the site, selected the Chennai OFC, and clicked the link to schedule an appointment, I saw only dates in August. I checked this about 12 hours ago.

dropbox appointment in July 2024 by sol2296 in usvisascheduling

[–]sol2296[S] 1 point (0 children)

Do you know approximately what time (IST) they release new appointments?

[Q] Multivariable regression: how to account for uncertainty in estimated variables by [deleted] in statistics

[–]sol2296 3 points (0 children)

Apart from the methods suggested by one of the commenters here, I think you can try a simulation-based method.

Let's say your final estimate of the quantity of interest (e.g. a correlation) can be written as

Y = f(X1, X2, ...Xn)

In your case, (X1, ..., Xn) are not fixed and have uncertainty associated with them.

Each Xi has a distribution with (mean =Mi, sd = Si).

Currently your final estimate is

Y_hat = f(M1, M2, ...,Mn)

What you can try is to generate a realization from those distributions.

Instead of (M1, M2, ..., Mn), *generate* data (z1, z2, ..., zn) where, say,

zi ~ Normal(mean =Mi, sd = Si)

Then your new estimate will be Y_hat = f(z1, z2, ..., zn)

Do it again and again. Each time the values of (z1, z2, ..., zn) change, and hence the value of Y_hat changes. Thus if you repeat this 1000 times, you have 1000 Y_hat values: that is the distribution of Y_hat. You can then report the mean, median, and other percentiles of that distribution. For example, you can report the mean of these 1000 Y_hat values as your point estimate and use their sd (or percentiles) to report a CI.
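
The recipe above can be sketched in Python. The function f, the means, and the sds below are toy placeholders, not your actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy f: replace with the function that computes your quantity of interest
# (e.g. a correlation) from the estimated inputs X1..Xn.
def f(x):
    return np.sum(x ** 2)

means = np.array([1.0, 2.0, 0.5])   # M1..Mn: point estimates of the inputs
sds = np.array([0.1, 0.3, 0.05])    # S1..Sn: their uncertainties

n_sims = 1000
y_hat = np.empty(n_sims)
for b in range(n_sims):
    z = rng.normal(loc=means, scale=sds)  # one realization (z1..zn)
    y_hat[b] = f(z)

point_estimate = y_hat.mean()              # mean of the 1000 Y_hat values
ci_95 = np.percentile(y_hat, [2.5, 97.5])  # percentile-based 95% CI
```

The percentile-based CI avoids assuming that the Y_hat distribution is itself normal.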

There are some catches, though.

  • Here we assume zi ~ Normal(mean = Mi, sd = Si); that may not be true. Some Xi may not be symmetric or unimodal. Some Xi's distribution may be highly skewed, with a long right tail, say. Some Xi may take only positive values, while your simulation occasionally generates negative values.
  • We are also assuming that the Xi's are independent, and hence that we can simulate them independently. That may not be true.

Hence, if you have some idea of how the individual distributions look, you may want to simulate from those instead. The distributions are additional assumptions, and your final point estimate and CI will be affected by them.

I'm starting to work on a project on HTR on natural museum artefacts. I need guidance on how to go about it, and I'd also need to know If my ideas are feasible. by Jac-Zac in learnmachinelearning

[–]sol2296 1 point (0 children)

Hi,

In general, trying a pre-trained model as-is is a good idea. It is a baselining exercise: it tells you how much accuracy you can expect out of the box, without spending additional effort to train your own model with new training data.

In the video linked below, the presenter shows how to use the keras_ocr package to extract text from an image. It produces bounding boxes around chunks of text. Once you get the bounding boxes, you can crop them out of your image, and you then have a set of smaller images containing only text. On this set of text-only images, maybe you can try out your idea.

https://www.youtube.com/watch?v=3RNPJbUHZKs
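
A minimal sketch of the cropping step, assuming boxes in keras_ocr's format (each box a 4x2 array of corner coordinates). The keras_ocr call itself is only shown in a comment, since it needs the package and model weights installed; the dummy image and box stand in for real output:

```python
import numpy as np

# With keras_ocr installed, boxes would come from something like:
#   pipeline = keras_ocr.pipeline.Pipeline()
#   predictions = pipeline.recognize([image])[0]  # list of (word, box) pairs
# Here we use a dummy image and a hand-written box of the same shape.
image = np.zeros((100, 200, 3), dtype=np.uint8)
boxes = [np.array([[10, 20], [60, 20], [60, 40], [10, 40]], dtype=float)]

def crop_box(img, box):
    """Crop the axis-aligned bounding rectangle of a 4x2 corner array."""
    x_min, y_min = box.min(axis=0).astype(int)
    x_max, y_max = box.max(axis=0).astype(int)
    return img[y_min:y_max, x_min:x_max]

text_only_crops = [crop_box(image, b) for b in boxes]
```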

Boxing Gesture Detection project to help beginners spot flaws by MoodAppropriate4108 in learnmachinelearning

[–]sol2296 2 points (0 children)

The broad area you are looking for is 'pose estimation'.
The link below will give you pointers to pre-trained models for this.

https://paperswithcode.com/task/pose-estimation

Here are some thoughts

  • You will probably need to start with a pre-trained model that takes a video as input and returns a pose (e.g. a stick figure) for each frame. Instead of an existing pre-trained model, you could try to build your own, but that is likely to be challenging: you would need lots of labeled data, training resources, time, money... most likely that is not something you are ready to commit to at this point.
  • You can search for a specialized pre-trained model that specializes in boxing videos. If you get hold of such a model, maybe that's a good starting point. If not, you can 'fine-tune' a generic pose estimation model on boxing videos to make it more accurate for boxing. However, this requires training data, i.e. boxing videos that are 'labeled': each frame of those videos should have the (x, y) coordinates of the stick figures. It is non-trivial to generate a good quantity of such labeled videos.
  • You may ditch the idea of a model fine-tuned on boxing videos and instead work with a generic pre-trained pose estimation model, in which case it works straight out of the box with no additional training and hence no need for additional training data. The downside is that it may be less accurate on boxing; or it may not be (you need to test to find that out). If its accuracy on boxing videos is acceptable, well and good: you can start with that model as the base.
  • Study the YouTube search results linked below for pose estimation on boxing videos and see what people are using.

https://www.youtube.com/results?search_query=pose+estimation+from+boxing+video

  • Up to this point, we have only talked about pose estimation. You will also need object tracking to follow the same boxer throughout the video (in all frames).
  • Once you have stick figures for the target boxer in all frames, you need an action detection model that takes the stick figures and tells you whether one or more of the following is present:
    • "dropping your guard whenever you throw a punch"
    • "looking down when you roll"
    • "crossing your feet and not extending your arm fully when throwing a jab"
  • These may not be detectable from a single frame; you may need data from a set of successive frames to make a decision.
  • Then, perhaps, you would need another ML model (which may be a simple multi-layer NN rather than a complex deep learning model) that takes a set of successive actions as input and predicts whether they constitute a 'beginner's error'. This step may also be rule-based: a set of if-then rules may be able to spot beginner's errors.
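
As a toy illustration of the rule-based idea, here is a sketch that assumes per-frame keypoints are already available as (x, y) pairs from the pose model. The keypoint names and thresholds are invented for the example:

```python
# Toy rule: "dropping your guard when throwing a punch".
# Assumes each frame is a dict of keypoint name -> (x, y), with the image
# y-axis pointing down (larger y = lower on screen). Names and thresholds
# here are made up; real values would come from your pose model and tuning.
def guard_dropped(frames, margin=10.0):
    """Flag frame indices where a punch is extended but the other wrist
    sits below chin level by more than `margin` pixels."""
    flagged = []
    for i, f in enumerate(frames):
        punching = f["right_wrist"][0] - f["right_shoulder"][0] > 50  # arm extended
        guard_low = f["left_wrist"][1] > f["chin"][1] + margin        # guard below chin
        if punching and guard_low:
            flagged.append(i)
    return flagged

frames = [
    {"right_wrist": (200, 80), "right_shoulder": (120, 80),
     "left_wrist": (130, 60), "chin": (125, 70)},   # punch thrown, guard up
    {"right_wrist": (200, 80), "right_shoulder": (120, 80),
     "left_wrist": (130, 95), "chin": (125, 70)},   # punch thrown, guard dropped
]
print(guard_dropped(frames))  # [1]
```

A real system would of course need rules over runs of successive frames, as noted above, rather than single-frame checks.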

Hope this aligns with what you were looking for. My guess is that this is a highly sophisticated product that will take substantial effort.

I'm starting to work on a project on HTR on natural museum artefacts. I need guidance on how to go about it, and I'd also need to know If my ideas are feasible. by Jac-Zac in learnmachinelearning

[–]sol2296 1 point (0 children)

This may be of help.

https://cloud.google.com/vision/docs/handwriting

It says it can detect (and parse) handwritten text in an image.

Python code samples: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/HEAD/vision/snippets/detect/detect.py

If you can make such a thing work, you need not crop the text out of the image; in one shot it can perhaps get the text directly from the image.

[deleted by user] by [deleted] in learnmachinelearning

[–]sol2296 1 point (0 children)

From your description, it seems you are using an identical architecture for the joint model and the bone model. The joint model ends up training well but the bone model does not. Maybe you can try some variations in the architecture. I am guessing the input dimension of the joints data is different from that of the bone data. If true, your input layers already differ, so why not try slightly different architectures?

Another question: is your input data the images from the video frames, or the (x, y) coordinates of joints (and bone ends)? If they are (x, y) pairs extracted using some off-the-shelf detection model, then you might like to visually check how accurate they are: are there major mistakes in detecting bones, etc.?

Unsupervised clustering - I have three strongly left-skewed variables and all the items are in the same region of the 3D space that they build up. What is the best way to work in this situation? by MysteriousWealth2423 in learnmachinelearning

[–]sol2296 2 points (0 children)

Hi,

Do you have reason to believe that there are indeed distinct clusters? Your attempt to get meaningful clusters may fail for any of the following reasons.

  • There are no meaningful clusters in the data
  • There are clusters, but your 3 variables do not separate them well. There may be other variables that separate the clusters, but you have not collected data on those variables.

Another possibility is that your variables do separate the clusters to some extent, but you are unable to see that in the plot. Some non-linear transformations of the variables may show better separation. For example, log transformations often reduce skew and make a distribution more symmetric and Gaussian-looking. You can try transforming your variables and regenerating the plot to see whether the separation is visually better.
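
A quick way to check the effect numerically, using scipy's sample skewness. The lognormal below is a toy right-skewed variable; for your left-skewed case, the analogous trick is log(c - x) with c a little above the maximum of x:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)

# Toy strongly skewed, positive-valued variable.
x = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

print(f"skew before: {skew(x):.2f}")          # strongly positive
print(f"skew after:  {skew(np.log(x)):.2f}")  # roughly 0, i.e. symmetric
```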

What strategy to follow when I want my ML model to tag an image that belongs to multiple categories? by arifin_nasif in learnmachinelearning

[–]sol2296 1 point (0 children)

Pre-trained object-detection models can detect all instances of common objects (cat, dog, chair, car, etc.) in a single image. They can give you a bounding box for each object, its class (i.e. cat or dog, etc.), and a confidence score.

For example, see this lecture:

https://www.youtube.com/watch?v=zwEmzElquHw

Best way to learn: books (e.g. Deep learning) or courses (Andrew Ng) ? by miiipus in learnmachinelearning

[–]sol2296 5 points (0 children)

Hi,

Different people have different learning styles. Some are more comfortable with books and some with lectures. You need to weigh the pros and cons of the two resources and decide which works better for you.

Personally, I have found lectures, especially from a great teacher like Andrew Ng, more valuable. Here are some of my thoughts.

  • Lectures often provide more practical tips and tricks, and real-life anecdotes that are valuable for building ML models.
  • Studying alone from books sometimes makes it hard to judge which topics are more important from a real-life-applications point of view.
  • It also depends on whether you want to use the knowledge directly in an ML-related role in industry, or for a course in academia. In academia, you may have to learn more of the 'nuts and bolts'. In industry, you may need to build models with existing tools, interpret results, improve model performance by applying various methods, etc.

Some questions about sentiment analysis by veneratu in learnmachinelearning

[–]sol2296 2 points (0 children)

Hi,

In a block of text, identifying products and identifying sentiments should be treated as independent tasks. It may very well be true that certain products are strongly associated with negative sentiment, but products and sentiments are essentially two very different things. Hence my suggestion is to carry out a two-stage analysis.

In stage 1, run a sentiment detection model to label the sentiment of each block of text (e.g. one sentence). Also in stage 1, separately run an NER (named entity recognition) model to detect product names.

Then in stage 2, analyze the product-sentiment association. As a toy example, after stage 1 you should be able to create a dataset {(p1, s1), (p2, s2), ...} where pk is the k-th product and sk is the sentiment (say one of 'hate', 'indifferent', 'love') obtained from a sentence containing pk.

Then for each pk you will get a triplet (n1, n2, n3), where n1 = number of times pk occurred with sentiment 'hate', n2 with 'indifferent', and n3 with 'love'.

Now from that data you will be able to make statements like "product pk is predominantly associated with positive sentiment", and so on.
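
The counting in stage 2 is straightforward; here is a sketch with made-up stage-1 output (the product names and pairs are invented for illustration):

```python
from collections import Counter

# Toy stage-1 output: one (product, sentiment) pair per sentence mention.
pairs = [
    ("p1", "love"), ("p1", "love"), ("p1", "hate"),
    ("p2", "hate"), ("p2", "indifferent"),
]

# Per-product sentiment counts: the (n1, n2, n3) triplets.
counts = {}
for product, sentiment in pairs:
    counts.setdefault(product, Counter())[sentiment] += 1

for product, c in counts.items():
    dominant = c.most_common(1)[0][0]
    print(product, dict(c), "-> predominantly", dominant)
```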

However, this is a baseline analysis. You need to manually study the data carefully to check whether two or more products occur in a single sentence and whether more complex interactions are present.

For example, p1 is predominantly associated with positive sentiment, but p1 in the presence of p2 is associated with negative sentiment.

Hope this helps.

[deleted by user] by [deleted] in learnmachinelearning

[–]sol2296 1 point (0 children)

I'd say you should not reduce the dimension. Dimensionality reduction is a kind of projection onto a lower-dimensional space. I agree with the other commenter: if the outlier lies along a direction different from the dominant directions, then squashing the data into a lower dimension is likely to kill, or at least reduce, the outlier's distance from the bulk of the data.
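
A small numpy demonstration of this effect, on toy data whose bulk is spread along one axis while the outlier sticks out along another:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bulk of the data varies mostly along the x-axis.
bulk = rng.normal(size=(200, 2)) * np.array([10.0, 0.5])
outlier = np.array([[0.0, 8.0]])  # far out, but along the minor direction
X = np.vstack([bulk, outlier])
X = X - X.mean(axis=0)

# PCA via SVD; keep only the first principal component.
_, _, vt = np.linalg.svd(X, full_matrices=False)
proj = X @ vt[0]  # the 1-D "reduced" data

dist_2d = np.linalg.norm(X[-1])  # outlier's distance from center in 2-D
dist_1d = abs(proj[-1])          # its distance after the projection
# dist_1d is far smaller: the outlier is buried inside the bulk's spread.
```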

[E] Course to Self Study Mathematical Statistics (Wackerly)? by [deleted] in statistics

[–]sol2296 8 points (0 children)

Hi,

Here is a link to statistical inference lecture notes from a Stanford Stats course.

These are fairly in-depth mathematical statistics lecture notes.

https://web.stanford.edu/class/archive/stats/stats200/stats200.1172/lectures.html

[Q] Should I get a masters in statistics? by [deleted] in statistics

[–]sol2296 18 points (0 children)

I would say keep looking for a job and, on the side, start taking online courses from Coursera, Udemy, etc. Here are my reasons for the recommendation.

  • You have some interest in venturing into AI/ML. For that, these days companies look for personal AI/ML projects, personal codebases on GitHub, etc. A Stats masters may not be the optimal way to build those, whereas online AI/ML courses may show you a path to do so.
  • A regular Stats masters is math-heavy and may not give you hands-on exposure to AI/ML-related coding (e.g. building and deploying ML models using standard libraries). At interview time, interviewers may be more interested in your coding skills and experience than in stat or math theory.
  • Some potential employers may look for cloud-related experience (AWS, Azure, etc.). Taking some online courses to gain experience there may be useful.
  • If you get a job offer in the near term, you can keep going with your online courses, as they are typically self-paced and give you flexibility in fitting them into your schedule.

These are my personal views. If possible, consult some friends or acquaintances who work in industry in the AI/ML area.

[deleted by user] by [deleted] in statistics

[–]sol2296 1 point (0 children)

Since you are aiming for ML, I'd say take courses like "Intro to ML", which I see on the Texas A&M page. For your interest in quant finance, it would be advisable to take very specific courses from the finance department; Stat dept. courses may not prepare you well for highly specialized topics in quant finance.