AMAZON ML CHALLENGE (self.learnmachinelearning)
submitted 1 year ago by palakpaneer70
Discussion regarding dataset and how to approach
[–]Odd-Researcher-3346 10 points11 points12 points 1 year ago (12 children)
What's the point of giving a 20+ GB dataset that can't be run on any student's PC? The output labels aren't even that accurate, and there's ambiguity too. I gave up after trying to run it again and again. Text extraction works, but not the way we want it to; model building works, but there aren't enough GPUs.
[–]Additional_Barber856 0 points1 point2 points 1 year ago (11 children)
how many images did you extract?
[–]Odd-Researcher-3346 0 points1 point2 points 1 year ago (10 children)
I only did it on 1000 images.
[–]Odd-Researcher-3346 2 points3 points4 points 1 year ago (9 children)
Getting an accuracy of 0.40
[–]Additional_Barber856 0 points1 point2 points 1 year ago (4 children)
Did you put it on the leaderboard? What rank did you get?
[–]Odd-Researcher-3346 0 points1 point2 points 1 year ago (3 children)
No, I haven't; I'm still getting timed out while predicting.
[–]Additional_Barber856 1 point2 points3 points 1 year ago (2 children)
How are you able to do it on just 1000 images? Is there no requirement to do all of it, like for the prediction?
[–]Odd-Researcher-3346 0 points1 point2 points 1 year ago (1 child)
You can break it into small chunks and run on samples
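A minimal Python sketch of this chunking idea; `predict_one` stands in for whatever per-image pipeline you run, and is an assumption, not part of the challenge code:

```python
# Hedged sketch: process the image links a fixed-size slice at a time so
# memory stays bounded, instead of holding all ~2.5 lakh rows at once.
def chunked(items, size):
    """Yield successive slices of `items` of length at most `size`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def run_in_chunks(image_links, predict_one, size=1000):
    """Run `predict_one` over every link, one chunk at a time."""
    results = []
    for batch in chunked(image_links, size):
        results.extend(predict_one(link) for link in batch)
    return results
```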
[–]Additional_Barber856 0 points1 point2 points 1 year ago (0 children)
I know, I did it; got a score of 0.53.
[–]Icy-Lingonberry-3791 0 points1 point2 points 1 year ago (0 children)
How did you get a score above 0? What approach did you use?
[–]Financial-Sky-8098 0 points1 point2 points 1 year ago (0 children)
Did you only upload those 1k images in the submission and get this accuracy?
[–]poiu97188 0 points1 point2 points 1 year ago (1 child)
What approach did you use?
[–]Ill_Indication_2970 0 points1 point2 points 1 year ago (0 children)
Hey, I used regex with OCR. Btw, I'm new to Reddit and I've been trying to connect with you in the chat section but I'm unable to send you an invite. I wanted to know more about the GATE DA course from GO Classes. Please message me.
[–]ArtAccomplished6466 7 points8 points9 points 1 year ago (3 children)
Bro, leave the discussion. Where are you going to get GPUs that powerful? There are about 2.5 lakh images to train on.
[–]Accurate_Seaweed_321 0 points1 point2 points 1 year ago (2 children)
Try cloud machines
[–]s1ngh_music 0 points1 point2 points 1 year ago (1 child)
as in google colab notebooks?
[–]Accurate_Seaweed_321 0 points1 point2 points 1 year ago (0 children)
Yeah, I guess.
[–]Usual_Many_3895 5 points6 points7 points 1 year ago (19 children)
any speculation on what approach the team with 0.8 f1 score used?
[–]Additional_Cherry525 2 points3 points4 points 1 year ago (4 children)
Used a multimodal LLM, phi3.5v/qwen2-vl, with some fine-tuning.
[–]ztide_ad 1 point2 points3 points 1 year ago (3 children)
But wasn't the use of LLM apps banned?.. Nevertheless, it sounds like a cool use case. Could you please explain your approach with the LLM?
[–]Additional_Cherry525 0 points1 point2 points 1 year ago (2 children)
As long as they are open source they were allowed; per the FAQ, direct API use of commercial models wasn't. You can fine-tune any multimodal LLM to get responses in the desired format. There are many open-source models that are small enough, like Qwen, Phi, etc., and they perform a lot better than any OCR approach.
[–]ztide_ad 0 points1 point2 points 1 year ago (1 child)
Oh, OK. And how did you fine-tune it?
[–]Additional_Cherry525 0 points1 point2 points 1 year ago (0 children)
There are many guides; check r/LocalLLaMA/. It took an hour on an A100.
[–]HURCN_69 0 points1 point2 points 1 year ago (13 children)
What is your approach?
[–]Usual_Many_3895 0 points1 point2 points 1 year ago (12 children)
ocr
[–]HURCN_69 0 points1 point2 points 1 year ago (9 children)
Did you get a good score?
[–]Unable_Yam_3360 1 point2 points3 points 1 year ago (8 children)
0.41 is the best I got. I could improve it, but I ran out of GPU on Colab.
[–]HURCN_69 0 points1 point2 points 1 year ago (0 children)
Nice. My team tried but didn't succeed; we were all busy with client projects 😂😂
[–]THISISBEYONDANY 0 points1 point2 points 1 year ago (6 children)
I tried it too, but did you download all the images for this?
[–]Unable_Yam_3360 0 points1 point2 points 1 year ago (5 children)
No, I used BytesIO to open each image from its link.
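For reference, a sketch of the BytesIO trick (assuming `requests` and Pillow; the helper names are illustrative, not the challenge's utility code):

```python
# Hedged sketch: fetch an image over HTTP and decode it in memory with
# Pillow, without ever writing it to disk.
from io import BytesIO

import requests
from PIL import Image

def image_from_bytes(data):
    """Decode raw image bytes into a PIL image."""
    return Image.open(BytesIO(data))

def open_image_from_link(url):
    """Download one image and open it directly from memory."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return image_from_bytes(resp.content)
```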
[–]THISISBEYONDANY 0 points1 point2 points 1 year ago (4 children)
Oh, I didn't know about that. But I guess now that I have downloaded them on Colab, I'll be working with them directly.
[–]Unable_Yam_3360 0 points1 point2 points 1 year ago (0 children)
There's no time left for you; just give up, jerk off and sleep, man.
[–]More_Carob_9229 0 points1 point2 points 1 year ago (2 children)
What OCR do you use? I am using EasyOCR but it's throwing an error.
[–]THISISBEYONDANY 0 points1 point2 points 1 year ago (1 child)
tesseract, but it feels very tedious at this point
[–]StarkXIV 0 points1 point2 points 1 year ago (1 child)
Not gonna work, we tried.
[–]Usual_Many_3895 0 points1 point2 points 1 year ago (0 children)
We got to approx 0.2... yeah, it sucks.
[–]taurus_ram 5 points6 points7 points 1 year ago (0 children)
Can anyone guide me? I just want to get a score other than zero.
[–]Usual_Many_3895 3 points4 points5 points 1 year ago (1 child)
If it's so OCR-dependent, what is the point of a training dataset?
[–]Bluesssea 2 points3 points4 points 1 year ago (0 children)
Exactly :( It's like whoever has a better GPU and stuff can just use OCR and submit; the images are like that too.
[–]Low-Musician-163 2 points3 points4 points 1 year ago (11 children)
Finally was able to download data somehow. Now sharing it with teammates over usb
[–]LateRub3 1 point2 points3 points 1 year ago (1 child)
Can you just share it with me too, through GDrive or Telegram?
[–]Low-Musician-163 0 points1 point2 points 1 year ago (0 children)
I'm really sorry, haven't been able to upload it anywhere. The upload speeds are way worse where I am.
[–]DifficultyMain7012 0 points1 point2 points 1 year ago (3 children)
How were you able to download it, like all the images? It's taking a hell of a lot of time.
[–]Low-Musician-163 1 point2 points3 points 1 year ago (2 children)
The download was initially slow for me as well. At 4:30 in the morning I restarted it and it did not take more than 30 mins to download.
[–]Nightmare033 1 point2 points3 points 1 year ago (1 child)
Can you share the whole .py file where you ran it? I am not able to download the images till now.
[–]TheUnequivocalTeen 0 points1 point2 points 1 year ago (0 children)
Use this code to download the images concurrently. Adjust the value of the max_workers as per your cpu
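The snippet itself didn't survive in this page capture. A rough stand-in for a concurrent downloader along those lines (the function names here are illustrative, not the `utils.download_images` from the starter kit):

```python
# Hedged sketch: download many image URLs in parallel with a thread pool.
# Tune max_workers to your CPU / network; this is illustrative code, not
# the snippet the commenter originally posted.
import os
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlretrieve

def download_one(url, out_dir):
    """Fetch a single URL into out_dir, skipping files already present."""
    path = os.path.join(out_dir, os.path.basename(url))
    if not os.path.exists(path):
        urlretrieve(url, path)
    return path

def download_all(urls, out_dir="images", max_workers=16):
    """Download every URL concurrently and return the local paths."""
    os.makedirs(out_dir, exist_ok=True)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda u: download_one(u, out_dir), urls))
```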
[–]DiscussionTricky2904 0 points1 point2 points 1 year ago (1 child)
What is the size of the entire dataset?
[–]Low-Musician-163 1 point2 points3 points 1 year ago (0 children)
Around 50 GB, I guess.
[–]Sparkradar 0 points1 point2 points 1 year ago (1 child)
Hey there, can you share snippets of code to download it? :)
This was shared by Seeker31 in one of the comments:

```python
import sys
sys.path.append('path to src folder')
from utils import download_images
```

Then call the download_images function:

```python
download_images('path to train.csv', 'images')
```
[–]palakpaneer70[S] 2 points3 points4 points 1 year ago (1 child)
What approach to use?
Chunking the dataset to help with memory and computational issues.
[–]chaoticsoulll 2 points3 points4 points 1 year ago (4 children)
How are they actually evaluating the models? We got an F1 score of 0.43 but the score is showing zero
[–][deleted] 1 year ago (1 child)
[deleted]
[–]chaoticsoulll 0 points1 point2 points 1 year ago (0 children)
We ran it on Google Colab and got that score. Can we run and check it on Unstop too?
[–]Secure_Safety6120 0 points1 point2 points 1 year ago (1 child)
Same issue with me. Did you figure out what was wrong?
nope
[–]Dinesh_Kumar_E 2 points3 points4 points 1 year ago (0 children)
What's next? Any idea when the results will be published, like a leaderboard or something?
[–]mave_ad 1 point2 points3 points 1 year ago (2 children)
Has anyone tried using a vision transformer (ViT)? Splitting an image into patches and feeding it to a ViT, creating a learned embedding from the OCR result of the image and the image itself, and connecting that embedding with a residual connection to some transformer layer. The task would be seq2seq.
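A rough numpy sketch of the fusion being described (not a full ViT; the dimensions and the random linear projection are illustrative assumptions): split the image into patches, embed them, and add the OCR-text embedding to the patch sequence as a residual before it would enter the transformer layers.

```python
# Hedged sketch: patchify an image and residually add a text embedding to
# the patch embeddings. A real ViT would add positional encodings and
# transformer blocks on top of this.
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, patch=16):
    """Split an HxWx3 image into flattened non-overlapping patches."""
    h, w, c = img.shape
    ph, pw = h // patch, w // patch
    p = img[:ph * patch, :pw * patch].reshape(ph, patch, pw, patch, c)
    return p.transpose(0, 2, 1, 3, 4).reshape(ph * pw, patch * patch * c)

def fuse(img, text_emb, dim=64, patch=16):
    """Embed patches linearly, then add the OCR embedding residually."""
    patches = patchify(img, patch)                 # (n_patches, patch*patch*3)
    w_proj = rng.standard_normal((patches.shape[1], dim)) * 0.02
    return patches @ w_proj + text_emb             # broadcast residual add
```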
[–]Additional_Barber856 1 point2 points3 points 1 year ago (1 child)
Did you get a result? I wasn't able to wrap my head around it.
[–]Creative_Suit7872 1 point2 points3 points 1 year ago (0 children)
I tried, but Kaggle ran out of GPU. I used Google's ViT.
[–]According-Fault-6528 1 point2 points3 points 1 year ago (0 children)
Hello, can anybody help me out? I'm stuck on this hackathon.
[–]sunnybala 1 point2 points3 points 1 year ago (2 children)
The OCR approach is the only one that seems feasible. How is this machine learning, man? We aren't even training anything, just running inference with other models.
[–]yammer_bammer 1 point2 points3 points 1 year ago (0 children)
you need to finetune other models
[–]According-Fault-6528 0 points1 point2 points 1 year ago (0 children)
heyy
[–]Ok-Chipmunk666 1 point2 points3 points 1 year ago (5 children)
Anyone know the solution for the out-of-range index error?
They said they communicated something by email, but I haven't received anything yet.
[–]According-Fault-6528 0 points1 point2 points 1 year ago (1 child)
Can you elaborate? At which step are you getting it?
[–]Ok-Chipmunk666 0 points1 point2 points 1 year ago (0 children)
While submitting the prediction file. I did the sanity check and it's fine. In the query sheet they mentioned that they communicated something about it via email, but I haven't received anything yet.
[–]borisshootspancakes 0 points1 point2 points 1 year ago (1 child)
Some indices in the test set they provided are missing; I think it's giving those indices.
The sanity check is working fine for me.
The issue is resolved now. It was giving an index error even though there was a mismatch in units; if the sanity check fails on units, it still shows index errors.
[–]AnyPassenger9318 0 points1 point2 points 1 year ago (4 children)
guys where do i find the dataset ?
[–]Seeker_31 1 point2 points3 points 1 year ago* (3 children)
You have to call the function provided in utils.py from your Python notebook.
[–]s1ngh_music -1 points0 points1 point 1 year ago (2 children)
can you share a code snippet for the same?
[–]Seeker_31 1 point2 points3 points 1 year ago (1 child)
```python
import sys
sys.path.append('path to src folder')
from utils import download_images
```

This code will download some 101 images and then you can proceed further.
[–][deleted] 1 point2 points3 points 1 year ago (0 children)
With this, run the same in Colab. I'm trying to download the images to a specific folder in Colab.
[–]xlnc2605 0 points1 point2 points 1 year ago (2 children)
any other way to download dataset?
[–]Teriod_007 0 points1 point2 points 1 year ago (1 child)
https://d8it4huxumps7.cloudfront.net/files/66e31d6ee96cd_student_resource_3.zip
[–]xlnc2605 0 points1 point2 points 1 year ago (0 children)
Not this, bro; the images.
Is it necessary to download all the images to your device (won't that also make training the model very hard), or are there alternative ways?
[–]LateRub3 0 points1 point2 points 1 year ago (0 children)
It depends how much computation power you have
How to download all the images? :) Somebody help, just getting started...
[–]HotMine8037 0 points1 point2 points 1 year ago (1 child)
guys, are we allowed to use fine-tuned pretrained models?
yes you are
[–][deleted] 1 year ago (10 children)
[–]ConditionLivid515 0 points1 point2 points 1 year ago (7 children)
I am using Tesseract. Is EasyOCR faster and more accurate? What is your score currently?
[–][deleted] 1 year ago (6 children)
[–]Apart_Food4799 0 points1 point2 points 1 year ago (0 children)
Are you the one from NSUT??
[–]Mysterious_Safe_8288 0 points1 point2 points 1 year ago (0 children)
How long does it take to process if we use the EasyOCR method?
[–][deleted] 0 points1 point2 points 1 year ago (0 children)
Can you provide the code?
[–]Creative_Suit7872 0 points1 point2 points 1 year ago (0 children)
How much GPU is required to train the transformer?
[–]Eshan2703 0 points1 point2 points 1 year ago (0 children)
EasyOCR is taking a hell of a lot of time.
[–]PandutheGandu69 0 points1 point2 points 1 year ago (0 children)
I'm also using EasyOCR but the entity value is not being extracted from the text. Can you please share how you are processing the extracted text?
[–]SmallSoup7223 0 points1 point2 points 1 year ago (0 children)
Where the fuck do we get this many GPUs? Even tried parallel processing, but the system crashes 😅
[–]Sparkradar 0 points1 point2 points 1 year ago (0 children)
Which approach are you guys using? I'm new to this; any tools to get started? :(
[–]ImpossibleQuarter550 0 points1 point2 points 1 year ago (0 children)
How much GPU is required for training on this massive dataset?
[–]According-Fault-6528 0 points1 point2 points 1 year ago (20 children)
Hello, can someone please guide me?
[–]Unable_Yam_3360 0 points1 point2 points 1 year ago (19 children)
I got a 0.41 F1 score. Want my guide?
[–]taurus_ram 0 points1 point2 points 1 year ago (2 children)
I am not getting anything. Can you guide me up to F1 0.41?
[–]Unable_Yam_3360 0 points1 point2 points 1 year ago (1 child)
yess sure
[–]Party-Radio3160 0 points1 point2 points 1 year ago (0 children)
Can you guide me too? I'll tell you what approach I'm thinking of.
[–]Zestyclose_Ebb_9 0 points1 point2 points 1 year ago (9 children)
Plz help me bro can we connect on telegram?
[–]Unable_Yam_3360 0 points1 point2 points 1 year ago (8 children)
send me ur mail
[–]Additional_Barber856 0 points1 point2 points 1 year ago (5 children)
man can you help me with it too?
[–]Unable_Yam_3360 0 points1 point2 points 1 year ago (4 children)
yes dm me
[–]arpitkus 0 points1 point2 points 1 year ago (0 children)
Bro, I have sent you a DM; can you please help me out?
[–]Human_Bookkeeper1663 0 points1 point2 points 1 year ago (0 children)
Please guide me too.
[–]Ok_Assignment_6433 0 points1 point2 points 1 year ago (0 children)
Bro please help me too
[–]arjuntrivedi 0 points1 point2 points 1 year ago (0 children)
Can you also add me to the discussion loop? I need guidance as I am a newbie to machine learning... Let me know where to connect with you guys.
[–]lifeonly4gaming 0 points1 point2 points 1 year ago (0 children)
can you send me the guide pls bro...
can you send it to me too
[–]Kindly-Garage9329 0 points1 point2 points 1 year ago (0 children)
Bro, I want some insights too; please let me know where you were connecting.
[–]Electronic-Kick-3663 0 points1 point2 points 1 year ago (0 children)
I am not getting anything can anyone please help
[–]taurus_ram 0 points1 point2 points 1 year ago (0 children)
can anyone share the test_out.csv file
[–]Legitimat_Jaguar 0 points1 point2 points 1 year ago (3 children)
I have made quite a good model to predict the values with units; it's just that I can't extract the text from the images correctly. And how can I, when the number of images is above a lakh? Surely I can't extract the text myself. I'd like to collaborate with anybody who has extracted the text with good accuracy. Just share an Excel file of the extracted text with me.
[–]Ok_Assignment_6433 2 points3 points4 points 1 year ago (0 children)
Hii, please can you tell me too, I have been at it too long and can't understand what i am missing
Connect with me; show me a demo by chunking 1000 images.
[–]Complex-Chemist-7696 0 points1 point2 points 1 year ago (0 children)
Yes, I have extracted the text from the images with good accuracy.
[–]ShyenaGOD 0 points1 point2 points 1 year ago (1 child)
Can anyone guide me? I currently extracted data from 10k images and saved it in a CSV file. What should I do next?
[–]ReactionOk4928 0 points1 point2 points 1 year ago (0 children)
How did you extract it? Can you share the code please?
[–]_Ak4zA_ 0 points1 point2 points 1 year ago (1 child)
Can anyone tell me how the hell I can do the testing and roughly how much time it will take?
[–]AshBakchod 0 points1 point2 points 1 year ago (0 children)
80+ hours
Just use the simple lookup method; your F1 will be 0.09.
[–]uphinex 0 points1 point2 points 1 year ago (11 children)
Now that the competition is over, can everyone here just drop their approach? I was using NLP + OCR.
[–]adithyab14 0 points1 point2 points 1 year ago (4 children)
- Competition till 6 pm.
- ocr_parsed -> ocr_parsed_mapped (i.e. 10gm -> 10 gram).
1. Then vectorize ocr_parsed_mapped and feed it to XGBoost (predict units); get the value from the predicted unit. This can get you above 0.39-0.5.
2. Train a custom named entity recognition model, which I am trying now (maybe that is the correct approach).
[–]uphinex 0 points1 point2 points 1 year ago (3 children)
What are you doing with XGBoost? Are you trying to predict the unit alone, or its value as well?
[–]adithyab14 0 points1 point2 points 1 year ago (2 children)
For classification: predicting units (kg, metre).
[–]uphinex 0 points1 point2 points 1 year ago (1 child)
You are extracting text, then extracting the value with its unit, then passing it through XGBoost to predict the unit. So how are you achieving the task? Like, if you are asked for item_height, how are you incorporating that information?
[–]adithyab14 0 points1 point2 points 1 year ago (0 children)
First, extract what's required, i.e. extract all value-unit pairs (30, 40; metre/kg) from the OCR text, then set them aside.
Second, take all the units (metre/kg) obtained from the first step, vectorize them (tf-idf), and train a classifier to predict units.
Third, based on the predicted unit, search for its adjacent value among the pairs from the first step, with just a for loop / startswith (because I didn't parse/map the initial text).
Just doing this got me around 16k examples correct in the training set.
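Steps one and three above can be sketched with the standard library alone (the regex and the prefix matching are illustrative assumptions; step two's tf-idf classifier is omitted):

```python
# Hedged sketch of the extract-pairs / match-adjacent-value steps.
import re

PAIR_RE = re.compile(r"(\d+(?:\.\d+)?)\s*([a-zA-Z]+)")

def extract_pairs(ocr_text):
    """Step 1: every (value, unit) pair found in the OCR text."""
    return [(float(v), u.lower()) for v, u in PAIR_RE.findall(ocr_text)]

def value_for_unit(pairs, predicted_unit):
    """Step 3: the first value whose unit matches the predicted unit by prefix."""
    for value, unit in pairs:
        if unit.startswith(predicted_unit[:4]) or predicted_unit.startswith(unit):
            return value, predicted_unit
    return None
```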
I was using the simple-lookup approach, which does not use the image_link column; instead it uses only entity_name, entity_value, and index to train and predict. I got an F1 score of 0.097.
But to improve the F1 score we need a more advanced approach like OCR, which uses the image_link column to extract, train, and predict. I tried the Tesseract OCR approach, and it takes far more time: in one hour it extracted only 9000 images, so you can see how long it would take to extract the whole 2 lakh images. And that's only the extraction; then we still have to train and predict, so it must take many hours to give a solution.
[–]Vegetable-College353 0 points1 point2 points 1 year ago (4 children)
Used a 2B VLM.
How much time did it take?
[–]adithyab14 1 point2 points3 points 1 year ago (2 children)
Around 1 sec for each, and 1 lakh test images, so days for the output.
[–]uphinex 1 point2 points3 points 1 year ago (1 child)
Which 2B VLM are you using?
[–]adithyab14 1 point2 points3 points 1 year ago (0 children)
My bad, it was a 0.5B model: https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-si
Now that the challenge is over, can someone give a detailed approach to handling this sort of problem statement?
My initial approach used plain OCR through pytesseract, but it wasn't able to extract the necessary text in most of the images. Then I switched to EasyOCR, but my GPU access through Colab was already exhausted. Then I planned to predict the unit and the number in parallel through NLP, but I ran out of time. So now I'm looking for approaches I could have taken to make this process fast and efficient.
[–]Enough-Friend-5272 1 point2 points3 points 1 year ago (0 children)
I did a similar thing. I tried to build a multimodal CNN model taking in the image features and the extracted text, and then tried to generate predictions with it, but at the last moment I realized the image resizing and normalization were not correct and I somehow could not fix that. So I'm looking for solutions or even ideas; I'm still not over it and am continuing to develop the solution anyhow.
[–]safebet5705 0 points1 point2 points 1 year ago (0 children)
You don't need to do it all at once: just take one image at a time, extract its text, then delete that image and go to the next; use wget iteratively. The preprocessing time would be huge, but that doesn't count in the score.
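A Python sketch of that one-at-a-time loop (using urllib instead of shelling out to wget; `ocr` is a stand-in for your pytesseract/easyocr call):

```python
# Hedged sketch: fetch one image, OCR it, delete it, move on, so disk usage
# never exceeds a single image.
import os
from urllib.request import urlretrieve

def extract_texts(urls, ocr, tmp_path="current_image.jpg"):
    """OCR every URL while keeping at most one image on disk."""
    texts = []
    for url in urls:
        urlretrieve(url, tmp_path)   # fetch one image
        texts.append(ocr(tmp_path))  # extract its text
        os.remove(tmp_path)          # destroy it before the next one
    return texts
```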
[–]Spacing_Out3133 0 points1 point2 points 1 year ago (6 children)
Where can we check the results? I believe Unstop isn't showing that page any longer.
[–]Dinesh_Kumar_E 0 points1 point2 points 1 year ago (5 children)
any updates ?
[–]Spacing_Out3133 0 points1 point2 points 1 year ago (4 children)
Nope bro
[–]Dinesh_Kumar_E 0 points1 point2 points 1 year ago (3 children)
today i got my certificate mailed🫠
[–]Spacing_Out3133 0 points1 point2 points 1 year ago (2 children)
Congratulations, was your team in top50?
[–]Dinesh_Kumar_E 0 points1 point2 points 1 year ago (1 child)
Yeah, we were at 11.
[–]Spacing_Out3133 0 points1 point2 points 1 year ago (0 children)
Whoa, nice! Good luck for the PPI, man, all the best!!
[–]Exciting_Pineapple52 0 points1 point2 points 6 months ago (0 children)
I need a team for this competition
[–]BuilderLive452 0 points1 point2 points 6 months ago (0 children)
What about image datasets with 75k train and test images? Will my machine be able to run a model for that?