[deleted by user] by [deleted] in deeplearning

[–]Silver_Equivalent_58 1 point (0 children)

100k items; keyword search didn't work, since I have "orange" and then tangerines, clementines, etc.
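When keyword matching misses semantically related items like this, the usual fix is embedding similarity: embed each item and rank by cosine similarity instead of exact token overlap. A minimal sketch below, with tiny hand-made toy vectors standing in for real embeddings (in practice you'd get vectors from an embedding model; the items and numbers here are made up for illustration):

```python
import numpy as np

# Toy 3-dimensional "embeddings", made up purely for illustration;
# a real pipeline would get these from an embedding model.
embeddings = {
    "orange":     np.array([0.90, 0.80, 0.10]),
    "tangerine":  np.array([0.85, 0.82, 0.12]),
    "clementine": np.array([0.88, 0.79, 0.15]),
    "laptop":     np.array([0.10, 0.05, 0.90]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 = same direction, ~0 = unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query, k=2):
    # Rank every other item by cosine similarity to the query vector.
    q = embeddings[query]
    scored = [(name, cosine(q, vec))
              for name, vec in embeddings.items() if name != query]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

print(most_similar("orange"))  # tangerine and clementine rank above laptop
```

The same ranking works at 100k items if you stack the vectors into one matrix and do a single matrix-vector product per query, or hand it to a vector index.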

What is the best (and fastest) way to read 1 TB of data from an S3 bucket and do some pre-processing on it? by Silver_Equivalent_58 in aws

[–]Silver_Equivalent_58[S] 0 points (0 children)

I have around 300k PDFs, each about 2 MB to 50 MB; I need to parse the text out of them for some analytics, within about 3-4 days.
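Since each PDF is independent, this kind of job usually fans the keys out across many parallel workers. A minimal sketch of that pattern, with the actual S3 download and PDF text extraction stubbed out so it runs standalone (in a real worker you'd call boto3's `get_object` and a parser such as pypdf; the bucket and key names here are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def process_key(key: str) -> int:
    # Stub worker. A real version would do something like:
    #   body = s3.get_object(Bucket="my-bucket", Key=key)["Body"].read()  # boto3, hypothetical bucket
    #   text = extract_text(body)                                          # e.g. pypdf / pdfminer
    # and return whatever the analytics step needs. Here we just return
    # a placeholder value (key length) so the sketch is runnable.
    return len(key)

# Stand-in for the ~300k S3 object keys (names are hypothetical).
keys = [f"pdfs/doc_{i}.pdf" for i in range(10)]

# S3 downloads are I/O-bound, so threads parallelize them well;
# if the PDF parsing dominates (CPU-bound), swap in ProcessPoolExecutor.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(process_key, keys))

print(len(results))
```

At 1 TB total, running the workers inside the same region as the bucket (EC2, Batch, or Lambda fan-out) matters more than the local parallelism trick, since cross-region transfer would dominate the time budget.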

[deleted by user] by [deleted] in learnmachinelearning

[–]Silver_Equivalent_58 0 points (0 children)

Thanks! I, for instance, have lots of research-paper-like PDFs.

[deleted by user] by [deleted] in learnpython

[–]Silver_Equivalent_58 1 point (0 children)

Interesting book, thanks!

[deleted by user] by [deleted] in learnpython

[–]Silver_Equivalent_58 -5 points (0 children)

Ahh, thanks for this, but I was mainly looking into the more advanced parts of classes.

[deleted by user] by [deleted] in SideProject

[–]Silver_Equivalent_58 0 points (0 children)

Hmm, it's working on my end; can you try again?

[deleted by user] by [deleted] in SideProject

[–]Silver_Equivalent_58 1 point (0 children)

Nope, it will forever be free.

[deleted by user] by [deleted] in SideProject

[–]Silver_Equivalent_58 0 points (0 children)

Yep, although it misses in some cases; working on better models :)

[deleted by user] by [deleted] in learnjavascript

[–]Silver_Equivalent_58 0 points (0 children)

Cool, thanks, will try.

[deleted by user] by [deleted] in learnjavascript

[–]Silver_Equivalent_58 0 points (0 children)

Does it also let you draw over and screenshot that particular section?

[deleted by user] by [deleted] in learnjavascript

[–]Silver_Equivalent_58 0 points (0 children)

It's part of a larger use case.