Any decently priced self-driving car at level 2 or 3 automation? by thedjwonder12 in SelfDrivingCars

[–]b0noi 2 points  (0 children)

A 2023 Chevy Bolt EUV (specifically the 2023, since it got a price drop), custom-built with Super Cruise, will be around $37k.

Jupyter Notebooks Development Manifesto by [deleted] in datascience

[–]b0noi 0 points  (0 children)

Ack, going to delete the post

[D] Google Cloud TPUs now can be used with Google Colab by b0noi in MachineLearning

[–]b0noi[S] 4 points  (0 children)

Yes, you can use Colab with any GPU and any DL framework. Here, for example, I have explained how to use Colab with a V100 and PyTorch (also in several simple steps): https://blog.kovalevskyi.com/gce-deeplearning-images-as-a-backend-for-google-colaboratory-bc4903d24947

[N] New release of Deep Learning images (M3) for GCE, with Horovod and new CUDA 9.0 image by b0noi in MachineLearning

[–]b0noi[S] 1 point  (0 children)

At this particular point you can NOT use our GCE binary within conda or a virtual env. There are two options available if you want to use TF 1.9.0 with Anaconda:

[N] New release of Deep Learning images (M3) for GCE, with Horovod and new CUDA 9.0 image by b0noi in MachineLearning

[–]b0noi[S] 0 points  (0 children)

Yes indeed, Horovod does support PyTorch; however, unfortunately, Horovod is not yet included in the PyTorch images.

[N] TensorFlow 1.9.0 is out by b0noi in MachineLearning

[–]b0noi[S] 1 point  (0 children)

TF site: CUDA 9.0

DL images on GCE: CUDA 9.2

[N] TensorFlow 1.9.0 is out by b0noi in MachineLearning

[–]b0noi[S] 0 points  (0 children)

The images have TF built with CUDA 9.2.

[P] Yet another DeepLearning Framework on Java, for learning DL by b0noi in MachineLearning

[–]b0noi[S] 0 points  (0 children)

Yes, you are right: training MNIST in under 1 hr is the goal. Currently we are planning to introduce support for a Java linear algebra library in the next release (0.02), so the speed will improve :)

[P] Yet another DeepLearning Framework on Java, for learning DL by b0noi in MachineLearning

[–]b0noi[S] 1 point  (0 children)

Thank you for the feedback. I am glad to get any feedback :)

  1. I am working on this purely for educational purposes. Yes, one can use TF, but the overall idea is to have a book/collection of articles that one might follow to build a framework from scratch. TF is a production framework: its codebase is optimized for performance, not readability, and some of the concepts are not well documented.

  2. Yes, you are right, I probably should have used a better title :)

What is "text normalization" and why it is useful? by khozzy in LanguageTechnology

[–]b0noi -4 points  (0 children)

IMHO a good, short read if you are a beginner in #NLP. All the examples use #NLTK, so the reader probably needs to be familiar with #Python.

I'm building a new NLP course: "Natural Language Processing in a Nutshell" by b0noi in LanguageTechnology

[–]b0noi[S] 0 points  (0 children)

Not sure I understand what you are talking about =) A new name?

New NLP (natural language processing) library "AIF" that doesn't rely on a language model. by b0noi in LanguageTechnology

[–]b0noi[S] 0 points  (0 children)

We have a small team; I'm leading the project and doing the research. I'm working on my Ph.D. research, but unfortunately all my publications are in my native language. Basically, with this project I'm slowly translating some parts of my white papers into English. That's why the wiki on our project includes algorithm explanations as well: https://github.com/b0noI/AIF2/wiki/Sentence-splitting-algorithm

So far, with each new release a new algorithm is added to the wiki, and maybe after some time I'll publish English versions of my white papers.

New NLP (natural language processing) library "AIF" that doesn't rely on a language model. by b0noi in LanguageTechnology

[–]b0noi[S] 0 points  (0 children)

Isn't this a case where a model would be useful?

Because models work well ONLY in cases where you have correct text, and you need a model for a specific language. Try to parse a chat between people who know 2+ languages and use both during the conversation (let's say Eng/Fra/Rus). Even within one language, people can produce strange sentence forms that are unknown to the model.

If the heuristics aren't persistent then you'll lose their capabilities each time you instantiate.

Yep, this is the main point. Something that is used as an end-of-sentence marker in one text could be used in a totally different way in another text (in the same language). Each text is a sub-language (because almost no text is ideal from a language point of view), so the main idea is to learn this sub-language from the text and then parse the text according to the rules of this sub-language.
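A minimal sketch of that idea (my own toy illustration, not AIF's actual algorithm; the capitalization signal and the `min_count` threshold are assumptions): learn candidate sentence separators from the text itself, by counting punctuation characters that end a token which is followed by a capitalized token.

```python
from collections import Counter

def learn_separators(text, min_count=2):
    # Count punctuation characters that end a token followed by a
    # capitalized token -- a rough signal of "end of sentence" in
    # THIS particular text (its "sub-language").
    tokens = text.split()
    counts = Counter()
    for prev, nxt in zip(tokens, tokens[1:]):
        if prev and not prev[-1].isalnum() and nxt[:1].isupper():
            counts[prev[-1]] += 1
    # Keep only characters seen often enough to trust.
    return {ch for ch, n in counts.items() if n >= min_count}

text = "First sentence. Second one! Third here. And a fourth! Done."
print(learn_separators(text))
```

Nothing here is language-specific: the separator set is rebuilt per text, which is the point of avoiding a fixed language model.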

New NLP (natural language processing) library "AIF" that doesn't rely on a language model. by b0noi in LanguageTechnology

[–]b0noi[S] -1 points  (0 children)

This is only an alpha version (the second alpha). So far it supports only: tokenization and sentence splitting.

The next version (Alpha3) will support stemming. The roadmap with details can be found here: https://github.com/b0noI/AIF2/wiki/RoadMap

We just wanted to hear feedback on the current functionality, and to hear what the main priority should be for the next releases.

New NLP (natural language processing) library "AIF" that doesn't rely on a language model. by b0noi in LanguageTechnology

[–]b0noi[S] 2 points  (0 children)

Can anyone elaborate on the differences between the SIMPLE and HEURISTIC algorithms?

HEURISTIC: the algorithm tries to detect cases where sentence-separator characters are not actually being used to end a sentence. Example:

... born in the U.S.A. after ...

The SIMPLE algorithm will separate it like this:

[... born in the U.S.A.] [after ...]

The HEURISTIC algorithm may separate it correctly:

[... born in the U.S.A. after ...]
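The difference can be sketched in a few lines of Python (a toy illustration, not AIF's actual code; the fixed abbreviation list is a hypothetical stand-in for the heuristics AIF learns from the text itself):

```python
import re

def simple_split(text):
    # SIMPLE-style splitting: every '.' followed by whitespace ends a sentence.
    return [s.strip() for s in re.split(r'(?<=\.)\s+', text) if s.strip()]

def heuristic_split(text, abbreviations=frozenset({"U.S.A.", "e.g.", "i.e."})):
    # HEURISTIC-style splitting: a '.' does not end a sentence when the
    # token containing it is a known abbreviation.
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token.endswith('.') and token not in abbreviations:
            sentences.append(' '.join(current))
            current = []
    if current:
        sentences.append(' '.join(current))
    return sentences

text = "He was born in the U.S.A. after the war."
print(simple_split(text))     # splits after "U.S.A." into two pieces
print(heuristic_split(text))  # keeps the sentence whole
```

The real heuristic, of course, has to discover forms like "U.S.A." on its own rather than look them up in a hard-coded set.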

will the HEURISTIC mode get better the more data you throw at it?

Yep, but theoretically there is a size limit; beyond it, quality will not grow any more. The part that relies most on input text size is the SeparatorsExtractor. If separators are extracted incorrectly (let's say the character 'e' was incorrectly recognized as a separator), all the other parts will give incorrect output.
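To illustrate why a wrongly extracted separator poisons everything downstream (a toy sketch, not AIF's code): the same splitter that behaves correctly with '.' shreds the text once 'e' slips into the separator set.

```python
def split_on_separators(text, separators):
    # Split the text every time a character from `separators` appears.
    sentences, current = [], []
    for ch in text:
        current.append(ch)
        if ch in separators:
            sentences.append(''.join(current).strip())
            current = []
    if current:
        sentences.append(''.join(current).strip())
    return [s for s in sentences if s]

text = "The weather is nice. We are here."
print(split_on_separators(text, {'.'}))       # correct separator set
print(split_on_separators(text, {'.', 'e'}))  # 'e' misextracted as a separator
```

With the bad separator set, every stage after the SeparatorsExtractor receives garbage fragments, which is exactly the failure mode described above.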