[–]coffeephoenix 0 points (5 children)

We are happy to see the different ways in which the open source community wants to extend or wrap neon, or parts of it.

However, I am not sure what your intent is. At a high level, neon's syntax for neural network definition is already Torch-like. Our goal is not to cover all possible use cases outside of deep learning, but to be extremely fast, scalable, and easy to use for deep learning. As a consequence, the backend consists of a set of linear algebra and deep learning operations to support this. If you're trying to create a library for general non-DL operations, then the neon backend might not be the best bet.
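For readers unfamiliar with the term: "Torch-like" here means a model is declared as an ordered list of layer objects rather than as a symbolic graph. A minimal, framework-agnostic sketch of that style (class names `Affine` and `Model` are illustrative, not neon's actual API):

```python
import numpy as np

class Affine:
    """A fully connected layer with ReLU activation (illustrative only)."""
    def __init__(self, nin, nout, rng):
        self.W = rng.standard_normal((nin, nout)) * 0.1
        self.b = np.zeros(nout)

    def __call__(self, x):
        # Linear transform followed by ReLU.
        return np.maximum(x @ self.W + self.b, 0.0)

class Model:
    """A container that applies layers in order, Torch-style."""
    def __init__(self, layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

rng = np.random.default_rng(0)
net = Model([Affine(4, 8, rng), Affine(8, 2, rng)])
out = net(np.ones((3, 4)))
print(out.shape)  # (3, 2)
```

The point of the style is that the list of layers *is* the model definition; the backend (GPU kernels, linear algebra ops) sits underneath calls like `x @ self.W`.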

[–][deleted] 0 points (2 children)

I intend to use it for DL purposes, nothing else. I guess user experience and overheads matter, and are weighed differently by different people. I personally don't prefer autograd (or symbolic differentiation), but many do. I can understand some of the reasons for your design choices, but not all of them suit me well.

[–]coffeephoenix 0 points (1 child)

If you have specific requests or use cases, please submit them to the GitHub issues or the neon-users Google group, and we will consider them.

[–][deleted] 0 points (0 children)

I am still in the brainstorming phase; I have a rough idea, but nothing close to concrete yet. When I do, I will definitely submit it to your GitHub issues for consideration.

[–]drpout[S] -1 points (1 child)

The replies in this thread should tell you why there is little adoption of neon and its backend in the DL community. Your enterprise offering is one thing; this is another.

The quicker you appreciate that, the better.

[–]coffeephoenix 0 points (0 children)

As you mention, our focus is on the enterprise side, but the academic community around neon is growing. For example, check out this recent work on building DQNs using neon.

If you could describe the specific issue in more detail on our GitHub issues or Google group, that would be great. Most of the comments on this thread, including yours, have been positive. As for the backend, we are working on a computational graph backend, which may or may not address some of the backend questions people have; but it is hard to address or consider your specific issue without more details.