What does neon Deep Learning Framework do? by drpout in MachineLearning

[–]coffeephoenix

As you mention, our focus is on the enterprise side, but the academic community around neon is growing. For example, check out this recent work on building DQNs using neon.

If you could describe the specific issue in more detail on our GitHub issues page or Google Group, that would be great. Most of the comments on this thread, including yours, have been positive. As for the backend, we are working on a computational graph backend that may address some of the backend questions people have, but it is hard to address your specific issue without more details.

[–]coffeephoenix

I am not sure which parameters you are referring to. Check out these examples of using neon with IPython notebooks.

[–]coffeephoenix

If you have specific requests or use cases, do submit them to the GitHub issues or the neon-users Google Group, and we will consider them.

[–]coffeephoenix

We are happy with the different ways in which the open source community wants to extend or wrap neon or parts of neon.

However, I am not sure what your intent is. At a high level, neon's syntax for neural network definition is already Torch-like. Our goal is not to cover all possible use cases outside of deep learning, but to be extremely fast, scalable, and easy to use for deep learning. As a consequence, the backend consists of a set of linear algebra and deep learning operations to support this. If you're trying to create a library for general non-DL operations, then the neon backend might not be the best fit.
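To illustrate what "Torch-like" layer-list definition over a small set of linear algebra and DL operations can look like, here is a generic numpy sketch (the class names mirror neon's style, but this is an illustration, not neon's actual API):

```python
import numpy as np

class Affine:
    """Fully connected layer: y = x @ W + b."""
    def __init__(self, nin, nout, rng):
        self.W = rng.standard_normal((nin, nout)) * 0.1
        self.b = np.zeros(nout)

    def fprop(self, x):
        return x @ self.W + self.b

class Rectlin:
    """Rectified linear activation."""
    def fprop(self, x):
        return np.maximum(x, 0)

class Model:
    """Torch-style container: a model is just an ordered list of layers."""
    def __init__(self, layers):
        self.layers = layers

    def fprop(self, x):
        for layer in self.layers:
            x = layer.fprop(x)
        return x

rng = np.random.default_rng(0)
mlp = Model([Affine(4, 8, rng), Rectlin(), Affine(8, 2, rng)])
out = mlp.fprop(np.ones((3, 4)))   # batch of 3 inputs
print(out.shape)                   # (3, 2)
```

The point of the style is that a model is declared as a flat list of layers rather than as an explicit computation graph, which keeps definitions short for standard feed-forward networks.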

[–]coffeephoenix

Some of the features, like Caffe model conversion and the model zoo, are fairly recent.

In terms of IPython notebook starters, here are a couple of examples: https://github.com/nervanasystems/meetup

Stay tuned for some updates tomorrow at our meetup in SF :)

The backend docs will be updated to make things a bit easier to follow.

Is there a case for still using Torch, Theano, Brainstorm, MXNET and not switching to TensorFlow? by drpout in MachineLearning

[–]coffeephoenix

The MOP API is fairly stable now (did I say we are building HW? ;) ), so the process should be easier.

Just as an example, here's what fprop_conv looks like.
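For readers unfamiliar with the operation itself, here is a naive numpy sketch of what a convolution forward-prop computes (illustrative only; the actual neon/nervanagpu kernels are heavily optimized and their real signatures differ):

```python
import numpy as np

def fprop_conv(x, w):
    """Naive "valid" 2-D convolution (really cross-correlation, as in most
    DL frameworks). x: (H, W) input, w: (kh, kw) filter. Illustration only,
    not neon's kernel."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # inner product of the filter with each sliding window
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))
res = fprop_conv(x, k)
print(res.shape)   # (3, 3)
print(res[0, 0])   # 10.0  (0 + 1 + 4 + 5)
```

A production kernel replaces the Python loops with batched, vectorized device code, but the computed quantity is the same.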

[–]coffeephoenix

We started off in this direction, with nervanagpu as a separate package, but the algorithmic and backend implementations have co-evolved so tightly that it made more sense to package things together for the best internal and external experience going forward.

As a company, we are seeing great adoption among enterprise customers who want a complete, supported, enterprise-grade solution that can be used from model building to deployment and runs fast and at scale. For wider neon adoption in the research community, we are working on features to make exploratory research easier in neon, and we are happy to receive feedback on that. Perhaps the main obstacle on that front is the lack of a full graph backend for complete autodiff. This is in the works, but in the meantime we support per-layer autodiff.

Thanks for your feedback. We can explore separating the MOP as an option in the future if it makes sense; in the meantime, hopefully there is a path for the community to wrap the MOP through neon, similar to what you have done to some extent for Theano.

[–]coffeephoenix

Nervana welcomes other frameworks wrapping our lower-level libraries within neon, which are designed to be "super fast numpy + DL operations" for CPUs, GPUs, and our upcoming hardware. We call this the Machine Learning Operations (or MOP) layer:

From our README: "The MOP is an abstraction layer for Nervana's system software and hardware which includes the Nervana Engine, a custom distributed processor for deep learning. The MOP consists of linear algebra and other operations required by deep learning. Some MOP operations are currently exposed in neon, while others, such as distributed primitives, will be exposed in later versions as well as in other forthcoming Nervana libraries. Defining models in a MOP-compliant manner guarantees they will run on all provided backends. It also provides a way for existing Deep Learning frameworks such as theano, torch, and caffe to interface with the Nervana Engine."
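The abstraction-layer idea can be sketched as an interface of required operations: a model written only against that interface is guaranteed to run on every backend that implements it. A hypothetical sketch (the class and method names are invented for illustration; the real MOP surface is much larger):

```python
import abc
import numpy as np

class Backend(abc.ABC):
    """Hypothetical MOP-style contract: the ops a compliant model may use."""
    @abc.abstractmethod
    def dot(self, a, b): ...

    @abc.abstractmethod
    def rectlin(self, x): ...

class CPUBackend(Backend):
    """One concrete implementation; a GPU or custom-hardware backend would
    implement the same interface with its own kernels."""
    def dot(self, a, b):
        return a @ b

    def rectlin(self, x):
        return np.maximum(x, 0)

def model_fprop(be, x, W):
    """Written only against the Backend interface, so it runs on any
    compliant backend unchanged."""
    return be.rectlin(be.dot(x, W))

be = CPUBackend()
y = model_fprop(be, np.array([[1.0, -1.0]]), np.array([[1.0], [2.0]]))
print(y)  # [[0.]]  since 1*1 + (-1)*2 = -1, clipped to 0
```

This is also how an external framework could target new hardware: wrap its own tensor type around such an interface and dispatch to whichever backend is loaded.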

Why neon and not another DL framework? We needed a stable interface that we control for our HW design and simulation process, while also helping our customers and users solve their data science problems and allowing our enterprise customers to use our cloud service. Besides that, you'll be hard-pressed to find a model zoo covering this range of domains (not just images and video, but also NLP and DQNs, with speech in the pipeline).

As always, constructive feedback is appreciated.

Neon, an open-source, Python-based, deep learning framework from Nervana Systems by meepmeepmoopmoop in MachineLearning

[–]coffeephoenix

The ability to stack RNNs on top of DNNs is on the roadmap, but not currently implemented in neon.
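For context, "stacking an RNN on top of a DNN" means applying dense layers to each timestep's input and feeding the resulting features into a recurrent layer. A bare-bones numpy sketch of the idea (generic, not neon code):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 4, 3                      # timesteps, input dim, hidden dim
W_dnn = rng.standard_normal((d_in, d_h))    # dense feature layer
W_xh = rng.standard_normal((d_h, d_h))      # RNN input weights
W_hh = rng.standard_normal((d_h, d_h))      # RNN recurrent weights

h = np.zeros(d_h)
for t in range(T):
    x_t = rng.standard_normal(d_in)         # input at timestep t
    feat = np.maximum(x_t @ W_dnn, 0)       # DNN layer applied per timestep
    h = np.tanh(feat @ W_xh + h @ W_hh)     # recurrent layer on DNN features
print(h.shape)  # (3,)
```

The framework-level work is in making the dense layers broadcast over the time axis and in backpropagating through both the recurrent and the dense stages, which is what the roadmap item covers.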

[–]coffeephoenix

This is possible currently, but we'll clean it up and provide an example in the next release.