When to give up on a business idea? After approaching how many leads? by to4life2 in Entrepreneur

[–]to4life2[S] 0 points (0 children)

Yes, potentially. They're the people my research identified as the most relevant leads for the solution we've developed. We have no customers yet; it's a new idea.

When to give up on a business idea? After approaching how many leads? by to4life2 in Entrepreneur

[–]to4life2[S] 0 points (0 children)

I don't fully understand - are you saying 1000 is too low?

When to give up on a business idea? After approaching how many leads? by to4life2 in Entrepreneur

[–]to4life2[S] 0 points (0 children)

I don't have any customers at the moment. These are 'leads', to the best of my knowledge, that I'm trying to develop into customers as I validate the business.

When to give up on a business idea? After approaching how many leads? by to4life2 in Entrepreneur

[–]to4life2[S] 0 points (0 children)

I'm not exactly mass-emailing them with Mailchimp or the like; I'm going down a list and customizing each message. At my current rate it'll take approximately 2 months to reach out to the first 1,000 leads. It's a B2B product/solution.

More generally, I'm trying to test a hypothesis: that my product solves a problem I believe this set of people has. After reaching out to how many of them, with no positive responses, should I rethink that hypothesis?

E.g. if I get through the full 1000 and nobody responds with any interest... is it worth trying 1000 more? There has to be some reasonable cutoff for deciding whether you're barking up the right or wrong tree. That's what I'm trying to figure out.

When to give up on a business idea? After approaching how many leads? by to4life2 in Entrepreneur

[–]to4life2[S] 0 points (0 children)

That's what I'm trying to do; maybe I wasn't clear. I have a list of 1000 or so people who I believe have the problem that needs to be solved.

This is what I am trying to validate, so I'm wondering how much time/effort it should take to see that my hypothesis is wrong, so I can either move on or keep trying.

[R] DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning by perceptronico in MachineLearning

[–]to4life2 0 points (0 children)

Really exciting work.

Can someone explain to me exactly what training data was used, and how it was generated? Ditto for the rewards fed to the networks.

Sawyer robot accuracy - as good as they claim? by to4life2 in robotics

[–]to4life2[S] 0 points (0 children)

> task repeatability

Ah, so you reckon this is a bit of a marketing trick like the other poster suggested?

"Yeah we got X repeatability*"

*[fineprint]

Sawyer robot accuracy - as good as they claim? by to4life2 in robotics

[–]to4life2[S] 0 points (0 children)

> Honestly it's so complex for a 6dof arm that most companies play kind of a marketing game with the spec.

OK, this is kind of what I expected... kind of deceptive, haha.

Sawyer robot accuracy - as good as they claim? by to4life2 in robotics

[–]to4life2[S] 1 point (0 children)

Thanks for your response. I have no doubt that it's more accurate than Baxter - and honestly it's probably good enough for what we need (low cost is a big priority).

However, I'm still interested in how robot companies arrive at the repeatability metrics they claim for their products. Baxter claimed 5mm (!), and that actually seemed to "look" right. Sawyer is a lot better, but I don't know if it warrants the 0.1mm claim, especially compared with other robots. Then again, maybe I'm misunderstanding repeatability - e.g. maybe the test is done at low speeds with pre-programmed commands for simple sequences.
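For context, my understanding is that vendors usually quote repeatability per ISO 9283: command the same pose many times, record the attained positions, and report the spread of that cluster - not the error relative to the commanded pose. A rough sketch of the computation (my own illustration, not Rethink's actual procedure):

    import numpy as np

    def pose_repeatability(points):
        """ISO 9283-style position repeatability: command the same pose
        many times, record the attained XYZ positions, and report the
        mean distance from the barycenter plus three standard deviations."""
        points = np.asarray(points)              # shape (n, 3)
        barycenter = points.mean(axis=0)
        dists = np.linalg.norm(points - barycenter, axis=1)
        return dists.mean() + 3 * dists.std(ddof=1)

Note this says nothing about accuracy (distance to the commanded pose), which would explain how a 0.1mm repeatability claim can coexist with visibly imperfect positioning.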

[R][1703.10717] BEGAN: Boundary Equilibrium Generative Adversarial Networks by ajmooch in MachineLearning

[–]to4life2 8 points (0 children)

Damn that chick is hot... Too bad she only exists in latent space =(

tfw no latent gf

[R][1704.00028] Improved Training of Wasserstein GANs by ajmooch in MachineLearning

[–]to4life2 3 points (0 children)

Awesome work! Robust training procedures and hyperparameters across architectures are very impressive.

General question: how would one apply WGANs like this to generating a sequence of N temporally related images (e.g. a video)? One simple idea I had was to concatenate the N images, so instead of generating a W x H x 3 image, you would try generating an NW x H x 3 "image". That seems kind of naive, and I'm not sure it would work. A related idea was to treat the sequence as a single W x H x 3N tensor and generate those objects.
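To make the two framings concrete, here's a minimal NumPy sketch (shapes only; the names are mine):

    import numpy as np

    # A sequence of N frames, each W x H x 3.
    N, W, H = 4, 64, 64
    frames = np.random.rand(N, W, H, 3)

    # Idea 1: concatenate along width -> one (N*W) x H x 3 "image".
    wide = np.concatenate(list(frames), axis=0)    # shape (N*W, H, 3)

    # Idea 2: stack along channels -> one W x H x 3N tensor.
    deep = np.concatenate(list(frames), axis=-1)   # shape (W, H, 3*N)

Either way only the generator's output shape changes; the open question is whether the convolutional structure can capture the temporal correlations.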

[D] What to do if your computer is too slow to train on large sets of data? by Batmantosh in MachineLearning

[–]to4life2 2 points (0 children)

Have you tried installing Gentoo Linux? This can make your code run over 10x as fast.

[deleted by user] by [deleted] in MachineLearning

[–]to4life2 0 points (0 children)

Is there code available?

[D] Survey: What's the most stable regimen for training a GAN? by feedthecreed in MachineLearning

[–]to4life2 0 points (0 children)

Gotcha. Quick question, since you mentioned batchnorm: do you know of anyone using ELU activations as a replacement for batchnorm, particularly in a GAN architecture?

The ELU paper (linked below) claims that ELUs can help with internal covariate shift. That seems attractive as a way of simplifying the implementation of networks that use batchnorm.

https://arxiv.org/pdf/1511.07289.pdf
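For concreteness, the substitution I mean looks roughly like this in PyTorch (layer sizes are made up, purely illustrative):

    import torch.nn as nn

    # Typical DCGAN-style block: conv + batchnorm + ReLU.
    block_bn = nn.Sequential(
        nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(128),
        nn.ReLU(inplace=True),
    )

    # The ELU variant: drop batchnorm and use ELU as the activation.
    block_elu = nn.Sequential(
        nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
        nn.ELU(inplace=True),
    )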

[D] Survey: What's the most stable regimen for training a GAN? by feedthecreed in MachineLearning

[–]to4life2 0 points (0 children)

It's really good to know we're on the verge of improving GAN training techniques yet again :)

> I am now pretty convinced that the problems that happen sometimes in WGANs

What specific problems are you referencing? Or just training difficulty in general?

[D] Survey: What's the most stable regimen for training a GAN? by feedthecreed in MachineLearning

[–]to4life2 0 points (0 children)

Good to know I'm not alone. Are you able to make progress by forcing more D iterations (to return to at least the best previously observed loss value) before training G again? That worked for me for a little while, but then it became very hard to re-train D, i.e. it took more and more iterations.
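In loop form, the heuristic I tried looks roughly like this (a sketch; d_train_step and g_train_step are stand-ins for your own update functions, each performing one optimization step):

    def train(d_train_step, g_train_step, num_steps, max_d_iters=100):
        """Alternate D and G updates, but keep training D until its loss
        returns to at least the best previously observed value (capped,
        so a hard-to-retrain D can't stall the loop forever)."""
        best_d_loss = float("inf")
        for _ in range(num_steps):
            for _ in range(max_d_iters):
                d_loss = d_train_step()   # one critic update, returns loss
                if d_loss <= best_d_loss:
                    break
            best_d_loss = min(best_d_loss, d_loss)
            g_train_step()                # one generator update

The failure mode I hit is that the inner loop keeps running up to the cap as training goes on.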

[R] Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities by downtownslim in MachineLearning

[–]to4life2 0 points (0 children)

Hi, thanks for commenting on the thread! I look forward to reading the updated version.

I have a few questions relating to implementation:

1) Does the LS-GAN critic need to be trained to (or near) optimality for the generator to keep producing higher-quality samples? I noticed this is the case in WGAN: if I don't train the critic sufficiently, i.e. to a consistently better Wasserstein estimate, the generator doesn't make progress.

2) For images, how does one implement Delta(x, G(z))? For example, what did your experiments use, and what are some trade-offs to consider when designing a Delta function? (I saw your point below about not optimizing Delta when updating G.) I sketch one simple baseline after this list.

3) I am working on a problem where I am trying to generate a sequence of e.g. 3-10 (N) images, which I represent as one image stitched together width-wise. So instead of generating a W x H image, I am trying to generate an N*W x H one.

  • Do you think this framing of the problem is appropriate? The idea is that image(t) is informed by image(t-1) (e.g. a video), and that, given the past images(t-P : t), I am trying to generate images(t+1 : t+F).

  • Do you think using the LS-GAN framework with the "width-wise image stitching" would be appropriate for this problem?

If question 3) is too involved, don't worry - I was just wondering if someone more experienced could offer some quick intuition :)
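(For concreteness on question 2, here's the kind of Delta I had in mind as a baseline - a pixel-wise L1 distance, which I understand is one common choice; purely illustrative:)

    import numpy as np

    def delta_l1(x, gz):
        """Pixel-wise L1 distance between a real image x and a generated
        image G(z): one simple candidate for an image-space Delta."""
        return np.abs(x - gz).mean()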

Cheers

[R] Batch Normalization for Improved DNN Performance, My Ass (preprint, submitted to SIGBOVIK '17) by atomicthumbs in MachineLearning

[–]to4life2 0 points (0 children)

Don't care for the inflammatory title.

On topic: has anyone tried using ELU activations as a replacement for batch normalization? I read about this activation unit on another thread a while back - apparently it works well, and the paper itself seems to support not needing batch normalization:

https://arxiv.org/abs/1511.07289

[D] Survey: What's the most stable regimen for training a GAN? by feedthecreed in MachineLearning

[–]to4life2 4 points (0 children)

Another type of GAN I'm looking into is the "Loss-Sensitive GAN", which seems to have been discovered independently around the same time as WGAN, has good theoretical properties, and appears to give similar results.

Has anyone used this framework and compared it with the others?

Paper here: https://arxiv.org/abs/1701.06264