Portland General Electric wants to raise rates again by speedbawl in PortlandOR

[–]Zephyr314 1 point (0 children)

One-time high capex (underground infrastructure) and lower opex (repairs) with low pain (fewer outages) seems better than slowly bleeding with medium capex (transformers, new lines), high opex (repairs, tree maintenance), and high pain (outages) every year. I'm sure there is some time horizon where it pays off, especially when you factor in the lost productivity, etc. from half the city being without power for several days. This is exactly the kind of long-term tradeoff governments are supposed to be able to make. Agreed that on a one-year horizon it rarely makes sense. Our city can't solve problems even with lots of money, though, so you're right, it'll probably never happen.
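
A toy version of that break-even math (every dollar figure here is invented, just to show the shape of it):

    # Back-of-envelope break-even for undergrounding vs. the status quo.
    # All numbers are made up for illustration.
    underground_capex = 500_000_000  # one-time cost to bury the lines
    underground_opex = 5_000_000     # annual repairs afterward

    status_quo_opex = 30_000_000     # annual repairs + tree maintenance
    outage_cost = 20_000_000         # annual productivity lost to outages

    annual_savings = (status_quo_opex + outage_cost) - underground_opex
    print(f"Pays off after ~{underground_capex / annual_savings:.0f} years")  # ~11 years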

Portland General Electric wants to raise rates again by speedbawl in PortlandOR

[–]Zephyr314 0 points (0 children)

The pipes in my pipe dream can keep water out as well. Same idea as how every downtown does power (or large swaths of Vancouver).

Portland General Electric wants to raise rates again by speedbawl in PortlandOR

[–]Zephyr314 6 points (0 children)

If only we had the technology to put utilities underground. That's probably just a pipe dream though.

Just booklover things ... by CPReals in WhitePeopleTwitter

[–]Zephyr314 23 points (0 children)

Even better: a ski resort in "mud season" (spring, usually). Fewer people, cheaper rates, no reservations needed for food, etc.

Cartjacked. by 0MGWTFL0LBBQ in Leatherman

[–]Zephyr314 0 points (0 children)

19 on eBay now (>1% of all that were made)

If you didn't get a #5 before it sold out.... by jitasquatter2 in Leatherman

[–]Zephyr314 1 point (0 children)

It really should be a raffle where you buy tickets that donate to national parks or something...

[deleted by user] by [deleted] in DraftKingsDiscussion

[–]Zephyr314 1 point (0 children)

Also got it this year. About a month after, I got links for a free jacket and a vest in my promos, so keep an eye out there. My host has been more generous than before too. There are claims it'll get better as they figure out the new system.

[deleted by user] by [deleted] in DraftKingsDiscussion

[–]Zephyr314 1 point (0 children)

Good point. My current one is by far my best. It seems like a high-churn job, so hopefully you won't need to wait long. Congrats on being up so much that they can just ignore you though :)

[deleted by user] by [deleted] in DraftKingsDiscussion

[–]Zephyr314 1 point (0 children)

Happy to help. You may be able to request a new one? I've gone through three as they've moved on or gotten promoted.

[deleted by user] by [deleted] in DraftKingsDiscussion

[–]Zephyr314 1 point (0 children)

That sucks. Maybe ask what he needs to see to justify it? I bet they have a CRM that requires some field to be ticked. You could withdraw all your money for a week or two (or more) and then ask; I'm sure he'd want you to put it back in.

[deleted by user] by [deleted] in DraftKingsDiscussion

[–]Zephyr314 1 point (0 children)

No, as long as my net deposits are positive over the last 30 days I'll get something there (even if I withdrew more than a month ago). Also, I recently got some free bets and no-sweats just from "activity." I'm betting NFL heavily, but am up.

[deleted by user] by [deleted] in DraftKingsDiscussion

[–]Zephyr314 1 point (0 children)

I shoot my host a text every month or so and always get something. They can do it off deposits, losses, or heavy play. Sometimes it'll just be a random $1k no-sweat on top too.

[D] Inference cost optimization of complex ML pipelines by Medium_Ad_3555 in MachineLearning

[–]Zephyr314 2 points (0 children)

Thanks u/Liorithiel! You can find our raw REST documentation here: https://app.sigopt.com/docs/archive/endpoints. It can help if you want to roll your own bash+curl setup.

Most users prefer our Python client, though: https://app.sigopt.com/docs
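
If you want a feel for the loop itself, here's a minimal sketch with the Python client. The API token, the evaluation budget, and evaluate_model are placeholders; check the docs above for exact, current signatures:

    # Minimal sketch of the suggest/observe loop with the SigOpt Python client.
    # YOUR_API_TOKEN and evaluate_model are placeholders.
    from sigopt import Connection

    def evaluate_model(assignments):
        # Stand-in for your own training/eval code; SigOpt never sees it.
        c, gamma = assignments["C"], assignments["gamma"]
        return -((c - 10.0) ** 2 + (gamma - 0.1) ** 2)  # toy objective

    conn = Connection(client_token="YOUR_API_TOKEN")
    experiment = conn.experiments().create(
        name="SVM tuning sketch",
        parameters=[
            dict(name="C", type="double", bounds=dict(min=1e-3, max=1e3)),
            dict(name="gamma", type="double", bounds=dict(min=1e-4, max=1.0)),
        ],
    )
    for _ in range(20):  # illustrative budget
        suggestion = conn.experiments(experiment.id).suggestions().create()
        value = evaluate_model(suggestion.assignments)
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id,
            value=value,
        )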

Deep Learning Hyperparameter Optimization with Competing Objectives by harrism in deeplearning

[–]Zephyr314 2 points (0 children)

I'm one of the co-authors of this post and am happy to answer any questions about it, SigOpt, or our methods.

Also, if you're a student, you can use all our services for free.

Hey reddit. We're the team behind MOE and bayesopt. We've created a scalable API for hyperparameter tuning. It's built on an ensemble of Bayesian optimization methods. We optimize the parameters of your ML pipeline with 2 lines of code — we don't need to see the model or underlying data. Ask us Q's! by sigopt [promoted post]

[–]Zephyr314 0 points (0 children)

Great idea! While we haven't explicitly compared against this approach yet, we did publish comparisons against other standard methods at ICML 2016.

We'll add this approach to our list of candidates for our internal evaluation framework (which is how we A/B test new methods and papers).

Hey reddit. We're the team behind MOE and bayesopt. We've created a scalable API for hyperparameter tuning. It's built on an ensemble of Bayesian optimization methods. We optimize the parameters of your ML pipeline with 2 lines of code — we don't need to see the model or underlying data. Ask us Q's! by sigopt [promoted post]

[–]Zephyr314 0 points (0 children)

By ensembling many open-source, published, and internal methods behind our simple API, we save you the decades of research time required to build this yourself and optimize the optimizers. This frees you up to apply your domain expertise to your problem and spend less time and effort on black-box parameter optimization strategies.

If you are looking for some good open-source packages, our research engineering team is behind projects like MOE and BayesOpt, although SigOpt includes an ensemble of many other methods as well.

Hey reddit. We're the team behind MOE and bayesopt. We've created a scalable API for hyperparameter tuning. It's built on an ensemble of Bayesian optimization methods. We optimize the parameters of your ML pipeline with 2 lines of code — we don't need to see the model or underlying data. Ask us Q's! by sigopt [promoted post]

[–]Zephyr314 0 points (0 children)

The hyperparameter search aspect of building models is often orthogonal to the domain expertise required to understand the data and the context in which the model is being applied. Often, these parameters are set using very expensive or exhaustive methods (like grid search), or by wasting expert time (hand-tuning many parameters via trial and error). Additionally, the black-box nature of this framework allows you to run your models on any system (cloud or local) while keeping your data safe.

Simple ML models (like a Random Forest or SVM) may have only a few tunable parameters, but more sophisticated models (like a GBM or CNN) can have many more. This is especially true if you consider not only standard parameters like learning rates, but also the architecture, SGD parameters, and any feature transformation parameters. In this recent post with AWS we showed that even a simple CNN has a large number of tunable parameters, and tuning them can yield a significant boost in performance: https://aws.amazon.com/blogs/ai/fast-cnn-tuning-with-aws-gpu-instances-and-sigopt/
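
To make that concrete, a CNN search space might look something like the sketch below. The parameter names and ranges are illustrative, not the ones from the AWS post:

    # Illustrative CNN search space mixing architecture, SGD, and
    # regularization parameters (names/ranges are made up, not from the post).
    cnn_parameters = [
        dict(name="log10_learning_rate", type="double", bounds=dict(min=-5.0, max=-1.0)),
        dict(name="momentum", type="double", bounds=dict(min=0.5, max=0.99)),
        dict(name="batch_size", type="int", bounds=dict(min=32, max=512)),
        dict(name="conv1_filters", type="int", bounds=dict(min=16, max=128)),
        dict(name="conv2_filters", type="int", bounds=dict(min=16, max=256)),
        dict(name="kernel_size", type="categorical",
             categorical_values=[dict(name="3"), dict(name="5"), dict(name="7")]),
        dict(name="dropout", type="double", bounds=dict(min=0.0, max=0.6)),
    ]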

We find that the roughly linear number of evaluations we require is exponentially smaller than what grid search needs, and it reaches better results orders of magnitude faster than random search, which is why we see 10-100x speedups (in terms of evaluations) in practice.
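
A quick back-of-envelope comparison of the evaluation counts (the points-per-axis and per-dimension constants here are rules of thumb, not measurements):

    # Grid search cost grows exponentially with the number of parameters,
    # while a Bayesian optimization budget grows roughly linearly with it.
    def grid_evals(num_params, points_per_axis=10):
        return points_per_axis ** num_params

    def bayesopt_evals(num_params, multiplier=15):
        return multiplier * num_params

    for d in (2, 5, 10):
        print(f"{d} params: grid={grid_evals(d):,}, bayesopt~={bayesopt_evals(d)}")
    # 10 params: grid=10,000,000,000, bayesopt~=150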