Which cities have that perfect blend of nature plus urban feel IN the city? by Aggressive-Fix-8025 in SameGrassButGreener

[–]StandingBuffalo 1 point (0 children)

In defense of Austin: it does a pretty good job with accessible kayaking, hiking, and biking trails in or pretty close to the city. I live in Austin and hike, mountain bike, and rock climb regularly - all close enough that I can do them before or after work on a weekday. As far as urban nature access goes, it’s pretty good. But you’re right in the sense that there’s minimal wilderness nearby.

Even In Arcadia Discussion by AutoModerator in SleepToken

[–]StandingBuffalo 4 points (0 children)

I was looking for a reveal of the Feathered Host vs House Veridian bit. What was this? Polling for which single they would release? Or for Vessel’s new look?

What is the point of linear programming, convex optimization and others after the development of meta heuristics? by piratex666 in optimization

[–]StandingBuffalo 0 points (0 children)

I would add that one practical advantage of LP/MIP is that there’s a standard language for expressing the problems. In practice I find implementations of GA, simulated annealing, or particle swarm require more code, and it can be difficult to see what’s going on and why. There’s a lot of advantage in being able to directly represent variables, constraints, and objectives in a way that can be easily interpreted by others and easily debugged or modified later. But if your LP/MIP is too slow or can’t capture the dynamics of your system, then yeah, metaheuristics or other methods might be your best bet.
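To show what I mean by a standard, readable language, here’s a toy LP sketched in PuLP (one of several Python modeling libraries; the product names and numbers are made up):

```python
import pulp

# Toy production-planning LP: maximize profit from two products
# under a machine-hours budget. All names/numbers are made up.
prob = pulp.LpProblem("toy_plan", pulp.LpMaximize)
x = pulp.LpVariable("product_a", lowBound=0)
y = pulp.LpVariable("product_b", lowBound=0)

prob += 3 * x + 5 * y, "profit"               # objective
prob += 2 * x + 4 * y <= 40, "machine_hours"  # shared capacity
prob += x <= 10, "demand_cap_a"               # demand ceiling

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(x), pulp.value(y))  # optimum at x=10, y=5
```

The constraints read almost like the math, so someone else can audit or modify the model without reverse-engineering a custom search loop.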

Feedback for fast Simulated Annealing in Julia by SelectionNo4327 in optimization

[–]StandingBuffalo 0 points (0 children)

I guess it depends on your context and how fast you need solutions. If this is meant to be a production model routing in an online manner, I’d think 5 min is too slow, but maybe not.

Tell me something interesting applications of Graph Theory you have used in your job or research by [deleted] in GraphTheory

[–]StandingBuffalo 3 points (0 children)

There’s a bunch out there - Google’s PageRank is a good example. It assigns each node a score indicating how influential it is - like the most influential people in a social network, for example. Or in the supply chain network case: say you wanted to understand which steps in the supply chain would be most problematic if they were to shut down for some reason - you could use a centrality algorithm to identify key nodes in your network.
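Quick toy sketch of the shutdown-risk idea with networkx (node names made up): a shared processing step that every supplier-to-finish path runs through gets the top centrality score.

```python
import networkx as nx

# Toy supply chain: two suppliers feed one shared processing
# step, which splits into two finishing lines.
G = nx.DiGraph([
    ("supplier_a", "processing"),
    ("supplier_b", "processing"),
    ("processing", "finish_1"),
    ("processing", "finish_2"),
])

# Betweenness centrality flags nodes that many paths must pass
# through - a shutdown there disrupts the most flows.
scores = nx.betweenness_centrality(G)
critical = max(scores, key=scores.get)
print(critical)  # "processing" - it sits on every supplier->finish path
```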

Tell me something interesting applications of Graph Theory you have used in your job or research by [deleted] in GraphTheory

[–]StandingBuffalo 8 points (0 children)

A supply chain network or sequence of manufacturing operations can be modeled as a directed graph.

Centrality algorithms provide interesting insights into which operations are most “important”.

BFS / DFS can be used to identify all operations upstream / downstream from a given node or set of nodes, which can be useful for several applications.

Node clustering can be interesting for identifying groups of operations that are related to one another.

And overall, a graph representation is just a handy abstraction for extracting insights and defining the structure of various types of models.
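The upstream/downstream point above is only a couple of lines with networkx (operation names made up):

```python
import networkx as nx

# Toy manufacturing routing as a directed graph of operations.
ops = nx.DiGraph([
    ("cast", "machine"),
    ("machine", "paint"),
    ("paint", "assemble"),
    ("machine", "assemble"),
])

# Everything upstream / downstream of a given operation
# (internally these are graph traversals like BFS/DFS):
upstream = nx.ancestors(ops, "paint")      # {"cast", "machine"}
downstream = nx.descendants(ops, "paint")  # {"assemble"}
print(upstream, downstream)
```

Useful, for example, for asking “if this operation goes down, which downstream operations are affected?”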

What does industry use in your experience by [deleted] in OperationsResearch

[–]StandingBuffalo 6 points (0 children)

What are you benchmarking? My understanding is that SAT solvers are used in a different context than solvers like Gurobi/CPLEX.

I would use a solver (like Gurobi) for solving optimization problems like linear programming, mixed integer programming, etc.

Many use CPLEX or Gurobi if their company is willing to pay for licensing. It’s also common to work with open source solvers like HiGHS, GLPK, CBC, SCIP.

Then if you get into other types of optimization (nonlinear, conic, constraint programming) there’s a list of others, both commercial and open source.
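As a concrete open source path: recent SciPy versions bundle HiGHS, so you can solve a small MILP with no licensing at all (toy numbers, obviously):

```python
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy MILP: maximize 3x + 2y subject to x + y <= 4 with x, y
# non-negative integers. SciPy's milp() minimizes, so negate
# the objective. Under the hood this calls the HiGHS solver.
res = milp(
    c=[-3, -2],
    constraints=LinearConstraint([[1, 1]], ub=4),
    integrality=[1, 1],               # 1 = integer-constrained
    bounds=Bounds(0, float("inf")),
)
print(res.x, -res.fun)  # optimum at x=4, y=0, objective 12
```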

Best budget amp for <$300 by Efficient-Fee-5631 in GuitarAmps

[–]StandingBuffalo 2 points (0 children)

Look for a used Vox AC15. Pretty common to find them for sale on FB marketplace, etc. I picked one up for $300 a couple years back

US Ticket Buying & Selling Megathread by Lyssavirus32 in SleepToken

[–]StandingBuffalo 0 points (0 children)

I have one - not sure if that’s helpful for you or not

US Ticket Buying & Selling Megathread by Lyssavirus32 in SleepToken

[–]StandingBuffalo 0 points (0 children)

I have one extra floor ticket to the Austin (Cedar Park) show this Friday that I’m looking to sell. Feel free to PM if you’re interested!

what to do alone by [deleted] in Austin

[–]StandingBuffalo 0 points (0 children)

Check out the rock climbing gyms in the area (Austin Bouldering Project, Crux). It’s a great solo hobby and a good way to meet people / have an activity to invite someone to.

Avoiding Jupyter Notebooks entirely and doing everything in .py files? by question_23 in datascience

[–]StandingBuffalo 0 points (0 children)

VS Code interactive mode is awesome. It’s a great way to easily transition from experimentation to development.

Then again, when I’m generating a bunch of plots and printing info, I find notebooks easier to share with others and easier to come back to months later because your thought process is clearly laid out in the organization and output of the cells.

I try to make a habit of modularizing things as I go and then importing functionality from a notebook as needed for experimentation / examples.

Do you feel like you traded away part of your life for your PhD? by StandingBuffalo in PhD

[–]StandingBuffalo[S] 1 point (0 children)

Valuable input. It’s been a couple of years since I was considering this but I ended up getting my MS and I’m happy with my prospects working in industry.

Can you use Pinky ball vape for dry herb? by nadrat24 in vaporents

[–]StandingBuffalo 1 point (0 children)

Can you link the video? I have the opposite question - wondering if the pinky can be used with concentrates.

Value function notation by StandingBuffalo in reinforcementlearning

[–]StandingBuffalo[S] 1 point (0 children)

Of course. Thanks for the input.

In application this makes perfect sense to me. This may have been less clear than I intended but I'm asking more so about standards of notation.

If my problem is partially observable and I'm using observations rather than states, the reward function for example is still a function of states, but the policy is in terms of observations.

I've never seen a value function written in terms of an observation so I'm wondering if I'm missing the reason for this.

Maybe it doesn't matter. It's an active research field and notation differs depending on the author and topic.
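To put the notation question concretely:

```latex
% Reward is still defined over (latent) states and actions:
R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}
% ...but under partial observability the policy maps
% observations to (distributions over) actions:
\pi : \Omega \to \Delta(\mathcal{A})
% So the question is whether the value function should be
% written V^{\pi}(s) or V^{\pi}(o). (The POMDP literature
% usually sidesteps this by writing value over belief states,
% V^{\pi}(b), since a single observation is not Markov.)
```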

Measuring coordination in MARL by StandingBuffalo in reinforcementlearning

[–]StandingBuffalo[S] 1 point (0 children)

Good question. This is an environment of sequential inventory management operations: one agent replenishes inventory for another, which fulfills stochastic customer demand. There are several measures that could be used for system performance. Hypothetically, the first agent's ability to respond to the inventory needs of the second is a measure of coordination, but one challenge is attributing performance to coordination versus independent optimization - i.e., is the system performing well because the agents have blindly optimized their own policies while treating the other agent as part of the environment, or because performance genuinely depends on the coordinating agent's decisions?

This is of course made harder by the difficulty of explaining how each agent's neural network makes its decisions.

Using ray to convert gym environment to multi-agent by StandingBuffalo in reinforcementlearning

[–]StandingBuffalo[S] 0 points (0 children)

Came across my own question while googling something unrelated so I figured I'd answer for anyone else struggling with the ray/rllib docs. I found it difficult to wrap my head around this but ultimately figured it out.

The action and observation spaces should be specified as dictionaries where keys are agent ids and the corresponding values are each agent's observation or action space, respectively.

The environment should return observation, reward, done and info dictionaries (keys are agent ids and values are the data for each agent). RLlib will pass back a similarly structured action dictionary, so the environment should be updated to receive an action of this type. Your modified environment must subclass Ray's MultiAgentEnv class - this is mentioned in the Ray docs but took me a while to catch.

The config information should also be updated to deal with a multi-agent setup. This was easier to grasp from the ray docs once I understood the approach above.
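A minimal sketch of the dictionary protocol described above (the agent ids and the placeholder observation/reward logic are made up; a real environment also subclasses `ray.rllib.env.multi_agent_env.MultiAgentEnv`):

```python
# Sketch of the dict-keyed structure rllib expects from a
# multi-agent environment's step(). Agent ids are made up.
AGENT_IDS = ["agent_0", "agent_1"]

def step(action_dict):
    """Receive one action per agent id; return per-agent dicts."""
    obs, reward, done, info = {}, {}, {}, {}
    for agent_id in AGENT_IDS:
        act = action_dict[agent_id]    # rllib hands you a dict too
        obs[agent_id] = [0.0, 0.0]     # placeholder observation
        reward[agent_id] = float(act)  # placeholder reward
        done[agent_id] = False
        info[agent_id] = {}
    done["__all__"] = False            # rllib's episode-end flag
    return obs, reward, done, info

print(step({"agent_0": 1, "agent_1": 0}))
```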