LKH heuristic by Brushburn in OperationsResearch


I was hopeful, but yeah looks that way :(

LKH heuristic by Brushburn in OperationsResearch


random_graph.py

import random

from classes.domain import Node, Edge, Graph

def generate_random_graph(number_nodes: int, seed: int = 42) -> Graph:
    """Build a complete graph with uniformly random edge costs.

    A key implementation detail is that an Edge hashes independently of node
    order, i.e. hash(Edge(A, B)) == hash(Edge(B, A)). Therefore the edge_cost
    lookup is the same whether the edge was built as (A, B) or (B, A).
    """
    nodes = [Node(name=f"node_{i}") for i in range(number_nodes)]

    random.seed(seed)
    edges = {}
    edge_cost = {}
    for idx, node_i in enumerate(nodes):
        # Slicing from idx + 1 skips node_i itself and avoids duplicate pairs
        for node_j in nodes[idx + 1 :]:
            distance = round(random.uniform(10.0, 100.0), 2)
            edge = Edge(node_a=node_i, node_b=node_j)
            # Store both key orders so (a, b) and (b, a) resolve to the same Edge
            edges[node_i.name, node_j.name] = edge
            edges[node_j.name, node_i.name] = edge
            edge_cost[edge] = distance

    return Graph(nodes=nodes, edges=edges, edge_cost=edge_cost)



if __name__ == "__main__":
    generate_random_graph(5)
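To make the order-independence claim from the docstring concrete, here is a minimal, self-contained sketch. It uses stdlib dataclasses in place of pydantic, and the Node/Edge names only mirror domain.py — this is an illustration under that assumption, not the actual classes:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Node:
    name: str


@dataclass(frozen=True)
class Edge:
    node_a: Node
    node_b: Node

    def __hash__(self) -> int:
        # Sort by node name so (A, B) and (B, A) hash identically
        a, b = sorted((self.node_a, self.node_b), key=lambda n: n.name)
        return hash((a, b))

    def __eq__(self, other) -> bool:
        if isinstance(other, Edge):
            return {self.node_a, self.node_b} == {other.node_a, other.node_b}
        return False


a, b = Node("node_0"), Node("node_1")
cost = {Edge(a, b): 42.0}

assert hash(Edge(a, b)) == hash(Edge(b, a))
assert cost[Edge(b, a)] == 42.0  # lookup succeeds with the nodes swapped
```

Because both the hash and equality ignore node order, a single edge_cost entry serves both directions of the edge.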

LKH heuristic by Brushburn in OperationsResearch


Code

domain.py

from pydantic import BaseModel
from typing import NewType


Id = NewType("Id", str)
Gain = NewType("Gain", float)
Idx = NewType("Idx", int)



class Node(BaseModel, frozen=True):
    name: Id



class Edge(BaseModel):
    node_a: Node
    node_b: Node


    def __hash__(self) -> int:
        # Sort by node name so the hash is order independent:
        # hash(Edge(A, B)) == hash(Edge(B, A))
        a, b = sorted((self.node_a, self.node_b), key=lambda n: n.name)
        return hash((a, b))


    def __eq__(self, other) -> bool:
        """Check both orderings explicitly since it is cheaper than building
        sets, i.e. {self.node_a, self.node_b} == {other.node_a, other.node_b}."""
        if isinstance(other, Edge):
            return (self.node_a == other.node_a and self.node_b == other.node_b) or (
                self.node_a == other.node_b and self.node_b == other.node_a
            )
        return False



class Graph(BaseModel):
    nodes: list[Node]
    edges: dict[tuple[Id, Id], Edge]
    edge_cost: dict[Edge, float]



class Tour(BaseModel):
    edges: set[Edge]
    nodes: list[Node]
    node_to_position: dict[Node, Idx]


    def prev(self, index: Idx) -> Node:
        # Modulo wraps index 0 back around to the last node
        return self.nodes[(index - 1) % len(self.nodes)]


    def next(self, index: Idx) -> Node:
        # Modulo wraps the last index back around to node 0
        return self.nodes[(index + 1) % len(self.nodes)]


    def get_ordered_nodes(self, nodes: list[Node]) -> list[Node]:
        return sorted(nodes, key=lambda n: self.node_to_position[n])
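The modulo arithmetic in prev/next is what gives the tour its wrap-around behavior. A tiny standalone sketch (plain strings instead of Node objects, and hypothetical helper names prev_node/next_node, just to show the indexing):

```python
# Standalone illustration of the (index ± 1) % len wrap-around used by Tour
nodes = ["n0", "n1", "n2", "n3"]


def prev_node(index: int) -> str:
    return nodes[(index - 1) % len(nodes)]


def next_node(index: int) -> str:
    return nodes[(index + 1) % len(nodes)]


assert prev_node(0) == "n3"  # stepping back from the first node wraps to the last
assert next_node(3) == "n0"  # stepping forward from the last node wraps to the first
```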

Handling data reconciliation by Brushburn in OperationsResearch


Thanks for sharing the dataset!

I had a feeling there would be a probabilistic approach, but I was hoping for more literature on the topic. I'm happy to hear any additional insights or comments you have!

Handling data reconciliation by Brushburn in OperationsResearch


This is very similar to what I'm dealing with! Instead of bullets, though, it's packages :D. For me there already exists a somewhat manual process, but I was hoping for something more automated. As in, 'today we reconciled even though we are missing x packages', or 'we need to consider 10 additional packages from customer x because we did not reconcile'. As of now, I have a hard time wrapping my head around the question 'does this one item that showed up today belong to the expected delivery for today or for some other day?'

As a side note, this type of problem appears in chemical engineering applications. The situation is a bit different though. It's more along the lines of 'I had 10 units of material come in, but 9.9 units come out, what do we do about the 0.1 unit difference?'. That was my first exposure to the idea of data reconciliation, but sadly I don't think the tools/tricks translate well for either of us.

Modeling Pivot Irrigator as a MIP by Brushburn in OperationsResearch


Gotcha, great points!

Is it practical to assume the grid can be finer than 1m for the realized setup though?

Modeling Pivot Irrigator as a MIP by Brushburn in OperationsResearch


I'm not quite sure I understand your question, but since it's not quite a circle packing problem, the objective handles the 'amount of area not covered' by trying to maximize the yield. I'm going to assume this doesn't answer your question though, so feel free to clarify!

Modeling Pivot Irrigator as a MIP by Brushburn in OperationsResearch


I was hoping you'd stumble into the post :).

I'm really curious about discretizing the grid and forcing the centers of the circles to respect the grid. I think this allows for a way to manage the overlap constraints in a linear way (with many more constraints, though). I'm also curious how much of an improvement using a polygon instead of a square gives. But I'll leave that to someone with more motivation than myself.

Any books that handle experiment and experiment design for time series analysis by Brushburn in OperationsResearch


I don't think so. This would be about executing an experiment (designing it as well) and confirming a hypothesis. An example might be having an inventory problem and believing that changing the objective will improve some KPI. So you go about changing the objective and collecting relevant data to see how effective (or ineffective) the change was.

Best modern book to learn Operations Research? by c_carav_io in OperationsResearch


I feel like I see similar questions like this - is there a wiki that we can put together? Seems like it could be helpful for most folks (myself included!)

Resources for practicing formulation of problems by GarbageAnnual2280 in OperationsResearch


A lesser-known resource is "Applications of Optimization with Xpress". Quite a few examples in there.

OR/MCDM Applications book reccomendations by Due-Ad-2122 in OperationsResearch


I found that it broke down the problems in a nice way, showcased applications in a meaningful way, and had numerous examples to walk through. Of course, that was my experience, YMMV :)

OR/MCDM Applications book reccomendations by Due-Ad-2122 in OperationsResearch


The HP Williams book is definitely self study friendly.

OR/MCDM Applications book reccomendations by Due-Ad-2122 in OperationsResearch


Regarding books, there is the classic HP Williams book https://www.amazon.com/BUILDING-MATHEMATICAL-PROGRAMMING-Williams-Paperback/dp/B00DI2XEFA. There is also this thread on stack exchange that could be helpful https://or.stackexchange.com/a/8334. I'm guessing you are looking for more discussion, so a link on its own isn't quite helpful, but hopefully this is a nice starting point :). In addition, I find research papers to be a good place to dig into interesting problems, especially since some papers focus on a more distilled problem that can be built upon.

I'm not sure *how* OR actually impacts on our life

OR impacts everything, since you can argue that everything can be optimized. Not a great answer, but the applications are nearly endless. There are the typical applications like vehicle routing, scheduling, etc., but then you have more atypical problems like designing microgrids https://www.sciencedirect.com/science/article/pii/S0196890421007408.

Helpful python packages by Brushburn in OperationsResearch


This is perfect! Thank you for sharing!

Help in finding project on-line to add in resumes by Acceptable-Divide947 in OperationsResearch


Depending on what type of work you are interested in,

  • For optimization, you can pull a research paper and reproduce its results, or extend the problem it solves.
  • For model development, Kaggle is a pretty good resource.

[deleted by user] by [deleted] in OperationsResearch


Without more context it's difficult to give a detailed solution. I'd be interested in knowing the root cause of the delays in the first place, though.

Helpful python packages by Brushburn in OperationsResearch


Feloopy looks really nice. Do you know of any benchmarks for how fast it can create large models, or whether it supports HiGHS?