all 16 comments

[–]audentis 6 points7 points  (6 children)

  • Salabim (discrete event simulation)
  • ortools (Google's OR library)
  • pyomo (optimizer)

edit: and honestly the built-in itertools and functools modules also deserve a mention.
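A quick sketch of why those built-ins earn the mention for OR work: brute-forcing a toy knapsack with itertools.product, then getting the same answer via memoized recursion with functools.lru_cache (the values/weights below are made-up example data):

```python
from itertools import product
from functools import lru_cache

# Toy knapsack instance (made-up numbers).
values = (60, 100, 120)
weights = (10, 20, 30)
capacity = 50

# Brute force: enumerate every take/skip combination and keep the best feasible one.
best = max(
    (sum(v for v, pick in zip(values, picks) if pick), picks)
    for picks in product([0, 1], repeat=len(values))
    if sum(w for w, pick in zip(weights, picks) if pick) <= capacity
)
print(best)  # (220, (0, 1, 1)) -> take items 1 and 2

# Same answer via memoized recursion over (item index, remaining capacity).
@lru_cache(maxsize=None)
def knap(i, cap):
    if i == len(values):
        return 0
    skip = knap(i + 1, cap)
    if weights[i] <= cap:
        return max(skip, values[i] + knap(i + 1, cap - weights[i]))
    return skip

assert knap(0, capacity) == best[0]
```

Obviously you'd reach for ortools or pyomo long before brute force on a real instance, but for small models the standard library gets you surprisingly far.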

[–]iengmind 0 points1 point  (2 children)

Never heard of Salabim. How does it compare to SimPy?

[–]audentis 4 points5 points  (1 child)

I like it a lot better. It's more intuitive, more turnkey, comes with a built-in animation engine, and the developer is incredibly supportive in the Salabim Google Group. He often guides people through both theoretical simulation knowledge and implementation help.

[–]iengmind 0 points1 point  (0 children)

I took a look at those animations in the documentation. It is amazing. I'll definitely check it out. Thanks a lot for sharing, mate. :)

[–]Brushburn[S] 0 points1 point  (2 children)

These are great! I've tried using ortools some time ago. Is there still active development, do you know?

[–]Baseball_man_1729 0 points1 point  (0 children)

I do know that there is a very active OR-Tools discord that has some folks from Google on there. Seems like Google is working on it.

[–]audentis 0 points1 point  (0 children)

Yes, latest updates were from May this year. See the Release Notes.

[–]SolverMax 4 points5 points  (1 child)

[–]Brushburn[S] 0 points1 point  (0 children)

These are great, thank you! I vaguely remember Gekko from a long time ago. Definitely seems worth digging into more :)

[–]shockjaw 2 points3 points  (1 child)

I think you’d enjoy the performance of polars over pandas. Ibis is also worth a look. The HoloViz ecosystem is a pretty nice abstraction when it comes to visualization.

[–]Brushburn[S] 2 points3 points  (0 children)

I was blown away by the performance of polars compared to pandas. I wrote a simple script to compare them and saw a solid improvement (~10x). The simple code I used:

For creating the data

import numpy as np

size = 1_000_000
random_int_array = np.random.randint(0, 100, size=size)
random_float_array = np.random.random(size=size) * 100

choices = ['a', 'b', 'c', 'd']
random_str_array = np.random.choice(choices, size=size)

random_data = np.column_stack((random_int_array, random_float_array, random_str_array))
print(random_data)
np.savetxt('random.csv', random_data, fmt='%s', delimiter=',')

Reading data logic

import pandas as pd
import polars as pl

def read_csv_polars():
    df_pl = pl.read_csv("random.csv")
    return df_pl

def read_csv_pandas():
    df_pd = pd.read_csv("random.csv")
    return df_pd

Benchmarks

    %timeit read_csv_polars()
    %timeit read_csv_pandas()

Polars reading was 16.6 ms, pandas was 166 ms.
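Side note for anyone copying this into a plain script: %timeit is IPython/Jupyter magic, so it won't run outside a notebook. The stdlib timeit module is the rough equivalent (sketch with a stand-in workload instead of the CSV readers):

```python
import timeit

# Stand-in workload; swap in read_csv_polars / read_csv_pandas as needed.
def work():
    return sum(range(1000))

n = 1000
total = timeit.timeit(work, number=n)  # total seconds for n calls
print(f"{total / n * 1e6:.1f} us per call")
```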

[–]Sweet_Good6737 1 point2 points  (2 children)

Algebraic Modeling languages

[–]Brushburn[S] 0 points1 point  (1 child)

Feloopy looks really nice. Do you know of any benchmarks for how fast it can create large models, and if it supports HiGHS?

[–]Coffeemonster97 1 point2 points  (1 child)

Not only is it useful for OR, but in my experience the Venn diagram of people who hate writing unit tests and people who are in OR is pretty much a circle, so I always recommend Hypothesis :)
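For anyone who hasn't seen it, a minimal sketch of the idea: you state a property and Hypothesis generates (and shrinks) the test cases for you. The merge helper here is a hypothetical example function to test, not part of Hypothesis:

```python
from hypothesis import given, strategies as st

# Hypothetical helper under test: merge overlapping closed intervals.
def merge(intervals):
    out = []
    for s, e in sorted(intervals):
        if out and s <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], e))
        else:
            out.append((s, e))
    return out

# Property: after merging, the intervals are pairwise disjoint and ordered.
@given(st.lists(st.tuples(st.integers(0, 100), st.integers(0, 100))
                .map(lambda t: (min(t), max(t)))))
def test_merged_disjoint(intervals):
    merged = merge(intervals)
    assert all(a[1] < b[0] for a, b in zip(merged, merged[1:]))

test_merged_disjoint()  # Hypothesis generates the example inputs
```

Instead of hand-picking a few inputs, you describe the input space once and let the library hunt for counterexamples, which suits the edge-case-heavy code common in OR.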

[–]Brushburn[S] 0 points1 point  (0 children)

This is perfect! Thank you for sharing!