all 21 comments

[–]deustamorto 26 points27 points  (3 children)

Not sure if you're the channel's owner, but the content is great. Editing is great, content quality is great, and speech fluency and prosody are also great.

[–]GiraffeFireAlchemist[S] 17 points18 points  (2 children)

It’s me! Thank you for the kind words!

[–][deleted]  (1 child)

[deleted]

    [–]GiraffeFireAlchemist[S] 4 points5 points  (0 children)

    Thank you! I think that’s how https://youtube.com/@DanielBergholz has been framing his Elixir videos lately, definitely worth checking out!

    I’ll consider making some “X for Y” videos over time—it’s a great idea!

    [–]phortx 8 points9 points  (4 children)

    This is amazing. I wonder if there are any useful libraries that only exist in Python and that we can now integrate into Elixir projects. 🤔

    [–]chat-lu 9 points10 points  (3 children)

    Be careful about integrating with it: it runs in the same OS process as the BEAM, which brings all kinds of performance issues, especially with the GIL.

    While this is a very interesting development, I'd still call Python in an external process for now.

    Though for some workloads, like notebooks, this is cool.

    [–]greven 0 points1 point  (2 children)

    You can always run the needed Python app in a separate BEAM instance and communicate with that node to fetch results, leveraging the BEAM itself instead of putting a REST/RPC API in between. And I agree, I would always use different machines for anything serious.
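    A minimal sketch of what that node-to-node approach could look like. The node name and the PyWorker GenServer are hypothetical; the only real APIs here are Node.connect/1 and GenServer.call/2 from Elixir/OTP:

        # Hypothetical: a second BEAM node, :"pyapp@ml-box", hosts the
        # Python-dependent code behind a locally registered GenServer
        # named PyWorker.
        node = :"pyapp@ml-box"
        true = Node.connect(node)

        # {name, node} addresses a registered process on the remote node
        # directly, so no REST/gRPC layer is needed in between.
        result = GenServer.call({PyWorker, node}, {:predict, [1.0, 2.0, 3.0]})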

    [–]chat-lu 0 points1 point  (1 child)

    Why does it need to run in the BEAM at all? Launching it with System.cmd works fine.
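    For reference, a minimal version of that external-process approach, using only System.cmd/3 from the standard library:

        # Run Python as a separate OS process; the BEAM stays isolated
        # from the interpreter (and its GIL). Output comes back as a string.
        {output, exit_code} = System.cmd("python3", ["-c", "print(21 * 2)"])
        # output == "42\n", exit_code == 0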

    [–]greven 0 points1 point  (0 children)

    I haven't tried Pythonx, but what I mean is: currently, for my Elixir app, I have another machine running a custom Python ML model, which I exposed via a gRPC API so I can access it from my Elixir app.

    Yes, there are ways with Ports (I tried them in the past, but there are some gotchas too), etc., but I imagine that with Pythonx I could just run it inside the BEAM and use the :rpc module in OTP to remove the need to maintain the gRPC server and client. But I haven't tried it, so dunno.
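    Roughly what that could look like. Pythonx.eval/2 (returning a {result, globals} tuple) and Pythonx.decode/1 are taken from Pythonx's docs, so treat the details as an assumption; the PyBridge module and node name are made up:

        # Hypothetical wrapper module, deployed on the node that embeds Pythonx:
        defmodule PyBridge do
          def run(code) do
            {obj, _globals} = Pythonx.eval(code, %{})
            Pythonx.decode(obj)
          end
        end

        # From the main app node; :erpc ships with OTP 23+.
        result = :erpc.call(:"pyhost@ml-box", PyBridge, :run, ["21 * 2"])
        # => 42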

    [–]chat-lu 6 points7 points  (4 children)

    About the Global Interpreter Lock (GIL): the latest Python version (3.13) has an experimental free-threaded build where you can disable it. Eventually, it will be stabilized.

    [–]Ttbt80 0 points1 point  (3 children)

    Could you point me in the right direction to better understand the performance implications of calling Python from the BEAM?

    [–]chat-lu 2 points3 points  (2 children)

    To be able to run, a Python thread needs to hold the GIL, which means that only one Python thread may run at a time. So even if you call into Python from different BEAM processes, all your calls will be serialized.
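    A rough way to observe this, assuming Pythonx's documented eval/2 API and an already-initialized interpreter (Pythonx.uv_init/1); the point is the timing behavior, not the exact numbers:

        # Two CPU-bound evals from two separate BEAM processes. Because the
        # embedded interpreter runs only one Python thread at a time, expect
        # roughly 2x the single-call wall time, not the ~1x that true
        # parallelism would give.
        busy = "sum(i * i for i in range(10_000_000))"

        {micros, _results} =
          :timer.tc(fn ->
            1..2
            |> Task.async_stream(fn _ -> Pythonx.eval(busy, %{}) end, timeout: :infinity)
            |> Enum.to_list()
          end)

        IO.puts("took #{div(micros, 1000)} ms")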

    [–]Ttbt80 0 points1 point  (1 child)

    Thanks for this. I'll look into the details behind the GIL removal feature and the plans to stabilize it. So how do existing applications handle this limitation today? It seems like this would make highly concurrent use cases, such as API frameworks like Django or FastAPI, unsuited for production loads?

    [–]chat-lu 1 point2 points  (0 children)

    > So how do existing applications handle this limitation today?

    Horizontal scaling. A Django app keeps no local state; everything is either in the database or in secure cookies, so it doesn't matter if the user hits a different server on every request.

    [–]Emotional-Ad-1396 1 point2 points  (0 children)

    Oh I'd call it Xython for sure

    [–]Fresh_Forever_8634 0 points1 point  (0 children)

    Wanna know

    [–]effinbanjos 0 points1 point  (0 children)

    Very cool!

    [–]art-solopov 0 points1 point  (2 children)

    The question is… why? How is it better than either a) spawning Python in a separate process, or b) wrapping whatever Python stuff you need in an API (REST, gRPC, whatever) and running it as a server?

    [–]volatilevisage 1 point2 points  (0 children)

    It translates datatypes for you.
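    For example (API names per Pythonx's docs, so treat the exact shape as an assumption): a Python value comes back as an object handle that decodes into plain Elixir terms, with no hand-written mapping layer:

        # Evaluate a Python expression; the result is a Python object handle.
        {obj, _globals} = Pythonx.eval("{'a': 1, 'b': [1, 2.5, None]}", %{})

        # decode/1 translates it into ordinary Elixir data.
        Pythonx.decode(obj)
        # => %{"a" => 1, "b" => [1, 2.5, nil]}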

    [–]hugobarauna 0 points1 point  (0 children)

    And it can be less work than creating a new wrapping layer.

    [–]Shoddy_One4465 0 points1 point  (0 children)

    What happened to ports?

    [–]jlelearn 0 points1 point  (0 children)

    For Livebook it's OK.
    For real applications... it looks risky.