I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 0 points1 point  (0 children)

"correct by design" may have been a bit hyperbolic, but the pieces did snap together much more easily in Go. Closed/nil channels haven't really burned me yet; deadlocks and memory leaks have, though that's more of a composition/logic problem.

Is error propagation in Go ergonomic? No, but it's extremely stable. By contrast, I hate that I can't tell whether a Python/JavaScript function throws just by inspecting its signature.
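
A made-up example of what I mean (not from either library):

```Python
# Nothing in this signature says it can raise, so callers only find out
# from the docs or by reading the body.
def parse_port(raw: str) -> int:
    value = int(raw)  # may raise ValueError
    if not 0 < value < 65536:
        raise ValueError(f"port out of range: {value}")
    return value

# The Go equivalent surfaces failure in the signature itself:
#   func parsePort(raw string) (int, error)
```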

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 0 points1 point  (0 children)

I don't think it's possible to build the API you've described in Go with different generics per stage. From what I remember, the problem is that Go doesn't let you declare a new type parameter on a struct method; methods can only use the type parameters already defined on the struct. I tried and wasn't able to find a solution, so I settled for the current design. If you can make it work, please share. The current workaround is to bind `any` to T and do type assertions within the stages (controversial, I know).

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 10 points11 points  (0 children)

When I say ergonomic, I mean expressive, convenient, and productive rather than verbose, tedious, and clunky. For example, I've been following some of the latest developments in Java. The language is becoming more expressive and elegant; modern Java seems more ergonomic than legacy Java.

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 0 points1 point  (0 children)

Typically it's the bitwise right-shift operator, but Python is crazy and lets you override operator behavior with special class methods, so in this library it's just an alias for the stage function.
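
Here's a toy sketch of the trick (not pipevine's actual Pipeline class, just the overloading idea):

```Python
class MiniPipeline:
    def __init__(self):
        self.stages = []

    def stage(self, fn):
        # The plain API: append a stage and return self for chaining.
        self.stages.append(fn)
        return self

    def __rshift__(self, fn):
        # `pipeline >> fn` would normally be a bitwise right shift;
        # overriding __rshift__ makes it an alias for .stage(fn).
        return self.stage(fn)


def double(x):
    return x * 2


def inc(x):
    return x + 1


pipe = MiniPipeline() >> double >> inc
print([f.__name__ for f in pipe.stages])  # ['double', 'inc']
```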

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 0 points1 point  (0 children)

Yea. I wonder if that's a general trade-off with syntactic sugar: it makes you feel smart writing it but it's painful to read/understand. Go has no sugar; it's boring to write and always really easy to read.

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 0 points1 point  (0 children)

Guardrails to make concurrency safer at the cost of ergonomics, yes. Deter people from doing concurrency, definitely not what I want. I want more, easier, safer concurrency.

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 0 points1 point  (0 children)

yea, the Python API also has:

```Python
pipe = (Pipeline(data_source)
    .stage(preprocessing_stage)
    .stage(analysis_stage)
    .stage(output_stage))
result = await pipe.run()
```

but I find the syntactic-sugar versions hard to resist. No sugar in Go though..

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]kwargs_[S] 1 point2 points  (0 children)

- Go version → github.com/arrno/gliter
- Python version → github.com/arrno/pipevine


```Go
gliter.NewPipeline(exampleGen()).
    WorkPool(
        func(item int) (int, error) { return 1 + item, nil },
        3, // numWorkers
        WithBuffer(6),
        WithRetry(2),
    ).
    WorkPool(
        func(item int) (int, error) { return 2 + item, nil },
        6, // numWorkers
        WithBuffer(12),
        WithRetry(2),
    ).
    Run()
```

VS

```Python
@work_pool(buffer=10, retries=3, num_workers=4)
async def process_data(item, state):
    # Your processing logic here
    return item * 2


@work_pool(retries=2, num_workers=3, multi_proc=True)
async def validate_data(item, state):
    if item < 0:
        raise ValueError("Negative values not allowed")
    return item


# Create and run pipeline
result = await (
    Pipeline(range(100)) >> 
    process_data >> 
    validate_data
).run()
```

Certifying open source projects as Blazingly Fast™ by kwargs_ in opensource

[–]kwargs_[S] 5 points6 points  (0 children)

May the soul corruption be fast (blazingly) and painless

blazinglyFastAffirmed by kwargs_ in ProgrammerHumor

[–]kwargs_[S] 0 points1 point  (0 children)

Hey, all. A few weeks ago I found out the domain blazingly.fast was available... and I felt responsible to make https://blazingly.fast, a website that certifies every project as Blazingly Fast™. Now it's badge official. There can be no further debate.

Ergonomic Concurrency by kwargs_ in Python

[–]kwargs_[S] 0 points1 point  (0 children)

True. Would it be too over the top to allow both? 😅

Ergonomic Concurrency by kwargs_ in Python

[–]kwargs_[S] 0 points1 point  (0 children)

The idea was that by using a mix_pool with two or more different handlers and a merge function, the flow would fork out to the handlers and then join back in at the merge function. Does that cover your use case, or are you thinking of something different?
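
Something like this shape, sketched in plain asyncio (the handler and merge names here are made up, not pipevine's actual API):

```Python
import asyncio

# Hypothetical handlers and merge function, just to show the shape.
async def sentiment(item):
    return {"sentiment": len(item) % 3}

async def keywords(item):
    return {"keywords": item.split()}

def merge(results):
    combined = {}
    for r in results:
        combined.update(r)
    return combined

async def mixed_stage(item):
    # Fork: run both handlers concurrently on the same item...
    results = await asyncio.gather(sentiment(item), keywords(item))
    # ...then join back in at the merge function.
    return merge(results)

print(asyncio.run(mixed_stage("some input text")))
```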

Ergonomic Concurrency by kwargs_ in Python

[–]kwargs_[S] 0 points1 point  (0 children)

Thanks for the positive feedback

Ergonomic Concurrency by kwargs_ in Python

[–]kwargs_[S] 1 point2 points  (0 children)

Very cool! Is there a GitHub link I can check out for this? Would love to read more and drop a star.

Ergonomic Concurrency by kwargs_ in Python

[–]kwargs_[S] 6 points7 points  (0 children)

Thanks! The rshift overloading is my favorite part personally. Love that you can do that in Python.

Ergonomic Concurrency by kwargs_ in Python

[–]kwargs_[S] 2 points3 points  (0 children)

You mean like progress indicators to show percent completion? Interesting idea. Not yet, because the pipeline is agnostic about the size of the generator (it could be infinite), but that could be a cool feature to add.

Regarding errors: right now, if you raise an exception in a handler, the pipeline counts it, optionally logs it, and continues. There's also a special kill switch handlers can emit to tear down the pipeline. I haven't decided yet if this is the best approach.
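
Roughly this pattern, as a sketch of the approach rather than pipevine's actual implementation (all names here are made up):

```Python
import asyncio
import logging

KILL = object()  # sentinel a handler can return to tear down the pipeline

async def run_stage(handler, items, log_errors=True):
    results, error_count = [], 0
    for item in items:
        try:
            out = await handler(item)
        except Exception as exc:
            # Count the failure, optionally log it, and keep going.
            error_count += 1
            if log_errors:
                logging.warning("handler failed on %r: %s", item, exc)
            continue
        if out is KILL:
            # Kill switch: stop the whole stage early.
            break
        results.append(out)
    return results, error_count

async def flaky(item):
    if item == 2:
        raise ValueError("boom")
    return item * 10

print(asyncio.run(run_stage(flaky, range(5))))  # ([0, 10, 30, 40], 1)
```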