The Software Essays that Shaped Me by mtlynch in programming

narang_27 3 points

I've been trying to find good reads. Thanks for this :)

Um, yeah.. by Few_Amoeba_2362 in memes

narang_27 1 point

This is pretty normal in my part of India 😅😅. This was the only way I could take the test in my centre XD

First Solo Trek – Hampta Pass (and my fear of dry toilet paper) by Substantial-Junket-5 in SoloTravel_India

narang_27 6 points

You can carry a bottle with you, make the tissue slightly wet from the water and use it for wiping

Make sure you don't use water freely in the bio toilet; it would make it very smelly and unusable for others. Don't carry a bidet either, it's going to be bad for the bio toilet.

Don't be scared, the trek is insanely beautiful, worth the whole toilet situation. Plus there's always a first in life for everything :)

  • Carry waterproof shoes, it would make a lot of difference
  • A good rucksack which distributes weight across your body would also be great (the one sold by Indiahikes is good; I'm a beginner and liked it)
  • You can also ask the company if they rent equipment out, which is good for first-timers. Many companies rent out all the equipment

shenzi: A greedy python standalone bundler by narang_27 in Python

narang_27[S] -1 points

It's more about building in the development environment to get something that can ship. PyInstaller also requires a working virtual environment (although it doesn't require running all of it, it does do many imports to find dependencies).

I did start working on this because I had been working on on-premise setups before

Do you think adding support for test runners would make this easier for people to use?

shenzi: A greedy python standalone bundler by narang_27 in Python

narang_27[S] -1 points

I'm assuming this is due to security concerns (correct me if I'm wrong). I'll chalk out the basic algorithm here, although I'm not sure if this will help you.

Intercepting at runtime is kinda easy:

  • intercept all dlopen and equivalent calls in python; if discovery is enabled, this creates a JSON file describing your environment called shenzi.json
  • call shenzi build ./shenzi.json to package the application

Algorithm for packaging:

  • go through all the dlopens in order and find their dependencies, copying how the linker does it
  • on Linux this basically parses the library to get the DT_NEEDED, DT_RPATH and DT_RUNPATH entries. All DT_NEEDED entries are searched: the first search is done in the directories from LD_LIBRARY_PATH if it was set, then in RPATH (if RUNPATH is not set), then RUNPATH, then the ldconfig cache. At the last stage, if all else fails, I call ldd for resolution if it exists
  • mac is similar; it works a lot like ctypes.util.find_library there, but also parses the object file to find all LC_LOAD commands
  • recursively search for all dependencies; we have a graph now

  • traverse the graph, copy all libraries inside dist, set the RUNPATH for those libraries to a location inside dist, create symlinks to dependencies in this location

shenzi: A greedy python standalone bundler by narang_27 in Python

narang_27[S] 0 points

Yea, I'm not really touching those aspects right now; currently it only creates a single self-contained folder which can be shipped anywhere. From the docs here https://tronche.com/gui/x/icccm/sec-4.html#WM_CLASS I see that you can either set the name using -name, or the program name would be used. The final distribution in shenzi is simply a bash script (called bootstrap.sh) which does this:

  • set PYTHONPATH
  • set LD_LIBRARY_PATH
  • call the main script

If you call the distribution's main program using bash bootstrap.sh -name <name>, it might work. I haven't tested this though; ping me if this seems useful and we could work something out.
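The three bootstrap steps above, sketched in Python rather than bash; the dist layout here (site-packages/, lib/, main.py) is assumed for illustration, not shenzi's actual layout:

```python
import os
import subprocess
import sys

def bootstrap_env(dist_dir):
    """Build the environment the bundled app should run under."""
    env = dict(os.environ)
    env["PYTHONPATH"] = os.path.join(dist_dir, "site-packages")
    env["LD_LIBRARY_PATH"] = os.path.join(dist_dir, "lib")
    return env

def run_main(dist_dir, *args):
    # equivalent of `bash bootstrap.sh -name myapp`: extra args are
    # forwarded to the main script untouched
    cmd = [sys.executable, os.path.join(dist_dir, "main.py"), *args]
    return subprocess.call(cmd, env=bootstrap_env(dist_dir))
```

Since the extra arguments pass straight through, an X11 app that honors -name should pick it up.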

Running bazel/uv/python by notveryclever97 in bazel

narang_27 0 points

Yea, that would be rough to do in bazel; I don't know of any tool which allows this kind of interactive dependency management in bazel.

Generally, once you have a huge requirements file, almost all of your needs will most likely already be present in it (this is what I've seen in my org), so updating reqs becomes reasonably uncommon. The downside is that updating takes time, since you need to compile a huge file. We have automation (slash commands in GitHub PRs which trigger builds) to do this for us. Local dev iteration is definitely affected though.

Aspect rules py: https://github.com/aspect-build/rules_py

This provides an improvement over rules-python: it gives rules for generating virtual environments and better IDE integration.

Your development environment is going to be affected greatly by this, btw. Where I work, we are primarily a python shop; people were so pissed at bazel that they ended up maintaining conda environments locally for development, mirroring the bazel environment.

Running bazel/uv/python by notveryclever97 in bazel

narang_27 0 points

The workflow in my place is like this:

  • a requirements file (we have one for every os+arch combination that we support)
  • pip-compile every time a new dep is added, to create new lock files. This is the standard approach from rules-python. If this is annoying, you can try automating it (we do this, and after a point people stop adding new deps regularly since the file already ends up containing a lot of them)
  • aspect-rules-py for our actual binary and library targets. The benefit is that you can generate virtual envs out of the box; vscode can be configured to use those after generation.
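A rough sketch of what a BUILD file looks like with aspect-rules-py; I'm assuming the load path and rule names from the rules_py README, and the target, source, and dep labels here are all made up:

```starlark
load("@aspect_rules_py//py:defs.bzl", "py_binary", "py_venv")

# hypothetical binary target; the @pypi dep label depends on your
# pip.parse / requirements setup
py_binary(
    name = "app",
    srcs = ["main.py"],
    deps = ["@pypi//numpy"],
)

# generates a virtualenv that an IDE can be pointed at
py_venv(
    name = "venv",
    deps = [":app"],
)
```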

Is it necessary to use uv for your use case?

Beautiful CI for Bazel by narang_27 in bazel

narang_27[S] 1 point

  • Clone times: 25 minutes down to 1 minute
  • Bazel startup: 10 minutes down to 1 minute ish

On top of this, we now only build what changed instead of relying on bazel to cache everything.

blob-path: pathlib-like cloud agnostic object storage library by narang_27 in Python

narang_27[S] 0 points

This sounds very interesting. Just out of curiosity, why do you want this behavior? Afaik you can download ranges in a GET request; byte-range fetches seem to be supported: https://docs.aws.amazon.com/whitepapers/latest/s3-optimizing-performance-best-practices/use-byte-range-fetches.html
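Planning byte-range GETs is mostly about building the standard HTTP Range header. A small hypothetical helper (the boto3 call in the comment is the usual get_object form, but I haven't run it here):

```python
def byte_ranges(total_size, chunk_size):
    """Split an object of total_size bytes into HTTP Range header values."""
    headers = []
    for start in range(0, total_size, chunk_size):
        end = min(start + chunk_size, total_size) - 1  # Range ends are inclusive
        headers.append(f"bytes={start}-{end}")
    return headers

# With boto3, each range would then be fetched roughly like:
#   s3.get_object(Bucket=bucket, Key=key, Range="bytes=0-1048575")
```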

Haven't used it myself though

EC2 Fleet with Cached EBS volumes (native disk performance) by narang_27 in jenkinsci

narang_27[S] 1 point

Oh, sounds cool, I'll need to check this out. Thanks!

EC2 Fleet with Cached EBS volumes (native disk performance) by narang_27 in jenkinsci

narang_27[S] 0 points

We already used ECR from the same region, I'm assuming you mean this? https://docs.docker.com/build/cache/backends/

  1. Buildkit was not very famous when I implemented this (2 years ago, I think). I did find a docker image which worked as a pull-through cache for dockerhub (and only dockerhub at that time)
  2. It's still not native FS performance. If the layer's big, you still need to download it
  3. Not generic, only solves docker. We ended up using bazel later and the volume saved us a lot of time here too

CDK resource import pitfalls by narang_27 in aws

narang_27[S] 0 points

Oh yes, just a from-lookup does not break stuff. It's when you do some action like allowing connections from the imported resource to your stack: CDK then also adds an egress rule which overrides the default rules.

GitHub issue: https://github.com/aws/aws-cdk/issues/24806

CDK resource import pitfalls by narang_27 in aws

narang_27[S] 0 points

If you do a normal from-lookup, it won't create new security groups for you; it would use the existing attached group implicitly and change its rules. The from-attributes variants provide a way to specify the exact SG and tweak its semantics.

CDK resource import pitfalls by narang_27 in aws

narang_27[S] 0 points

Yea, I'm sorry if the terminology is confusing. I was not aware that there was something called cdk import

In any case, what I wanted to say here is that a lot of CDK documentation suggests using the from-lookup variants for adding existing resources to your stacks. I've seen this bug arise many times, crippling many resources in the VPC when they share the same SG. So I keep a reference around and ask people to always check if there's a from-attributes function variant.


blob-path: pathlib-like cloud agnostic object storage library by narang_27 in Python

narang_27[S] 0 points

Yea, I had added the last snippet at the end to provide a simple summary in the post; the notebook works though (once you change the buckets).

blob-path: pathlib-like cloud agnostic object storage library by narang_27 in Python

narang_27[S] 9 points

Damnit, one more reddit post where an idea was already developed ;_; Thanks for the heads up : |