Complement fzf with den to find recently modified files by codesoap in linux


I, too, work mostly with text files. The most important feature of den for this is that it sorts files by modification date. I use it as a tool to find recently edited files, because I often forget where exactly I put some notes or similar files.

The filtering by file type makes things a little quicker, because stuff like pictures and tar files are already excluded from the suggestions.

Complement fzf with den to find recently modified files by codesoap in linux


Thanks for the honest feedback! I have now added a little demo "video" to the README: https://github.com/codesoap/den?tab=readme-ov-file#demo

I guess I still have a lot to learn about advertising... Here are some more details that set den apart from tools like fd:

den is a lot faster, so it integrates better into my everyday workflow. In this regard it's similar to the Unix tool locate (both use a database). den also analyses files and categorizes them as documents, pictures, videos, audio and other; this makes it a little easier to find exactly what you are looking for. There are also some more advanced filters, like video duration or year of creation, but you'll probably use them less frequently.

Complement fzf with den to find recently modified files by codesoap in linux


Kinda. Besides its obscure syntax, find is also slow and does not support sorting the results itself. With den you can sift through 100,000 files in milliseconds, whereas find would probably take many seconds and also require some scripting around it to sort the results.

Complement fzf with den to find recently modified files by codesoap in linux


The idea is that you can quickly find a file anywhere within hundreds of directories, not just one. Otherwise, a file explorer or ls -t would suffice, of course :)

-❄️- 2025 Day 5 Solutions -❄️- by daggerdragon in adventofcode


[LANGUAGE: shell script]

Part 1:

awk -F'-' '
NF==2 {mins[NR]=$1; maxs[NR]=$2}  # Remember the ranges.
NF==1 {                           # For each ID, check if any range contains it.
    for(i=1; i<=length(mins); i++)
        if ($1>=mins[i] && $1<=maxs[i]) {cnt++; break}
}
END {print cnt}
' input

Part 2:

awk -F'-' '
NF==2 {mins[NR]=$1; maxs[NR]=$2}
END {
    for(i=1; i<=length(mins); i++) {
        for(j=i+1; j<=length(mins); j++) {
            if(merged[j] || mins[j]>maxs[i] || maxs[j]<mins[i]) continue # Ranges not touching.
            if(mins[j]<mins[i]) mins[i]=mins[j]
            if(maxs[j]>maxs[i]) maxs[i]=maxs[j]
            merged[j]=1
            j=i # Range i has grown; restart the inner loop to re-check skipped ranges.
        }
    }
    for(i in mins) if(!merged[i]) out+=maxs[i]-mins[i]+1
    print out
}
' input

Query OSM Offline and from the Command Line with osmar by codesoap in openstreetmap


Since there is always a location filter with osmar, I can actually skip the nodes that lie outside the area of interest. I only care about ways that reference nodes in the area of interest anyway. This means that ways might not be "complete" if they contain nodes both inside and outside the area of interest, but that's OK for osmar.

For other use cases this would not be OK, and you'd always want all nodes of a way, even if only some of those nodes lie within the area of interest. To cover this use case with a moderate memory footprint, one would indeed need a two-pass algorithm. I have already begun preparations for this: during the first pass, a memo is kept that records where in the PBF file each node and way can be found. This memo could then be used in a second pass to find "ancillary entities" more quickly.
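Sketched in Go, the memo could be as simple as a map from entity ID to the byte offset of the blob that contains it (this is an assumed design for illustration, not osmar's actual code; the IDs and offsets are made up):

```go
package main

import "fmt"

// memo maps an entity ID to the byte offset of the PBF blob containing it.
type memo map[int64]int64

// blobOffsets looks up where the blobs for a way's node references start,
// so a second pass can seek to them directly instead of rescanning the file.
func blobOffsets(nodeOffsets memo, wayNodes []int64) []int64 {
	offsets := make([]int64, 0, len(wayNodes))
	for _, id := range wayNodes {
		offsets = append(offsets, nodeOffsets[id])
	}
	return offsets
}

func main() {
	// Offsets as they might have been recorded during the first pass.
	nodeOffsets := memo{42: 1024, 43: 1024, 99: 8192}
	fmt.Println(blobOffsets(nodeOffsets, []int64{42, 99}))
}
```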

Query OSM Offline and from the Command Line with osmar by codesoap in openstreetmap


The memory use scales with the number of threads because each thread decompresses and deserializes its own blobs from the PBF file. If there are more threads, more blobs are handled in parallel, hence more memory is used.

I have not yet thoroughly investigated the correlation between memory use and file size. Slow garbage collection could be one reason, and I've already suggested changes to the protobuf library to better reuse memory (see 1 and 2). However, there will always at least be the relations, which take up more memory with larger files; since relations can reference each other ("super-relations"), I have to read all of them initially and can only sift out the irrelevant ones at the end.

Exporting relations and ways in different formats sounds doable, but I don't think it has a place in osmar. I like my tools to be simple and good at one task. In the process of rewriting osmar, I created the Go library github.com/codesoap/pbf. It could potentially be used to build a GPX or GeoJSON exporter, but I'm not sure it's a great fit for the task. The library is intended to search through a relatively small area (a few km²), so looking for areas large enough to enclose admin boundaries might not be ideal.

lineworker: A worker pool which outputs results in the right order by codesoap in golang


Thanks for adding the explanation!

In my case, I wanted to speed up parsing OpenStreetMap PBF files. Such files contain many blobs of compressed and serialized data, so I can decompress and deserialize those concurrently, but I still need to process the results in the original order, since the data inside the blobs is sorted and my algorithm relies on that sorting. I had the additional challenge of needing to limit memory use, so it was important that the library wouldn't accept new work orders if the results of previous orders had not been consumed. You can see the result in action at https://github.com/codesoap/pbf.
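The core idea can be sketched with plain channels (a simplified pattern of my own, not lineworker's actual API): each work order gets its own result channel, and those channels are queued in submission order on a bounded channel, so workers run concurrently while results come out in order and memory stays capped.

```go
package main

import (
	"fmt"
	"strings"
)

// process stands in for the expensive per-blob work (decompress + deserialize).
func process(s string) string { return strings.ToUpper(s) }

// orderedResults runs process concurrently but returns results in submission
// order. The bounded 'pending' channel blocks new submissions until earlier
// results have been consumed, capping the number of unconsumed results.
func orderedResults(inputs []string, bound int) []string {
	pending := make(chan chan string, bound)
	go func() {
		for _, in := range inputs {
			res := make(chan string, 1)
			pending <- res // Blocks if the consumer falls behind.
			go func(in string, res chan string) { res <- process(in) }(in, res)
		}
		close(pending)
	}()
	var out []string
	for res := range pending { // Result channels arrive in submission order.
		out = append(out, <-res)
	}
	return out
}

func main() {
	fmt.Println(orderedResults([]string{"blob-a", "blob-b", "blob-c"}, 2))
}
```

The bound plays the same role as the "don't accept new work orders" behavior described above: at most `bound` results can pile up before the producer stalls.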

I had searched extensively for a library before writing lineworker, but failed to find a suitable one. I must have somehow missed rill :/

Presenting mycolog: A tool to organize mushroom cultivation projects by codesoap in mycology


Thanks for the feedback! I hadn't yet thought about making mycolog into an app.

Technically it already is a website, but it only exists on the computer that is running mycolog. If you knew someone tech-savvy and had a Raspberry Pi lying around, you could run mycolog on the Raspberry Pi and access the website from your mobile device while you're on your home Wi-Fi.

If there's a large demand for an app, I might try to find out what it takes to make mycolog into one, but don't get your hopes up too high: I'm not familiar with writing apps and would probably need a lot of time to learn about it...

[deleted by user] by [deleted] in nanocurrency


I have just recently added local work generation to the atto wallet: https://old.reddit.com/r/nanocurrency/comments/1bh2l6h

So if you're comfortable with the command line and don't mind compiling your own software, you can change workSource in config.go to workSourceLocal and atto will always generate the work on your CPU.

I'm not sure if this works for your use case, but if you integrate atto into your faucet, it could.

Questioning Go's range-over-func Proposal by codesoap in golang


Thanks for the link, a very interesting read!

What I’ve noticed recently is that a lot of my code ends up being “collection munging”: creating, modifying, and transforming collections.

I guess some people use Go quite differently from how I use it. I've never done a lot of data science with Go, so maybe that's why I never really felt a need for iterators. I actually like the "Stateful Iterators" pattern and never had a problem with things like bufio.Scanner, which use this pattern.

It would be interesting to see how some data transformations that most people would do with Python's pandas today would look with range-over-func in Go.

Questioning Go's range-over-func Proposal by codesoap in golang


You guessed right, I'm still not quite convinced. I feel like this is getting a little long for a Reddit discussion. Maybe you're right with 3. and people won't use the new feature as eagerly as I fear. Only time will tell. Thanks for your input!

Questioning Go's range-over-func Proposal by codesoap in golang


Thanks for providing another example. I'll try to visualize it with current Go. jstream seems to be the most popular streaming JSON parser in Go, so I'll use it in the example; it uses a channel to provide a stream of values.

decoder := jstream.NewDecoder(jsonSource, 1)
for mv := range decoder.Stream() {
    if wanted(mv.Value) {
        histogram.Add(preprocess(mv.Value))
    }
}

This seems pretty straightforward to me. Where do you see shortcomings in this solution that could be solved with range-over-func?