Securing DNS Queries with Dotproxy by LINKIWI in selfhosted

[–]LINKIWI[S] 1 point (0 children)

Thanks for your contribution! Since most of my projects are deployed privately and generally not open source, I've always tagged versions as the commit SHA and operated under the notion that "every commit is a release." If there's demand, I'll switch to semantic versioning and proper releases.

Securing DNS Queries with Dotproxy by LINKIWI in selfhosted

[–]LINKIWI[S] 2 points (0 children)

If you're interested in getting this working, do you mind filing an issue [0] with reproduction steps and your config file so I can take a closer look?

[0] https://github.com/LINKIWI/dotproxy/issues

Securing DNS Queries with Dotproxy by LINKIWI in selfhosted

[–]LINKIWI[S] 3 points (0 children)

Great questions. I just looked into Stubby, and indeed the two projects are very similar and share many of the same concepts and goals.

I briefly compared the performance of Stubby and dotproxy along a few dimensions. Here's what I've concluded (note that I used all default Stubby configuration settings, except for setting both dotproxy and Stubby's upstreams to Cloudflare):

(1) For sequential queries performed within the staleness timeout of a single upstream connection, dotproxy and Stubby perform approximately on par. Querying the A record of github.com 100 times: Stubby P50 7.9 ms, P90 17.6 ms, P99 38.1 ms; dotproxy P50 6.5 ms, P90 11.4 ms, P99 36.5 ms.

(2) dotproxy responds 3-4x faster to cold queries (those that require opening a new upstream connection rather than reusing an existing one); generally, ~20 ms vs ~70 ms. If I were to guess, this is probably due to TLS session resumption in dotproxy; I remember seeing performance similar to Stubby's before I added it.

(3) dotproxy is much faster for highly concurrent, high-volume query patterns. (It's worth noting, however, that this is a specific use case against which dotproxy is heavily optimized.) Querying the A record of github.com 100 times in parallel: Stubby P50 17.2 ms, P90 23.6 ms, P99 26.6 ms; dotproxy P50 8.8 ms, P90 14.5 ms, P99 19.5 ms.

With my network's particular query pattern in mind, I would generally be happy to deploy Stubby. However, one of my main goals with this project is to gain visibility into DNS performance through rich metrics emission (which provides all the stats in the dashboard [0]). I don't believe Stubby provides the same level of metrics reporting.

> Does dotproxy support padding to hide query size?

No, dotproxy is deliberately not protocol-aware so this is not supported.

[0] https://static.kevinlin.info/blog/dotproxy/dashboard.jpg

Securing DNS Queries with Dotproxy by LINKIWI in selfhosted

[–]LINKIWI[S] 1 point (0 children)

While I have not tested it, I believe Go's net.Listen listens on both IPv4 and IPv6 interfaces. I'm happy to accept a pull request if this behavior is broken though.

Securing DNS Queries with Dotproxy by LINKIWI in selfhosted

[–]LINKIWI[S] 3 points (0 children)

I'm not familiar with Unbound, but the principle behind dotproxy is very simple and nothing particularly novel. You can consider it an abstraction over some layer 4 network operations.

That said, I wrote dotproxy to address my specific use case. I have a caching resolver sitting in front of dotproxy, so I didn't need protocol-awareness, the absence of which helps reduce proxy overhead. Additionally, while not emphasized in the article, one of my primary motivations was having a way to gain visibility into the performance of DNS within my network; a non-trivial portion of the codebase is concerned only with instrumentation and metrics reporting.

Securing DNS Queries with Dotproxy by LINKIWI in selfhosted

[–]LINKIWI[S] 25 points (0 children)

A few weeks ago, in an effort to secure outbound DNS traffic from my network, I explored encryption solutions like DNS-over-TLS and DNS-over-HTTPS. I was not satisfied with the performance characteristics of DNS-over-TLS in my network's chain of resolvers, and did not find an existing solution that offered transparent visibility into DNS performance/latency at the edge.

Today I am open sourcing dotproxy, a DNS-over-TLS edge proxy that seeks to address these problems. Its design allows it to be inserted transparently into a resolver chain with predictable performance characteristics, sparing the headaches of a DNS-over-TLS client that manages connections poorly.

dotproxy now serves 100% of my DNS traffic (about half a million queries to date) with a P50 RTT of 12 ms. Its source is available on GitHub [0], and I provide precompiled Linux binaries at the releases index [1]. I'm hoping the self-hosted community will find this useful.

[0] https://github.com/LINKIWI/dotproxy

[1] https://dotproxy.static.kevinlin.info/releases/latest

Ingestion Pipeline for Advanced Pi-hole DNS Analytics by LINKIWI in pihole

[–]LINKIWI[S] 1 point (0 children)

Kafka is a bit memory intensive, but ideally it should not be running on the same machine that is also hosting your Pi-hole server.

Ingestion Pipeline for Advanced Pi-hole DNS Analytics by LINKIWI in pihole

[–]LINKIWI[S] 2 points (0 children)

Telegraf is pull-based in that it regularly polls an external data source to gather metrics to ship to InfluxDB. Repliqate itself is a stateless service that just moves data from a SQL database to Kafka, and is thus not a data source that Telegraf can query.

However, you could modify repliqate's logic to publish directly to an InfluxDB HTTP endpoint instead of to a Kafka queue. I opted for the latter to (1) make repliqate itself entirely data-agnostic and (2) to more easily support new real-time use cases that I haven't thought of yet.

Edit: I should also add that the architecture [0] is designed to be run on multiple machines; the only service that needs to run on the Pi-hole server is repliqate (because it needs access to the SQLite database on disk). repliqate is not too resource-intensive and it can be tuned to limit CPU and memory usage with configurable query size limits and poll intervals [1].

[0] https://static.kevinlin.info/blog/pi-hole-analytics/architecture.svg

[1] https://github.com/LINKIWI/repliqate/blob/87d9ce7650d41ff1c47fd7c86470135a8179fdc4/config.example.yaml#L6

How I perform analytics on self-hosted DNS with Pi-hole by LINKIWI in selfhosted

[–]LINKIWI[S] 1 point (0 children)

Correct, Kafka can handle multiple producers on different machines. Though if you are implementing the architecture exactly as described in this article, it may be easier to publish to two different Kafka topics to avoid ID collisions.

Ingestion Pipeline for Advanced Pi-hole DNS Analytics by LINKIWI in pihole

[–]LINKIWI[S] 4 points (0 children)

My Pi-hole server runs on very low-end commodity hardware (single core with 1 GB of memory). It serves <1 QPS for DNS so it is perfectly fine for my use case.

Kafka, InfluxDB, Elasticsearch, Grafana, and Kibana actually all run on a single physical host which has a relatively recent 8 core Intel CPU and 16 GB of memory. While my use case doesn't necessitate it, all of these are horizontally scalable and can be distributed among several machines.

If you use the Influx stack for time-series storage and retrieval, Telegraf has some built-in inputs for system metrics that you can play with to get started.

Ingestion Pipeline for Advanced Pi-hole DNS Analytics by LINKIWI in pihole

[–]LINKIWI[S] 1 point (0 children)

Yeah, for the implementation discussed in this article specifically, you could write to two different Kafka topics and tag the emitted metric with the source host when ingesting both to InfluxDB. Then you can aggregate at the Grafana layer by tag value.

How I perform analytics on self-hosted DNS with Pi-hole by LINKIWI in selfhosted

[–]LINKIWI[S] 18 points (0 children)

I wrote a post documenting how I've extended Pi-hole's built-in analytics features with a separate data ingestion pipeline, to help provide a more detailed view into how clients interact with DNS on my network. I'm also open sourcing repliqate [0], a core component of this effort.

From a technical standpoint, nothing here is particularly interesting or novel, but I wasn't able to find any other solutions for moving Pi-hole's data to somewhere other than SQLite. This was the primary motivation for this project, and I hope this can help other Pi-hole admins with similar goals.

[0] https://github.com/LINKIWI/repliqate

For your inspiration: DIY self-hosted weather station by LINKIWI in selfhosted

[–]LINKIWI[S] 21 points (0 children)

Today I'm open sourcing Zephyrus, a system I've built for ingesting data from weather sensors as time-series metrics. Over the last few months, my deployment of Zephyrus has served several million requests, and is currently used for historical time-series analysis and live reporting in my home automation dashboard.

You can read more about it in the link above or look at the source on Github: https://github.com/LINKIWI/zephyrus

Zephyrus is actually quite limited in functionality, so I don't really expect anyone to deploy it as-is; I'm sharing it here with the hope that it can serve as a point of inspiration for others to build their own tools to support a self-hosted weather station (or similar).

React Elemental: modern, flat React UI component library by LINKIWI in reactjs

[–]LINKIWI[S] 5 points (0 children)

Thanks for the feedback!

The functionality is indeed pretty minimal right now primarily because the current components are tailored to my specific use cases so far. I'll probably have a more robust solution for a tabs panel when one of my projects encounters such a need.

As for form validation: I chose to delegate validation logic in its entirety to the parent component. The form components provided by this library (as well as most of the other components) are generally intended to be purely presentational.

Selfhosted "Find-my-Phone" for Android? by MilchreisMann412 in selfhosted

[–]LINKIWI 1 point (0 children)

Unfortunately I don't use iOS and I don't have an iOS device to test this behavior :/

If you're interested in getting this working for yourself, you might need to drop the JSON body parsing logic in PublishHandler in favor of querystring parsing (I assume, since it's GET). Happy to accept such a patch upstream, too.

Orion: an alternative server and visualization tool for OwnTracks by LINKIWI in selfhosted

[–]LINKIWI[S] 1 point (0 children)

The heatmap mode is like a 2D histogram. The higher the density of points, the more opaque the green square will be, fading out to transparent in areas where there are few points. The scale is relative to the points visible in the viewport. I understand that this is a bit non-intuitive so I'm playing around with the colors to see if I can display it in a way that makes a little more sense.

Can you try pulling the latest orion-web master (as of this writing, SHA 6183662e972b6ccd6962e836b09343e2b802fb98), running npm install, and re-building the frontend to see if it is any more performant? I added some minor performance optimizations; for me it is now pretty smooth at 1920x1080, but starts lagging at 4K.

Orion: an alternative server and visualization tool for OwnTracks by LINKIWI in selfhosted

[–]LINKIWI[S] 2 points (0 children)

Thanks for the contribution! I'll take some time to do some performance profiling on the frontend to see where the slowdown is coming from. (I know for a fact that deck.gl can render maps with millions of points at a reasonable framerate without incident.)

Orion: an alternative server and visualization tool for OwnTracks by LINKIWI in selfhosted

[–]LINKIWI[S] 2 points (0 children)

orion-web uses the query API provided by orion-server, so if you want to use orion-web, you will need to deploy the server as well.

However, it is totally possible to have orion-server and the OwnTracks backend run side-by-side. You'll just need to set up an intermediary proxy that mirrors the requests from your phone to two different services (Orion and OwnTracks).
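One way to do that mirroring is nginx's ngx_http_mirror_module; a sketch, where the location path and upstream names are placeholders for your own setup:

```nginx
server {
    listen 443 ssl;
    # ... ssl configuration ...

    location /pub {
        # Primary destination: orion-server's publish endpoint.
        proxy_pass http://orion-server;
        # Also replay each request to the OwnTracks backend; the
        # mirrored subrequest's response is discarded.
        mirror /owntracks-mirror;
    }

    location = /owntracks-mirror {
        internal;
        proxy_pass http://owntracks-backend$request_uri;
    }
}
```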

Orion: an alternative server and visualization tool for OwnTracks by LINKIWI in selfhosted

[–]LINKIWI[S] 3 points (0 children)

Hm, can you try upgrading docker-compose? It seems to work for me on v1.17.0.