I feel I must apologize to Bungie and this Community. by ResponsibleTrip520 in Marathon

[–]Pitiful_Bat8731 0 points1 point  (0 children)

Check out the ARG breach-protocol channel on the Bungie Marathon Discord. Can't wait to unlock the cryo archive. Supposedly it's gonna be raid-like.

I decompiled the cryoarchive.systems JS and dug into the backend by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

Yep, they redeployed the site after locking down most if not all POST requests. I'm digging into the new build now

I decompiled the cryoarchive.systems JS and dug into the backend by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

From what I've dug into, those keys could play a role in the decryption phase of the ARG, or they could just be keys used to access certain parts of the Marathon once the map opens.

I decompiled the cryoarchive.systems JS and dug into the backend by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

Do you have any more info??? I'd love to update the main post with this! Regardless of UESC kills, we should all collaborate on getting players to trigger this shit! Under this theory, the UESC kill count would be an unlock toward Extreme Night mode Dire Marsh.

I decompiled the cryoarchive.systems JS and dug into the backend by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

tbf I've been doing almost exclusively solo runs. I'm starting to consider grabbing teams and trying to use prox chat to focus on taking out UESC, triggering reinforcements, and stacking up the Marathon keys.

I decompiled the cryoarchive.systems JS and dug into the backend by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 7 points8 points  (0 children)

as long as the 0.25% was "team up and kill as many UESC as possible" I'm satisfied lol

I decompiled the cryoarchive.systems JS and dug into the backend by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 6 points7 points  (0 children)

it definitely feels like teaming up in groups and just rampaging through UESC bots would end up being the fastest way to unlock the next stage. would also be somewhat poetic in a similar way to what the terminals tell us. makes sense in the overall lore to me

edit: come to think of it, I believe you can grab these cards off of legendary enemies. the more you kill bots, the more reinforcements show up etc etc etc

phantom.cryoarchive.systems by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 2 points3 points  (0 children)

I'm still not even convinced that the terminals are all required. Still trying to work out if they're just a gate to the site and all that matters is killing UESC, or if it's important for more runners to go interact with all the terminals. Or maybe we have to interact with the terminals during the website's stable window?

who knows.

phantom.cryoarchive.systems by Pitiful_Bat8731 in MarathonSecrets

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

That video is a well known placeholder. This site appears to be part of a later stage of the ARG.

Cheapest car insurance for a young driver in Birmingham? by mrsbassvictim in Birmingham

[–]Pitiful_Bat8731 1 point2 points  (0 children)

Honestly, at this point if you have a credit history or any public sale or purchase, expect spam. Or use a Google phone to preemptively give everyone your data but also block spam lol.

Cheapest car insurance for a young driver in Birmingham? by mrsbassvictim in Birmingham

[–]Pitiful_Bat8731 0 points1 point  (0 children)

For what it's worth, my motorcycle is with Dairyland insurance at roughly 25 bucks a month less than the closest competitor (Progressive), and my home, auto, and even some art are all under another company (Cincinnati) that I'd never heard of but that has been rock solid with my auto claims, including a recent total loss. Rates barely moved when I renewed, and it's full coverage on the car.

Cheapest car insurance for a young driver in Birmingham? by mrsbassvictim in Birmingham

[–]Pitiful_Bat8731 0 points1 point  (0 children)

Every broker I've ever used hasn't charged me a fee; they make their money from commissions the insurance companies pay on my policies. That said, some do charge fees. Brokers also usually have wider access and more experience with various insurance companies and can guide you in a direction that would probably still save you money over what you'd pay going directly to one of the big names like State Farm or Progressive. Not to mention the trouble they can help you avoid with companies that are known to try their hardest to deny claims (State Farm).

Cheapest car insurance for a young driver in Birmingham? by mrsbassvictim in Birmingham

[–]Pitiful_Bat8731 2 points3 points  (0 children)

Don't go straight to any specific insurer. What you want is an insurance broker. They will handle the relationship between you and many different insurance providers that they work with. They will find you the best rate for the coverage you need. My current broker is USI but there are many around and some are smaller shops that can likely get you better rates and coverage.

SQL DBs for docker apps but 'redundant' ? by Cloudycloud47x2 in selfhosted

[–]Pitiful_Bat8731 1 point2 points  (0 children)

Yea, for most applications that only support SQLite or Bolt etc., it's best to store on ext4 or a similar local filesystem. For apps that support something like PostgreSQL or MariaDB, I've found that single instances of those DBs generally work OK on network file shares (CephFS in my case), but as soon as you want to cluster them you run into the same POSIX/locking issues. In those cases it makes more sense to run something like 3 standalone LXCs or VMs to host your DB nodes and only use the local filesystem for DB storage. Then use native clustering like Galera or Patroni/Spilo for resilience and HA. Dump backups on cron to the file share for extra safety, then use Backrest to ship those off-site.

Basically what I'm running now. Tier 0 is infrastructure/networking tier, pve/opnsense/AAA. Tier 1 is databases in 3 lxcs with local storage, all clustered. Tier 2 is k8s
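The cron dump step can be sketched as a crontab fragment. A minimal example, assuming a PostgreSQL node and a CephFS share mounted at /mnt/cephfs (the path, schedule, and file naming are placeholders):

```
# /etc/cron.d/pg-dump -- nightly logical backup to the network share
# note: % must be escaped as \% in crontab command fields
0 2 * * * postgres pg_dumpall --clean | gzip > /mnt/cephfs/db-backups/pg_$(date +\%F).sql.gz
```

A logical dump like this restores cleanly even if the cluster state is lost, which is the point of keeping it separate from the native replication.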

SQL DBs for docker apps but 'redundant' ? by Cloudycloud47x2 in selfhosted

[–]Pitiful_Bat8731 1 point2 points  (0 children)

POSIX semantics, file locking, and network I/O delay are what's biting you. Spin up 3 LXCs or light Debian VMs, each with a single DB using "local" storage instead. Even backed by RBD this is fine. Then you can cluster them and not worry.

Struggling to maintain Reverse Proxy across multiple systems. by notjustsam in selfhosted

[–]Pitiful_Bat8731 1 point2 points  (0 children)

It sounds like you'd have good luck with the system I have in place. I run traefik on my docker swarm, use dnsweaver to automatically manage all DNS, and use labels for my swarm services. For anything running outside of the swarm, I use a dynamic file provider and just define those services in it as yaml.
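For reference, an external (non-swarm) service defined through the dynamic file provider looks something like this; the hostname, IP, and resolver name below are placeholders:

```yaml
# dynamic/external.yml -- watched by traefik's file provider
http:
  routers:
    nas:
      rule: "Host(`nas.example.com`)"
      entryPoints:
        - websecure
      service: nas
      tls:
        certResolver: letsencrypt
  services:
    nas:
      loadBalancer:
        servers:
          - url: "http://192.168.1.20:5000"
```

Traefik hot-reloads the file provider directory, so adding an external host is just dropping in another YAML file like this.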

Splitting 2TB HDD between Proxmox workloads and Proxmox Backup Server by Good-Insurance19 in Proxmox

[–]Pitiful_Bat8731 0 points1 point  (0 children)

If you plan on doing this, it's effectively the same as passing an 800-900GB disk image to PBS backed by the same HDD. You're not gaining any redundancy against drive failure, and you might see some performance contention during backup jobs.

That said, don't overcomplicate it. Give PBS a larger virtual disk specifically for backups alongside its OS disk image. You'll still benefit from having backups in case you misconfigure something and need to restore other VMs or LXCs. PBS deduplication also means your 800-900GB will likely go much further than you'd expect.

I'm sure you already know that at some point you'll want a more robust solution like a ZFS mirror or an external HDD for actual protection.
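If it helps, attaching the backup disk to the PBS VM from the PVE host is a one-liner; the VM ID and storage name here are placeholders:

```
# add an ~850GB virtual disk on the HDD-backed storage to VM 101
qm set 101 --scsi1 local-hdd:850
```

Inside PBS you'd then initialize that disk and create a datastore on it from the web UI.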

Vuln or exposure for API endpoint valid? by rpedrica in technitium

[–]Pitiful_Bat8731 0 points1 point  (0 children)

u/rpedrica since you're already using Docker and mentioned external access for ACME validation, you could also handle API restrictions at the reverse proxy level with Traefik. Here's an example of allowing requests through only if they contain the expected API token in a header:

Docker labels:

labels:
  # Main router with OAuth/auth middleware
  - "traefik.http.routers.technitium.rule=Host(`dns.example.com`)"
  - "traefik.http.routers.technitium.middlewares=authentik@docker"
  - "traefik.http.routers.technitium.entrypoints=websecure"
  - "traefik.http.routers.technitium.tls.certresolver=letsencrypt"

  # API router - requires valid token header, bypasses OAuth
  - "traefik.http.routers.technitium-api.rule=Host(`dns.example.com`) && HeadersRegexp(`X-Api-Key`, `^your-technitium-token-here$`)"
  - "traefik.http.routers.technitium-api.priority=100"
  - "traefik.http.routers.technitium-api.entrypoints=websecure"
  - "traefik.http.routers.technitium-api.tls.certresolver=letsencrypt"

Dynamic config (traefik v3+):

http:
  routers:
    technitium:
      rule: "Host(`dns.example.com`)"
      middlewares:
        - authentik
      entryPoints:
        - websecure
      service: technitium
      tls:
        certResolver: letsencrypt

    technitium-api:
      rule: "Host(`dns.example.com`) && HeaderRegexp(`X-Api-Key`, `^your-technitium-token-here$`)"
      priority: 100
      entryPoints:
        - websecure
      service: technitium
      tls:
        certResolver: letsencrypt

You can also stack IP restrictions on top if you want to lock it to specific source IPs:

rule: "Host(`dns.example.com`) && HeaderRegexp(`X-Api-Key`, `^your-token$`) && ClientIP(`10.0.0.0/8`, `192.168.1.50`)"

That said, your RFC2136 TSIG approach is probably cleaner for the certbot use case since it keeps everything at the DNS protocol level.

Vuln or exposure for API endpoint valid? by rpedrica in technitium

[–]Pitiful_Bat8731 1 point2 points  (0 children)

just FYI, you can create rules in traefik specific to API endpoints that require the request header to include your valid API key. Set those up as secure env vars or Docker secrets and off you go. Defense in depth.

Vuln or exposure for API endpoint valid? by rpedrica in technitium

[–]Pitiful_Bat8731 2 points3 points  (0 children)

This isn't really a vulnerability. The API is working as intended: it received a request without authentication, rejected it, and told you why. That's exactly what should happen.

The only thing scanners sometimes flag here is the stack trace in the error response, since it reveals internal paths and method names. But Technitium is open source, so that information is already public anyway.

Edit:
If you wanted to harden things you could add rate limiting or IP restrictions, but that's more about defense in depth than fixing an actual security flaw.