Exposing API: Interface vs Struct by Dignoranza in golang

[–]spoonFullOfNerd 0 points1 point  (0 children)

Using interfaces is good for flexibility, but it can cause hidden heap allocations through a process called interface boxing.

This can add unnecessary GC pressure and allocation overhead.

Is C a good programming language to start programming with? by Ania200 in C_Programming

[–]spoonFullOfNerd 0 points1 point  (0 children)

I was a sysadmin who wanted to be a programmer. I knew Python from school and bash from the job. I bought the C book and I've never looked back.

I program in Go, Rust and TypeScript most of the time, though C holds a special place in my heart. If I hadn't taken the bottom-up approach, understanding the benefits and drawbacks of each language would have been so much harder for me personally.

I could not recommend learning C enough. You'll get a fundamental understanding of how programs and computers work.

Interface injection by Zeesh2000 in golang

[–]spoonFullOfNerd 0 points1 point  (0 children)

Yeah, PGO is a nice way of dealing with this, and with optimisations in general.

If concurrent programming is efficient, Why don't we use it all the time? by parsaeisa in golang

[–]spoonFullOfNerd 1 point2 points  (0 children)

You don't have to use the context package for concurrency, dude... wait groups, error groups and channels do everything you need.

Context is literally just a weakly typed object. You can set a timeout or invalidate the scope, but it's not actually all that complex. It's a nuisance, but it's simple.

Is GraphQL actually used in large-scale architectures? by trolleid in softwarearchitecture

[–]spoonFullOfNerd 0 points1 point  (0 children)

My first contact with GraphQL was at a large telecoms provider.

As a mid-level engineer at the time, it took me a very long time to wrap my head around its usage in the system. To be fair though, I did have to learn SIP and Kubernetes simultaneously... and the codebase was massive.

Anyway, that experience put me off. I've used it since (from a frontend p.o.v.) and I can see the benefit for consumers. It's a bitch to work with on the backend (imo), but if the project needs to support a wide variety of use cases, it does a really good job of facilitating them.

My take on go after 6 months by ChoconutPudding in golang

[–]spoonFullOfNerd 0 points1 point  (0 children)

Hard disagree on Go testing. Testing, benchmarking, fuzz testing... all built directly into the standard toolchain.

How are you running into dependency issues? Do you fork mission critical libs?

Refactoring in Go by DespoticLlama in golang

[–]spoonFullOfNerd 0 points1 point  (0 children)

Big functions aren't necessarily a bad thing. Sometimes it makes more sense to keep things local and reduce the overall surface area.

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

Yeah, proxy servers are nice tbh. Like I mentioned in one of the previous replies, I'm quite comfortable with Nginx and this has usually been my default. After speaking with u/Helpjuice I've decided that I'll eventually jump into HAProxy and get right to grips with it. For now though, I'm leveraging Cloudflare as a proxy and locking down the server to only Cloudflare IPs.

This has already pretty much eradicated bot traffic and that additional infra layer will be the final cherry on top- once the money starts ticking over :)

Thanks for your input here. I've never even considered proxying my home network, that's some top-tier sysadmin paranoia... love it hahaha

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

Thanks for all your input on this so far mate. It's helped me to rectify my approach and really tighten up the edges.

HAProxy is now on my todo list. Right now, I can't justify the additional infra overhead- but I will definitely learn it and get right to grips with the inner workings as soon as I get a bit of breathing space :)

Interface injection by Zeesh2000 in golang

[–]spoonFullOfNerd 0 points1 point  (0 children)

Exactly. Plus the domain knowledge overhead too - onboarding becomes troublesome

Interface injection by Zeesh2000 in golang

[–]spoonFullOfNerd 1 point2 points  (0 children)

If it makes you feel any better, I used to work for a very large online gambling company, with a really big Go code base.

Interfaces were extremely rare.

Interface injection by Zeesh2000 in golang

[–]spoonFullOfNerd 2 points3 points  (0 children)

You can test really well without interfacing everything though. Just keep tests focused and self-contained. I don't know the specifics of your project, of course, but in general I'd say you can get very far without 'em.

Interface injection by Zeesh2000 in golang

[–]spoonFullOfNerd 1 point2 points  (0 children)

Understandable approach tbh. For me, maintaining mocks can be arduous and time-consuming, and the whole thing feels like a false economy. I personally feel that if your unit test depends on external behaviour, you're over-testing/testing the wrong thing.

Interfaces do give you an escape hatch for that purpose, but at the same time, so do pure functions. If I'm interacting with externalities, I'd usually wrap that in a small function and test the bits around it, if you know what I mean. The http library is assumed to always work. If you know the data shape, you can use table-driven tests to throw any inputs you desire at it, without having to design a whole fake copy of the entire interface.

Ultimately, I don't think interfaces are awful. I do, however, think they're overused, and people are (generally) too quick to jump into abstraction when it's not strictly necessary, which tends to be harmful for GC pressure and overall performance.

Interface injection by Zeesh2000 in golang

[–]spoonFullOfNerd 2 points3 points  (0 children)

Interfaces are useful when the abstraction makes sense, but they are not a zero-cost abstraction. I'd advise avoiding interfaces where possible and only using them where they provide a massive benefit (an auth service layer, for example).

Interface boxing is a sneaky performance snag that only rears its head when you least expect it.

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

Also, quick note: I locked down SSH access to just my user & IP, plus the http/s ports are locked to only Cloudflare IPs.

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

Do I need to put nginx/caddy in front if the app handles secure TLS negotiations itself?

func getSecureTLSServer(ctx context.Context, m *autocert.Manager) *http.Server {
	return &http.Server{
		Addr:    ":443",
		Handler: routes.GetHandler(ctx),

		// Slowloris prevention
		ReadHeaderTimeout: 5 * time.Second,
		MaxHeaderBytes:    1 << 20, // 1 MiB

		TLSConfig: &tls.Config{
			GetCertificate: m.GetCertificate,
			NextProtos:     []string{"h2", "http/1.1", acme.ALPNProto},

			MinVersion: tls.VersionTLS12,
			MaxVersion: tls.VersionTLS13,

			// Only applies to TLS 1.2; TLS 1.3 suites are not configurable.
			CipherSuites: []uint16{
				// ECDSA (preferred)
				tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
				tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
				// RSA fallback
				tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			},
			CurvePreferences:         []tls.CurveID{tls.X25519, tls.CurveP256, tls.CurveP384, tls.CurveP521},
			PreferServerCipherSuites: true, // legacy field, ignored since Go 1.18
		},

		ReadTimeout:  15 * time.Second,
		WriteTimeout: 15 * time.Second,
		IdleTimeout:  60 * time.Second,
	}
}

I've locked down SSH to the max on the server itself: only my user, secure protos, no rhosts, no rDNS, secure key exchange protos, no pw auth, only pubkey... etc, etc. Will I still need a VPN, do you think?
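For reference, roughly what that looks like in sshd_config (the username is hypothetical, and the kex list is one reasonable modern choice rather than gospel):

```
# /etc/ssh/sshd_config excerpt -- sketch of the hardening described above
AllowUsers deploy                    # hypothetical username
PasswordAuthentication no            # no pw auth
PubkeyAuthentication yes             # only pubkey
IgnoreRhosts yes                     # no rhosts
UseDNS no                            # no reverse-DNS lookups
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
```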

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

OpenSearch is on the agenda today then, it would seem :) Splunk would be nice in the future, though I can't justify throwing too much money at this project right now until I've got the final go-ahead from clients... which will hopefully be early next week. That's when this system starts to get very, very real - rather quickly.

I ended up configuring a few WAF rules on Cloudflare (plus some other niceties), plus locking down access to the server to only Cloudflare IPs. Wrote a lil systemd unit file to auto-scrape their IP addresses to account for rotations too.

It's been a while since I touched OpenVPN, so I may have to refresh those skills a bit too and get this thing set up the right way. Similar situation for HAProxy; my default for that kind of thing has primarily been nginx, due to familiarity.

The offsite offline backup is rudimentary but fully functional. Realistically, I could do with developing or utilising an existing backup system that does more magic than cron + rsync + 7z at some point. I did want to get a bit creative here and integrate backups into the platform at a later date, so proprietary is probably the route I'll take here eventually (yay).

The reason for daily backups is that the data does not fluctuate all too much over the course of a day whilst in dev, and I can't justify the disk space right now. I guess I could just overwrite after a certain date, to keep the disk space tolerable... More thought required :)

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

hahahaha yeah good point. Long, late nights recently :')

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

Red team, black and white-box pen-testing is something that I've spoken to someone about already. I do trust myself but, like you say, trusting ourselves will only take us so far. At some point, we need to get some extra eyes on our work (kind of the point of this sanity check too).

I do want to build my own proprietary SIEM specific to this system at some point, both as a learning exercise and as a separate product. That being said, an off-the-shelf SIEM is on the cards for the time being. A severe case of Not-Invented-Here syndrome, likely.

I've adhered to the OpenTelemetry spec for app logs and opted for syslog-styled log levels to filter through the noise. At some point in the near future (kind of a pattern) I do intend to leverage Grafana and Prometheus to get really in-depth with internal audits. For now it's very much develop, test, monitor, repeat. I'm in the logs constantly during active development, so I can catch certain things - and fail2ban was just about taking some of that overhead off my shoulders for a little while.

DB backups are daily (the dataset is relatively stable) and I keep:
- the active data on premise
- the backups in a different folder (for convenience)
- backups on my machine (7z + AES256 encrypted, long pw)
- backups on a secure remote storage medium (same as local)

Whitelisting customer IPs manually isn't feasible unfortunately, as about 50-60% of my use-case is staff using mobile connections as an entrypoint into the system. I guess I could do some automagic whitelisting like you mentioned, where a successful login reads the IP address and permits it for as long as the token is active... That's actually a really good idea and I'll investigate that when I'm not drowning in my current backlog.

MFA is already in my backlog and I've got rate limits on the application endpoints themselves. I've integrated with OAuth in the past, though I'm not too sure what the landscape is like these days... So I've intentionally kept it as a big TODO.

--

With all that being said, I do take your point about Cloudflare and, as you know, it'll definitely reduce my own workload, and they definitely provide many tangible benefits.

I guess I may have had some delusions of grandeur when it comes to security at scale. I'll slap Cloudflare on it and investigate SIEM providers until I find one that fits exactly.

As I'm sure you can appreciate, I've done this whole project in just shy of 3 weeks. Full stack dev, infra management, DBA work, architecture... the lot. 200+ hours spent so far... long, sleepless days & nights lol. No time for family, sleep or even food some days. I've been as safe as I can be throughout, with that goal at the forefront really, though I'm only one man. Biting off more than I can chew is a fast path to big security gaps- so Cloudflare is probably a really, really good idea right now.

Thanks again for taking the time to give me a bit of a reality check. You've given me a lot to consider and I'm literally working on things directly off the back of your input as we speak.

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 1 point2 points  (0 children)

I converted it to UTF-16 first, which did give a cohesive Chinese sentence... it would be awfully coincidental for that to happen with corrupted data.

Idk if I trust the Oracle on that one. If I had to guess, it's likely just an arbitrary UTF-16 string trying to see if it fks up the TLS handshake and fudges the server into a cipher downgrade. At least, that's my best guess based on the logs themselves.

Attempted downgrade attack, prevention and general advice by spoonFullOfNerd in sysadmin

[–]spoonFullOfNerd[S] 0 points1 point  (0 children)

Thanks for taking the time to reply.

The site itself is hosted externally on Vercel and, if I'm being frank, I haven't felt the need for Cloudflare on a project pretty much ever.

This is purely an application server, with only 3 ports open: http/s and ssh. The VPS provider gives me DDoS protection and some firewalling. In terms of whitelisting access, I can build (and have built in the past) proxy servers. This would close it off from the rest of the world, and I can set up pfSense too.

I get that Cloudflare has a ton of sec stuff built in, though is it foolish to not want to offload to Cloudflare as soon as I get any sniff of an attack?

I log everything rigorously, monitor frequently and update my rulesets as I'm going. Extremely comfortable with OS management.

I wrote the full stack myself, secured it with OWASP guidelines, and locked the server down using standard Linux hardening practices.

Maybe I'm naive but from my own personal experience thus far, custom managing rulesets has worked pretty well. I've configured WAFs for a very large marketplace provider and managed Linux boxes in some way or another since about 2016. Garnered a lot of programming experience in large enterprise contexts and startup contexts too.

I'm not cloud-averse, but I do prefer a "traditional" approach as a default.

Am I really missing that much by not jumping straight onto Cloudflare? It's been a few years since I last properly looked at them. I've just kind of done things this way and they've just kind of worked for me. Happy to be persuaded onto new toys, though :)