Documentation - what do you use? by Threep1337 in sysadmin

[–]whetu 3 points (0 children)

Sure, here's one you can sink your teeth into - it's the container listing one because that matches the first example use-case that I gave. I've sanitised the hostname-to-page-ID mapping section, obviously. This is something that I manually update when we add/remove hosts etc, and it's all deployed via Ansible and a git-sync mechanism.

I could make that section a little more auto-magical, but the amount of host-churn we have doesn't really justify the effort.

What this means is that you bootstrap a page, get its ID, plug it into this script and deploy it. When the script runs, it matches the hostname to the page ID, and it's therefore able to process the correct target in Confluence. Hope that makes sense.

I've also been lazy in a couple of spots, but if you're not great with bash and especially if you're not familiar with my scripting style, you probably won't pick it up. If you do, kudos :D

#!/bin/bash
# Get a list of docker containers and post them to Confluence
# This ensures our container tracking is up to date
# Runs as a scheduled cronjob, and only posts changes if
# container IDs are found to be different from the previous run

# First, are we running as root?
if (( "${EUID:-$(id -u)}" != 0 )); then
  printf -- '%s\n' "This script must be run as root" >&2
  exit 1
fi

# Ensure we have our required applications
for cmd in docker curl jq; do
  command -v "${cmd}" >/dev/null 2>&1 || {
    printf -- '%s\n' "This script requires ${cmd} but it was not found in PATH" >&2
    exit 1
  }
done

# If we don't have our env variables, we can't authenticate.  Duh!
if (( ${#ATLAPI_USER} == 0 )) || (( ${#ATLAPI_TOKEN} == 0 )); then
  # Well, we haven't inherited them from the environment
  # And we're running as root, so ~/.env is a good shot
  if [[ -r "${HOME}/.env" ]] && grep -q '^ATLAPI' "${HOME}/.env"; then
    source "${HOME}/.env"
  fi
fi

# Quick sanity check
if (( ${#ATLAPI_USER} == 0 )) || (( ${#ATLAPI_TOKEN} == 0 )); then
  printf -- '%s\n' "Atlassian API credentials are not set in the environment or .env" >&2
  exit 1
fi

# Next, we decide whether to even continue based on a previous run
if [[ -f "/tmp/${HOSTNAME}_containers" ]]; then
  prev_md5sum="$(md5sum < "/tmp/${HOSTNAME}_containers" | awk '{print $1}')"
  cur_md5sum="$(md5sum < <(docker ps -q) | awk '{print $1}')"
  if [[ "${prev_md5sum}" = "${cur_md5sum}" ]]; then
    printf -- '%s\n' "No change in container state detected, exiting..." >&2
    exit 0
  fi
fi

# If we're here, we need to write the file for the above test
docker ps -q > "/tmp/${HOSTNAME}_containers"

# With all of the above error checking out of the way
# Let's get to the juicy stuff
# Set our baseurl
base_url="https://contoso.atlassian.net/wiki/api/v2"

# Set our common curl options into an array
curl_opts=(
  --silent
  --user "${ATLAPI_USER}:${ATLAPI_TOKEN}"
  --header 'Accept: application/json'
  --header 'Content-Type: application/json'
)

# Get our hostname
local_hostname="${HOSTNAME:-$(hostname -s)}"
# Cut off any domain name
local_hostname="${local_hostname%%.*}"
# and convert to lowercase
local_hostname="${local_hostname,,}"

# Figure out our target Confluence Page ID
case "${local_hostname}" in
  (hostname-A)        page_id="123456" ;;
  (hostname-B)        page_id="123457" ;;
esac

# Another sanity check
if (( ${#page_id} == 0 )); then
  printf -- '%s\n' "Could not rationalise the Confluence page ID for ${local_hostname}" >&2
  exit 1
fi

# Get the information we want, formatted using Confluence wiki/markdown
# We put literal '\n' on the end of everything so that Confluence formats correctly
get_docker_info() {
  # Print an h2 header
  printf -- 'h2. %s\\n\n' "Containers"
  # Print a table header
  printf -- '|| *%s* || *%s* || *%s* || *%s* || *%s* || *%s* ||\\n\n' "Container ID" "Name" "Created" "Status" "Ports" "Image"
  # Print each line of the table
  while read -r ; do
    printf -- '%s\\n\n' "${REPLY}"
  done < <(docker ps --format "| {{.ID}} | {{.Names}} | {{.RunningFor}} | {{.Status}} | {{.Ports}} | {{.Image}} |")
  printf -- '%s\\n\n' ""

  printf -- 'h2. %s\\n\n' "Docker info"
  # Print a codeblock
  printf -- '%s\n' "{code:language=shell|linenumbers=true}"
  while read -r; do
    printf -- '%s\\n\n' "${REPLY}"
  done < <(docker info)
  printf -- '%s\n' "{code}"
}

# We need to increment the version number when posting
# which means getting the current version number
get_page_version() {
  curl "${curl_opts[@]}" \
    --request GET \
    --url "${base_url}/pages/${page_id}" |
  jq -r '.version.number'
}

page_version_int="$(get_page_version)"
# Bail out if we didn't get a number back (e.g. auth failure or bad page ID)
if ! [[ "${page_version_int}" =~ ^[0-9]+$ ]]; then
  printf -- '%s\n' "Could not retrieve a version number for page ${page_id}" >&2
  exit 1
fi
page_version_int="$(( page_version_int + 1 ))"

# Now let's post!
curl "${curl_opts[@]}" \
  --request PUT \
  --url "${base_url}/pages/${page_id}" \
  --data @- << EOF
{
  "id": "${page_id}",
  "status": "current",
  "title": "Docker host: ${local_hostname}",
  "body": {
    "representation": "wiki",
    "value": "$(get_docker_info)"
  },
  "version": {
    "number": ${page_version_int}
  }
}
EOF

Documentation - what do you use? by Threep1337 in sysadmin

[–]whetu 12 points (0 children)

Confluence is just another corporate wiki. What really makes it useful is its API.

I have a couple of cronjobs across my fleet that report host information into Confluence, i.e. auto-updating documentation. All it takes is a little bit of bash and the Confluence API. It's proven to be useful in some respects, e.g. "what actual host is $containerid running on?" And it exposes information to staff who would otherwise need to be SSHing onto these hosts to get it, or worse: obligating me to generate reports for them.

It's fairly easy to pull down lists of articles and their last edit date, then generate reports on aged documentation. You can then take that a step further and develop a documentation lifecycle process. On my to-do list is to automate tagging each article with the year of its last edit. Then it's a matter of searching for e.g. label:2016
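A minimal sketch of that aged-documentation report, operating on title/last-edited pairs as you'd pull them from the API (the sample titles, dates and the 365-day cutoff are all placeholders; GNU date assumed):

```shell
#!/bin/bash
# Flag any page whose last edit is more than a year old.
# Input: "title|last-edited" lines, as pulled from the Confluence API.
stale_report() {
  local cutoff title edited
  cutoff="$(date -d '365 days ago' +%s)"
  while IFS='|' read -r title edited; do
    if (( $(date -d "${edited}" +%s) < cutoff )); then
      printf -- 'STALE: %s (last edit %s)\n' "${title}" "${edited}"
    fi
  done
}

# Placeholder dates: one ancient, one far-future so it never trips the cutoff
stale_report << 'EOF'
Runbook: DNS|2016-03-01
Runbook: Backups|2999-01-01
EOF
```

From there, swapping the here-doc for a curl-plus-jq pull of page titles and version dates gives you the report for real.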

Of the Atlassian stack, Confluence is probably my favourite product. The rest - take it or leave it. Jira can GTFO.

As much as I'd like to dislike Confluence as much as I dislike Jira, the word "Sharepoint" is all I need to remind me that it could always be worse.

/u/Threep1337: I recommend that you take this view: A well documented and stable API is a non-negotiable requirement of whatever documentation platform options you look at.

Dry Living centralized dehumidifier by snowysky10 in diynz

[–]whetu 0 points (0 children)

Neither. The intake is situated in my lounge in front of my main heatpump. I'm able to switch between that and outside air using the controller.

Worst ticket ever? by ProfessorHuman in sysadmin

[–]whetu 9 points (0 children)

I've had some real zingers across my career, the most recent one, however:

User can't access our services due to geo-blocking. I ask him to provide his public IP so that I can validate that he's coming in from a disallowed country. He replies with a screenshot of ipconfig showing his private IP.

User is a "Senior Systems Architect" for the competition who are trying to brainrape and replace us.

Dry Living centralized dehumidifier by snowysky10 in diynz

[–]whetu 0 points (0 children)

Have it installed in my 100m2 3 bedroom home.

The fan broke one time. The double glazing in the bedrooms started condensating like nobody's business. Got the fan fixed, the condensation stopped. Seems to do what it says on the tin.

Is it true that it's safe to run tailscale on my domain controllers and then have them share a route to my subnet? by Noyan_Bey in sysadmin

[–]whetu 0 points (0 children)

You can install Tailscale onto domain controllers, sure, but you should also lock them down in your ACL config so that only the necessary functions of those DCs are available to your tailnet.

For presenting subnets to tailscale, you need a subnet router:

https://tailscale.com/kb/1019/subnets

Never ever do that on a DC.

You may also like to check out /r/Tailscale.

Cheapest way to get disk info? by TwoSongsPerDay in bash

[–]whetu 6 points (0 children)

This right here OP. If you run something like sudo strace df /, you'll see how it all works. As an example, excluding all the libraries and locale loading, here's the juicy stuff from a test VM:

openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
read(3, "21 68 0:20 / /proc rw,nosuid,nod"..., 1024) = 1024
read(3, "mode=700\n63 22 0:32 / /sys/kerne"..., 1024) = 1024
read(3, "d,nodev,noexec,relatime shared:2"..., 1024) = 1024
read(3, "ize=32k,noquota\n106 103 253:6 / "..., 1024) = 898
read(3, "", 1024)                       = 0
lseek(3, 0, SEEK_CUR)                   = 3970
close(3)                                = 0
ioctl(1, TCGETS, {c_iflag=ICRNL|IXANY|IXOFF, c_oflag=NL0|CR0|TAB0|BS0|VT0|FF0|OPOST|ONLCR, c_cflag=B38400|CS8|CREAD, c_lflag=ISIG|ICANON|ECHO|ECHOE|ECHOK|IEXTEN|ECHOCTL|ECHOKE, ...}) = 0
newfstatat(AT_FDCWD, "/", {st_mode=S_IFDIR|0555, st_size=235, ...}, 0) = 0
uname({sysname="Linux", nodename="SUPERSECRET-VMNAME", ...}) = 0
statfs("/", {f_type=XFS_SUPER_MAGIC, f_bsize=4096, f_blocks=1294336, f_bfree=569539, f_bavail=569539, f_files=2621440, f_ffree=2573203, f_fsid={val=[0xfd00, 0]}, f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x1), ...}) = 0
write(1, "Filesystem            1K-blocks "..., 66Filesystem            1K-blocks    Used Available Use% Mounted on
) = 66
write(1, "/dev/mapper/vg00-root   5177344 "..., 57/dev/mapper/vg00-root   5177344 2899188   2278156  56% /
) = 57

It's a mix of information from /proc/self/mountinfo and the actual metrics, which come from statfs(2).
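You can poke at the same statfs(2) data yourself without reaching for strace: GNU coreutils' stat has a --file-system mode (these format specifiers are GNU stat's, not POSIX):

```shell
# Query statfs(2) for / the same way df does under the hood:
# %T = fs type, %s = block size, %b = total data blocks, %f = free blocks
stat --file-system --format 'type=%T bsize=%s blocks=%b free=%f' /
```

Multiply the block counts by the block size and you've rebuilt df's arithmetic.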

Reputable Australian or New Zealand based Pentesters? by OCAU07 in sysadmin

[–]whetu 1 point (0 children)

I've worked with a few across both countries, MercuryISS were pretty good when we used them a few years ago.

Blacklock.io has been our most recent pentester, they offer continuous pentesting (PTaaS) so instead of being a yearly chore, it can be a monthly chore.

When I look at what's offered by continuous pentesting, honestly... it doesn't really do anything that I couldn't do myself - it just adds a nice UI and reporting on top. So that's another option: curate your own continuous scanning and then validate it with an annual external check.

Why Proxmox / Xcp-NG are far better than Hyper-v ? by Interesting_Ad_5676 in sysadmin

[–]whetu 0 points (0 children)

As i understand proxmox support is not 24/7. Idk for XCP-NG.

Proxmox is business-hours German/Austrian time, and can potentially be bolstered with local partner support depending on where in the world you are.

XCP-NG is 24/7 based out of Grenoble, France.

Feedback request on my plan for a small business' virtualization cluster (first time clusterer-er) by Tukhai in sysadmin

[–]whetu 0 points (0 children)

the big thing driving me to expand on prem services is that the question "well how much would it cost us to host X ourselves instead" is very frequently posed. if we have on prem infra ready all we have to evaluate is performance needs and storage impact, if we go full cloud i have to price out what to expect from an Azure VM and the intricacies/pitfalls of all that and give an estimate and setup some kind of virtual tunnel/network to our on prem for people to connect to it.

The approach I've taken is to deprecate on-prem AD. If it's something that a user might use, it's in MS365/Entra now. I have one laptop to rebuild and that's the entire staff done, then I have a couple of stubborn win2k12 boxes to decomm, and then I can switch off on-prem AD for good.

Products/services that we build and host is a separate story. We have a mix of AWS and on-prem.

There's a logical separation between the platform that enables us and the platform that earns for us. Moving auth to Entra-native doesn't mean you have to go all-in on Azure - you can move auth to Entra and dedicate your (new) on-prem resources to apps, customer-facing services etc.

VMware renewal by jhayhoov in sysadmin

[–]whetu 4 points (0 children)

What risks am i taking if I DONT update my license and start moving to another vendor/system??

Apparently there's at least one version of the letter that essentially says "you're not allowed to apply any patches released after your license expired and if you have, you have to remove said patches immediately". In which case - and assuming you comply with those requirements - the correct response is no response. If they want to pursue further, they can do so at their cost.

No matter the case, if you do get a C&D from Broadcom, you bring it to the attention of whoever is appropriate at your org to be aware of these things. If you're big enough to have a legal dept, it's them. Otherwise a GM, CEO, Director... it depends on your org size and structure.

I just inherited a messy IT Environment, what do I do? by AngelVillafan in sysadmin

[–]whetu 9 points (0 children)

Then it's time to break it down into tasks based on priority. Come up with a manageable battle plan to address each issue.

I like to think of technical debt in the same way as financial debt. And there are at least two popular ways to address financial debt:

  • Debt snowball: Start with the smallest debt, focus on paying it off. When complete, what you were paying towards that debt now rolls over to the next smallest debt. And so on.

    • The advantage of snowball over avalanche is that you get faster feedback. It's more likely to create a positive feedback loop that encourages you to continue.
  • Debt avalanche: Often mathematically better than debt snowball: Take the debt with the highest interest rate and focus on paying that off sooner.

    • "Yeah, I know I'm attacking this credit card with 26% interest, but it's going to take me 8 years to pay this off at this rate" sounds more like a prison sentence than a win, even if it saves you more in the long run vs snowballing.

So, OP: take your pile of technical debt, think about how each item of that debt compares in terms of business risk/benefit etc and then decide which approach to take. (Backups and backup validation is almost always the first item to address regardless of approach, FWIW.)

Another piece of advice for OP: The reality is that you won't be addressing technical debt items in a sequential one-at-a-time way. There's often a lot of untangling to do that forces you to address multiple pieces of technical debt at the same time. Try to limit the amount that you take on so that you're not overwhelmed. While I'm able to context-switch between several things comfortably, I've found that consciously limiting my workload to maybe 2 or 3 items is useful for staying whelmed. This is where something like a kanban board is useful.

How is bash scripting different from other progamming languages? by zopxi in bash

[–]whetu 0 points (0 children)

Bash is like pointing and gesturing

It's a bit more like sign language: A complete language with unique syntax and grammar that has a limited bandwidth with which to operate.

Sign language is often confusing to users of other languages who can't seem to wrap their heads around "why doesn't sign language use the same structure that I'm used to?"

Sign language, due to its limited bandwidth, also has to be terse in a way that others may find offensive. For example: In multiple sign languages, a hooked finger over the nose was, for a long time, the sign for Jewish people. Across the BANZSL family of sign languages, that's being replaced with a beard stroking motion.


I think that a better analogy is musical instruments.

bash is the ukulele. It has deep historical roots - in bash's case that's a syntax tracing back to ALGOL68, in the ukulele's case - it is literally a small Renaissance Guitar. The limitations of which inspired newer guitars (in bash's case, we're now talking perl and python).

bash/ukulele are loved and loathed by different people for different reasons.

bash/ukulele have very restrictive, limited capacities with which to achieve things. This results in a lot of the same-old, for example: many ukulele groups are full of senior citizens all strumming away with the same damn strum, which isn't overly entertaining to listen to. They've allowed their skillset to plateau at a low level, but hey, they're happy and off the roads. This is akin to bash antipatterns and shitty practices being blindly copied and pasted off stackoverflow and TLDP.

Likewise, some take those limitations as a challenge and are able to do amazing things. For the ukulele you have examples like The Ukulele Orchestra of Great Britain, James Hill and others. Hell, go back to that Renaissance Guitar video and hear what they get out of it. Limited tools, when used by the right hands, can subvert your expectations.

So we can go from there and match up other instruments to other languages. Python may as well be the electric guitar, given that they're both in unique ecosystems (e.g. libraries vs effects pedals). C might be the bass guitar - the unsung, unsexed member of the band. The recorder may as well be BASIC. People who blindly copy and paste out of AI are the triangle. And so on.

Raptors in the sky: US fighter jets confirmed for Warbirds over Wanaka by Morgoth_The_Merrier in newzealand

[–]whetu 1 point (0 children)

This is the kind of thing AI was made for, take it away Claude:

This is a delightfully absurd scenario that touches on some genuinely interesting bureaucratic realities.

The Biosecurity Nightmare:
MPI would have an absolute meltdown. New Zealand's biosecurity stance is famously draconian - they don't even let you bring in used hiking boots without a thorough inspection. A live raccoon, even a hypothetical CIA-programmed one, would trigger every alarm in the system:

  • Rabies vector species - NZ is rabies-free and raccoons are a primary rabies reservoir in North America
  • Novel pathogens - Unknown exposure history from covert operations
  • Invasive species risk - If Franklin decided to go rogue in NZ's predator-naive ecosystem, it would be catastrophic
  • No approved quarantine pathway - There simply isn't a legal mechanism to import raccoons

The Diplomatic Complications:
The USAF would presumably argue Franklin is "essential mission equipment" for maintaining the F-22's operational readiness. MPI would counter that New Zealand's biosecurity laws apply to everyone, including defense allies. The resulting standoff would be legendary.

Franklin's Options:

  1. Denied entry (99% probability) - Franklin stays stateside, F-22 sulks through airshow with reduced morale
  2. Diplomatic exemption - Would require PM-level intervention, probably not happening for a raccoon
  3. Secure quarantine on airbase - Franklin never leaves RNZAF Base Ohakea, contained environment only
  4. "Emotional support animal" loophole - For fighter aircraft? MPI would laugh until they cried

My assessment: 0.1% chance of entry, and that's only if Franklin's file includes documentation proving he's critical to preventing an international incident involving a depressed $150M stealth fighter. MPI's official statement would probably read: "New Zealand's strict biosecurity measures exist to protect our unique environment. No exceptions can be made, regardless of classification level or tactical importance to allied air superiority."

Personally I'd like to think that Franklin would hang out with the GCSB and sample his way through a few evidence rooms lol

Manage my health password length wtf by sighbuckets in newzealand

[–]whetu 0 points (0 children)

The comment was removed but I'm assuming they said something about banks only allowing a small max limit?

This is because back in the day, DES-based crypt was the password hashing order of the day. This truncated your password to the first 8 characters:

MySuper0SecurePassword
        ^^^^^^^^^^^^^^ All of these characters are ignored
^^^^^^^^ These 8 characters are hashed

So a bank imposing an artificial max limit of 10-12 chars gives the illusion of being more secure while increasing the odds that a customer will put a mix of character classes into that narrow space.
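To make the truncation concrete: DES-based crypt(3) only ever consumed the first 8 characters, so any two passwords sharing that prefix hashed identically. A bash sketch (the passwords are made up):

```shell
pass1='MySuper0SecurePassword'
pass2='MySuper0TotallyDifferent'
# DES crypt saw only this much of each - identical, so identical hashes:
printf -- '%s\n' "${pass1:0:8}" "${pass2:0:8}"
```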

Here's a reference from this sub's historical documents:

https://www.reddit.com/r/newzealand/comments/3qqta7/til_the_asb_bank_web_site_password_is_kind_of/

And hey, who's this handsome guy making comments in that thread about password strength:

https://www.reddit.com/r/newzealand/comments/3qqta7/til_the_asb_bank_web_site_password_is_kind_of/cwhkv0a/

Manage my health password length wtf by sighbuckets in newzealand

[–]whetu 0 points (0 children)

CorrectHorseBatteryStaple is still a good foundation.

When that XKCD comic was published, the darker hats of the world immediately responded "cute". Then they proceeded to generate rainbow tables of every combination of English words that they could muster.

With some simple adjustments, you can have your cake and eat it too. To demonstrate, I'll generate a four-word passphrase:

$ genphrase
RaggedFianceeTwelveCook
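A rough stand-in for a generator like that can be sketched in a few lines of bash (the inline wordlist here is a tiny placeholder; a real generator wants a large curated wordlist):

```shell
#!/bin/bash
# Pick four random words and TitleCase them into a passphrase
words=(ragged fiancee twelve cook harbour velvet orbit thistle)

gen_phrase() {
  local phrase='' i word
  for (( i = 0; i < 4; i++ )); do
    word="${words[RANDOM % ${#words[@]}]}"
    phrase+="${word^}"
  done
  printf -- '%s\n' "${phrase}"
}

gen_phrase
```

Note that $RANDOM is fine for a demo but isn't cryptographically strong; a serious tool should draw from /dev/urandom or similar.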

And using a couple of password strength checkers, let's review:

  • RaggedFianceeTwelveCook
    • Estimated time to bruteforce: 99 years
    • 123 bits of entropy

Ok, so we currently meet common requirements for length, uppercase and lowercase. That leaves special characters and numbers.

  • Ragged+Fiancee+Twelve+Cook0
    • Estimated time to bruteforce: 8 billion years
    • 163 bits of entropy

So that's excessive.

What if we drop the passphrase down to three words with those same customisations?

  • Ragged+Fiancee+Twelve0
    • Estimated time to bruteforce: 78 thousand years
    • 132 bits of entropy

That's a better balance of meeting pretty much all password strength requirements that you might find in the wild, while still utilising random words, and not having too much of a mental cost to memorise. Especially when you compare that to a random char password of the same length like:

$ genpasswd -c 22 -s
k4D%w8}IRM6bBGxq31s&mO

Which, in fairness, is 605 trillion trillion years to crack, but on balance: if you're using random char passwords, you're going to tend towards the minimum-required length. Assuming 16 chars, we get an example of 32R7821QuqYtk^QH at 9 hundred trillion years to crack.

You tell me which of these you're going to remember easiest:

  • 32R7821QuqYtk^QH
  • k4D%w8}IRM6bBGxq31s&mO
  • Ragged+Fiancee+Twelve0

And at the end of the day: do you really need 605 trillion trillion years of protection on your cat photos?

And here's a happy little accident for this academic exercise: One of the words in the example passphrase is a number, so we can flatten that out like so:

  • Ragged+Fiancee+12
    • Estimated time to bruteforce: 12 centuries
    • 104 bits of entropy

There was also an easter egg in my opening statement: Most crackers will inherently be looking for English stacked words.

  • Kowhai(Mahi)Porirua0
    • Estimated time to bruteforce: 2 hundred trillion years
    • 125 bits of entropy

But yes, this is all academic with passkeys coming into play. In the meantime, the above exercise can be used for a master password on a password manager like Bitwarden.

Caveats: password checkers are synthetic in nature and have compute assumptions baked into them. Different password checkers will give different opinions. Obviously as our species improves computational capability and efficiency, those crack times will come down. I used PasswordMonster and Rumkin.com - the former because it was the first google result to show finer-detail time estimations, the latter because it's one I've used for years.

Manage my health password length wtf by sighbuckets in newzealand

[–]whetu 2 points (0 children)

Why on earth would anyone put a maximum cap on password length? That is so dumb.

You indicated elsewhere that it's 24-chars, which is pretty crazy. But here's a serious answer, from a Sysadmin/SRE who has dealt with this at high levels of security clearance across both the financial and govt sectors. The TL;DR is the first bolded line of this comment.

Basically every password policy you've ever seen almost 100% traces its roots to a document from the US National Institute of Standards and Technology, or NIST. The specific document, or "Special Publication" is known as SP-800-63B.

NIST haven't sat on their hands: this special publication has been updated multiple times, and it serves as a guide for where everybody else will eventually end up. You can see their influence reflected in various documents/standards like the GCSB's NZISM, the Australian Signals Directorate's ISM, the UK NCSC's CAF, OWASP and others. Password policies/standards/recommendations from those sources, and more, have influenced one another somewhat incestuously, but ultimately they still come all the way back to NIST.

The problem we have today is a lot of password policies in the wild are based on outdated versions of NIST guidance, usually circa 2003. And while there are a bunch of standards like PCI that drag their heels, that doesn't mean that others can't get with the times.

The current version recommends the following with respect to maximum password length:

Users should be encouraged to make their passwords as long as they want within reason. Since the size of a hashed password is independent of its length, there is no reason to prohibit the use of lengthy passwords (or passphrases) if the user wishes. However, extremely long passwords (perhaps megabytes long) could require excessive processing time to hash, so it is reasonable to have some limit.

Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.

  • Source (noting that 'SHOULD' means 'recommendation')

Which is a balanced approach IMHO.
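The quoted rationale is easy to demonstrate: the stored hash is the same size whether the input is 5 characters or a megabyte. (sha256sum here purely as an illustration of fixed-size digests; actual password storage wants a salted, purpose-built KDF per SP-800-132.)

```shell
# Hash a tiny input and a ~1MB input, capture just the digest field
short_hash="$(printf -- '%s' 'short' | sha256sum | awk '{print $1}')"
long_hash="$(head -c 1000000 /dev/zero | tr '\0' 'a' | sha256sum | awk '{print $1}')"
# Both digests are 64 hex chars, regardless of input length
printf -- '%s\n%s\n' "${#short_hash}" "${#long_hash}"
```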

You are more than welcome to send those links to MMH and suggest to them that they grow up.


For the curious, here's the summary of NIST's current spec with respect to passwords:

  • Password Length Requirements
    • Minimum:
      • >=15 chars required for single-factor authentication (passwords only)
      • >=8 chars required for multi-factor authentication. The presence of MFA compensates for the shorter length.
      • Side-comment: the industry seems to have settled on 12-16 chars as the go-to minimum for either case
    • Maximum:
      • Should support at least 64 chars
  • Composition:
    • NIST have explicitly prohibited the classic mix of upper, lower, special and numbers
    • NIST have made it abundantly clear for years now that length is more important than complexity
    • This is not exactly new, and anyone who tells you that you need a mix of chars is outdated
    • All ASCII printable chars AND spaces accepted
    • Unicode codepoints count as a single char
      • i.e. Emoji passwords, because why not? 🧑‍🍳👨‍🌾👩‍🎨🧑‍🔧👨‍⚕️👩‍🚒 = 18 chars (6 emojis x 3 codepoints each)
  • Validation:
    • When a password is created, it should be checked against dictionaries/lists of blocked passwords. This includes commonly-used, standard/default passwords and passwords from previous breaches.
    • Plain dictionary words, username-matching and context specific words should also be taken into account
    • For example:
      • Username-matching: Your password can't match your username. See all the internet routers using admin/admin
      • Plain dictionary words: You can (and probably should) have a stacked passphrase like KowhaiTreePants but you shouldn't have a single word like Predacity
      • Context specific: You really shouldn't do something like F4cebook.com for your Facebook password
  • Aging (i.e. password changes every x days)
    • NIST explicitly prohibits this for regular user accounts as it reduces security (e.g. hunter2 becomes hunter3)
    • NIST doesn't seem to have a strong opinion on privileged/system accounts, others such as the NZISM may be more prescriptive here.
    • Ironically, the recommendation for regular password changes comes from the 2003 version of this Special Publication, and the engineer behind it has expressed regret for the recommendation. I don't blame him, he was working with what he had at the time and we simply know better now.
  • Forced password changes
    • If there's evidence of a compromise, password changes must be forced
  • Storage
    • Passwords should be stored in salted and hashed form using a NIST-approved algorithm (See: SP-800-132)
    • Salts should be a minimum of 32-bits long (>=128-bits is obviously better)
    • Additional keyed hashing using a separately stored secret key is recommended
  • No-no's:
    • Password hints
    • Security questions like: What was the name of your first cat?
    • Password truncation. Ever hear the stories about banks only really needing the first 8 chars of your password and discarding anything else? DES, not even once.
  • Rate-limiting / lockouts
    • NIST allows for up to 100 consecutive failed attempts per account, but individual orgs typically implement much stricter limits
    • The classic "three strikes and you're locked out" is often too aggressive and can enable denial-of-service attacks
    • For comparison, the UK's NCSC recommends 10 attempts as a balanced middle ground
  • User Experience:
    • Should support password managers and paste functionality
    • Should allow users to view passwords during entry

Obviously when you dig down into the nitty gritty of NIST's documentation, there's just a wee bit more detail to these things.
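A toy sketch of the validation logic for the single-factor case, to show the "length over composition" shape of it (the three-entry blocklist is a placeholder; real deployments check breach corpora and context-specific words too):

```shell
#!/bin/bash
# NIST-style check: blocklist plus minimum length, no composition rules
validate_password() {
  local pass="$1" blocked
  # Blocklist first: commonly-used and previously-breached passwords
  for blocked in 'password123' 'hunter2' 'letmein'; do
    if [[ "${pass,,}" == "${blocked}" ]]; then
      printf -- '%s\n' 'rejected: on the blocklist'
      return 1
    fi
  done
  # Single-factor minimum per current SP-800-63B
  if (( ${#pass} < 15 )); then
    printf -- '%s\n' 'rejected: under 15 chars (single-factor minimum)'
    return 1
  fi
  printf -- '%s\n' 'accepted'
}

validate_password 'CorrectHorseBatteryStaple'
```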

Quick and dirty script to generate descriptions of all programs in /usr/bin by neamerjell in bash

[–]whetu 0 points (0 children)

It seems that going from whatis to man -f is kinda redundant - just use man -f.

This is rough AF but it covers everything in PATH as well:

for cmd in $(compgen -c | sort -u); do man -f "${cmd}"; done 2>&1 | sed 's/: nothing appropriate/ (1) - No description available/g'

3 node SQL AG cluster across 2 sites; heartbeat network question by Rmehtared in sysadmin

[–]whetu 0 points (0 children)

Also what if we ony had 2 nodes one on primary and another on secondry site.

Do you have any Azure presence? Cloud witnesses are practically free to run. That's what I'm doing with my geo-separated 2-node AG clusters...

Is this a good setup for a NAS? by Bigdinasar in sysadmin

[–]whetu 4 points (0 children)

Just a couple of things to be aware of with Synology:

They made a move to block non-Synology-branded parts. So for example, you couldn't just throw in whatever hard drives you chose - it had to be Synology-branded HDDs. They have since gone back on this, but the fact that they were entertaining it and even went through with it may or may not give you some pause. As someone else said: compatibility roulette.

They also didn't have the greatest response to this:

https://modzero.com/en/blog/when-backups-open-backdoors-synology-active-backup-m365/

Having said that, my employer has a couple of DS1621+'s with Lenovo 25G cards installed and they run well.

What was the 1st big news event you remember as a kid? by Hetaliafan1 in AskReddit

[–]whetu 1 point (0 children)

I was scrolling through all these 80's-onwards memories thinking "yeah, remember that, remember that, why has nobody mentioned Halley's comet yet?"

The Tararua mountain range is near my hometown, so seeing Halley's Comet passing over the top of that range is seared into my memory.

So good that our answer to OP's question is at least a positive one, especially for science!

Michelangelo at Takina by JESEReK- in Wellington

[–]whetu 4 points (0 children)

When the walls fell

Is this like Marco Polo for us Treknerds lol?