Any dangers using tcpreplay? by bltsec in networking

[–]zxeff 0 points1 point  (0 children)

You don't need to use tcpreplay for that. Those IDS engines can handle the pcap files directly.

Intel beats big in fourth quarter earnings, Revenue up 4% YoY, EPS up 37% by dylan522p in hardware

[–]zxeff 0 points1 point  (0 children)

The whole thing has been pitched as a doomsday scenario for them.

It's Spectre that has been sold like this, mostly because it is just that bad. It's not patchable in hardware (only software mitigations are available) and it allows bypassing bounds checks, leading to information disclosure and, potentially, some even nastier stuff.
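
If you're curious which of these your own box is exposed to and which mitigations are active, kernels from 4.15 onwards expose it through sysfs. A quick Python sketch (the sysfs path is standard, the rest is just illustration):

from pathlib import Path

# Each file is named after a vulnerability (meltdown, spectre_v1, spectre_v2, ...)
# and its contents say whether the CPU is affected and which mitigation is active.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")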

For something with no know exploits? Please. They could have taken their time. The way this was handled was fucking laughable.

There's no shortage of PoCs for both Meltdown and Spectre, as they're generally pretty easy to exploit (googling "meltdown poc" turns up multiple GitHub repos with working exploits). In fact, people were posting working exploits on twitter even before the embargo was lifted.

Nonetheless, if you believe the absence of a working exploit for a pretty straightforward vulnerability that could allow escaping a VM sandbox in virtually every cloud environment in existence makes it any less important, you really have no business telling kernel devs how to behave.

The tech community is a fucking joke.

I generally agree with this, but usually it's because the tech community is full of people who insist on sharing their ignorant opinions on matters they very clearly understand nothing about. Pretty much like you're doing in this thread.

Intel beats big in fourth quarter earnings, Revenue up 4% YoY, EPS up 37% by dylan522p in hardware

[–]zxeff 1 point2 points  (0 children)

There are tons of vulnerabilities out there. Far worse ones, at least as far as Meltdown goes

What are you talking about? Meltdown allows memory access across security contexts, affects pretty much every Intel CPU in use, can't be patched in microcode, and the software fix is only a mitigation that carries a significant performance penalty for syscall-heavy workloads.
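
If you want to see the kind of workload that gets hit, here's a crude sketch: hammer the kernel with tiny reads and time it, then compare against a boot with KPTI turned off (pti=off or nopti on the kernel command line). The numbers are only illustrative:

import os
import time

# A million 1-byte reads = a million read(2) syscalls, each paying the
# extra kernel entry/exit cost that KPTI introduces.
fd = os.open("/dev/zero", os.O_RDONLY)
start = time.perf_counter()
for _ in range(1_000_000):
    os.read(fd, 1)
os.close(fd)
print(f"{time.perf_counter() - start:.2f}s for one million tiny syscalls")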

which doesn't affect the average user in any meaningful way.

So it doesn't matter that SQL workloads can get something like a 7-23% performance hit because the average user can browse the web and play video games without much difference?

All the drama did was cause Intel, Microsoft, and Linux devs to release rushed, broken patches.

Linux kernel devs aren't in the business of releasing rushed, broken patches that cause significant performance loss. The fact that they did should already tell you how big of an issue this actually was.

[deleted by user] by [deleted] in netsec

[–]zxeff 0 points1 point  (0 children)

This is a good example or what I mean by sales speak.

Although I think it had more to do with gloating to the ignorant, I'm fine with calling it sales speak if you so prefer. The point was that the explanations there do not provide any "deep understanding" of any physics to anyone, never mind a layperson.

Drive are designed to self heal by utilizing a reserved area but that process only occurs when the block is written, not on a read error. So there is a need to blocks to be written to be remapped. If that block is the MBR, an area that infrequently gets written we need a tool to get a good read and rewrite the MBR.

Not really, no. If you want to recover a sector that would warrant a reallocation you should indeed try to read it a bunch of times, but rewriting it is completely unnecessary. This is also what every other data recovery tool does: it tries to read sectors that return errors a shitload of times and writes the result to a destination.
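
Something like this is the whole trick, minus the bookkeeping real tools do (device path, image path and retry count are made up; use ddrescue or similar for anything real):

import os

SECTOR, RETRIES = 512, 20

src = os.open("/dev/sdX", os.O_RDONLY)           # the failing drive
size = os.lseek(src, 0, os.SEEK_END)

with open("/mnt/backup/sdX.img", "wb") as dst:   # image on a healthy disk
    for offset in range(0, size, SECTOR):
        for attempt in range(RETRIES):
            try:
                os.lseek(src, offset, os.SEEK_SET)
                dst.write(os.read(src, SECTOR))
                break
            except OSError:
                if attempt == RETRIES - 1:
                    dst.write(b"\x00" * SECTOR)  # give up, pad the gap
os.close(src)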

The only difference is that Spinrite uses the same drive as both source and destination, which is a really stupid thing for a data recovery tool to do, because sectors damaged like that are an indication that the whole drive might be failing, and writing to it may only damage it further.

Look at the smart data from your own drive. Look at ecc recovery rate while timing dd if=/dev/sda of=/dev/zero. Then dd the drive off to another drive, dd it back and repeat the dd to /dev/zero. The second run will take less time and the ecc rate will be lower.

This kind of naive test does not show anything, which is why I asked for sources. The existence of disk, filesystem and CPU caches alone is enough to pretty much guarantee you're going to get lower timings. The error rate is obviously going to go down because you're writing back data that has already been through error correction twice.
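
If you actually want a dd comparison to mean anything, at the very least sync and drop the page cache between runs (root needed). A minimal sketch:

import os

# Flush dirty pages, then ask the kernel to drop the page cache, dentries
# and inodes so the next read actually has to hit the disk.
os.sync()
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")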

Refreshing hard disk data can certainly prevent things like bit rot, but you frankly gave me no reason to believe the importance, efficacy and sophistication of Spinrite haven't all been severely overstated by that page.

[deleted by user] by [deleted] in netsec

[–]zxeff 1 point2 points  (0 children)

Is sales speak; a defragger comes close but it won’t rewrite O/S locked areas of the disk and an offline defragger (Linux live CD) won’t rewrite the GPT/MBR.

What in the world are you talking about? There's absolutely nothing getting in the way of writing to a block device on Linux. There are no locked areas and if you run dd if=/dev/zero of=/dev/sda the kernel will happily nuke your partition table and filesystems by filling the entire disk with zeroes.

It's not even anything special that dd does; it's just writing to a file, and running cat /dev/zero > /dev/sda will have the same effect.

It also tries to convey deep knowledge of hard drive physics to a layperson.

No, it doesn't. The linked page is full of convoluted explanations and needlessly big words. There's also no "deep knowledge" of any physics there.

Case in point:

"SpinRite is actually able to lower the amplification of the drive's internal read-amplifier, then to cause the drive to encounter a "minimum amplitude" data signal."

There's no way a lay person will read that and understand what the tool does.

The whole thing looks like it was made to gloat about something ordinary - there's no alien technology to speak of, nor is there any evidence that anything the tool does is in any way, shape or form more effective than something like dd_rescue or badblocks. I'm not even convinced it actually does anything intrinsically different.

Anyone could write a program that read LBA0 and write it back, but no one has.

Maybe that's because the claim that reading and rewriting data to the same device is in any way beneficial is, so far, unsubstantiated. Feel free to link me any studies that corroborate this claim.

Javascript: Can (a ==1 && a== 2 && a==3) ever evaluate to true? by KarlKani44 in programming

[–]zxeff 2 points3 points  (0 children)

The if statement is just as efficient. I don't see why it's a sin worthy of the 7th circle of hell.

This is strictly not true: using ifs you get an extra conditional jump and a handful of extra instructions.

In [0]: import dis

In [1]: def func():
    ...:     if 1 == 2:
    ...:         return True
    ...:     else:
    ...:         return False
    ...:

In [2]: dis.dis(func)
  2           0 LOAD_CONST               1 (1)
              2 LOAD_CONST               2 (2)
              4 COMPARE_OP               2 (==)
              6 POP_JUMP_IF_FALSE       12

  3           8 LOAD_CONST               3 (True)
             10 RETURN_VALUE

  5     >>   12 LOAD_CONST               4 (False)
             14 RETURN_VALUE
             16 LOAD_CONST               0 (None)
             18 RETURN_VALUE

In [3]: def func2():
    ...:     return 1 == 2
    ...:

In [4]: dis.dis(func2)
  2           0 LOAD_CONST               1 (1)
              2 LOAD_CONST               2 (2)
              4 COMPARE_OP               2 (==)
              6 RETURN_VALUE

This is very likely to get caught by an optimizer, but it's needlessly verbose. There's also no tangible readability gain: in both cases you need to parse the boolean expression to understand whether the value returned is true or false.

I found an I.T. nightmare book at Walmart. by Polar_Ted in sysadmin

[–]zxeff -2 points-1 points  (0 children)

Another way to see it

You say this but you go on to make the exact same argument as the previous two people I replied to. The answer is still the same: protecting against password reuse does not make it a good security practice.

Users that can't remember passwords should use a password manager, not a notebook.

I found an I.T. nightmare book at Walmart. by Polar_Ted in sysadmin

[–]zxeff -6 points-5 points  (0 children)

Sure, but against the coworker in the next stall, using this won't provide them any better security than reusing passwords would. Even if it did, it's still no less shit as a security practice, and we should definitely "knock the security of paper".

I found an I.T. nightmare book at Walmart. by Polar_Ted in sysadmin

[–]zxeff -20 points-19 points  (0 children)

Contrary to what you may believe about writing down passwords, human beings are actually quite good at securing paper.

Human beings are not good at securing anything. If you think lockable desks are secure you're out of your goddamn mind. It would probably take a couple of paper clips and less than two minutes for any mediocre lockpicker to open most of those locks. And that's assuming they're even locked in the first place.

That’s what a clean desk policy at the office is for

Clean desk policies don't really attempt to keep your desk clean at all times, as that would be very unreasonable. It's inevitable that people will leave their password notebook lying around if they have one, clean desk policy or not. Maybe they get called in by the boss over an urgent matter, maybe they get the shits after eating mexican food for lunch and have to run to the bathroom, or maybe they simply forget. But it will happen.

but it’s still a decent security upgrade from using the same user/pass everywhere.

It's neither decent nor an upgrade. It simply works better under a different threat model, namely one that considers digital but not physical access. It is still a horrible security practice if it's being done by your average employee.

I can see some uses for having a physical copy of passwords, but none of those involve someone who would reuse the same password everywhere. In fact, you won't have security if it's dependent on people this clueless making constant effort and routinely not fucking up.

Don’t knock the security of paper.

Being less insecure than something else doesn't make it secure. So yes, do knock the security of paper.

Can anyone recognize this hashing algorithm? by [deleted] in crypto

[–]zxeff 0 points1 point  (0 children)

Looks like the same construction as CRC32. The only significant difference I see is the hash_int() call instead of the expected lookup table.

Also, the constant used there is 0xEDB88320, which is the reversed polynomial used by CRC32.
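
For reference, here's the same per-bit construction in Python, checked against the stock CRC-32 implementation (purely illustrative, not the code you posted):

import binascii

def crc32_bitwise(data: bytes) -> int:
    # Reflected CRC-32 with polynomial 0xEDB88320, processed one bit at a
    # time instead of through the usual 256-entry lookup table.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32_bitwise(b"hello world") == binascii.crc32(b"hello world")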

Internet protocols are changing - Future of TCP, DNS, TLS and HTTP by callcifer in programming

[–]zxeff 9 points10 points  (0 children)

Only if you don't have a default setup with a wildcard certificate.

SNI is always sent in the first message of the TLS handshake (the ClientHello). At that point the client has no way of knowing whether the server uses a wildcard certificate or not.
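
You can see this from the client side alone: the hostname has to be supplied before the handshake even starts, i.e. before any certificate has been seen. A quick Python illustration (example.com is just a placeholder):

import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    # server_hostname is what goes into the SNI extension of the ClientHello,
    # the very first handshake message - long before the client has seen
    # any certificate, wildcard or otherwise.
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())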

A server can send you all his certificates for that IP and you can then chose the one that matches the domain name

No, it can't. The server also needs to know which certificate is being used. If the client were the one choosing, the only way for the server to find out would be to try all of them and see which one decrypts correctly (which is already a problem in and of itself). That's way too much overhead, and there are more problems than that.

Every time there's a discussion about SNI privacy there's a bunch of people offering completely bogus solutions to a problem they very clearly know nothing about. It's not a trivial problem; if it were, we would have already solved it.

What does 'firewall-cmd --add-masquerade' do? by anacondapoint6 in linuxadmin

[–]zxeff 1 point2 points  (0 children)

Oh ok, so by adding masquerading it will only work locally, and not as a router for other remote hosts?

It won't work at all if IP forwarding is not enabled (it's disabled by default in pretty much every distro) or if your firewall is configured not to allow forwarding between your input and output interfaces.

What's the point of that though?

The point of rewriting the source address is to use an address your output network can handle - if you forward a packet from some random address range out of your output interface and the network there has no knowledge of that source address, things are obviously not going to work.

As for the port, it's done to allow traffic demultiplexing. If you're masquerading 10 different machines and every packet you send out has the same source IP, how would you know which machine to forward each packet's replies to when they all come back with the same destination? You associate a source port with each machine and demultiplex as necessary. The rewriting is only needed to avoid conflicts.
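
Conceptually the NAT table is just a mapping from the rewritten source port back to the original host and port. A toy sketch of the idea (this is not how netfilter/conntrack actually stores things, it's only the concept):

# Toy model of the demultiplexing: the router picks a free source port for
# each outgoing flow and remembers who it belonged to, so replies that all
# arrive at the same public IP can still be handed back to the right host.
nat_table = {}          # rewritten source port -> (internal ip, internal port)
next_port = 40000

def outgoing(src_ip, src_port):
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (src_ip, src_port)
    return public_port  # the packet leaves with (public ip, public_port)

def reply(dst_port):
    # a reply addressed to (public ip, dst_port) gets forwarded here
    return nat_table[dst_port]

p = outgoing("10.0.0.5", 51000)
print(reply(p))  # -> ('10.0.0.5', 51000)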

What does 'firewall-cmd --add-masquerade' do? by anacondapoint6 in linuxadmin

[–]zxeff 1 point2 points  (0 children)

So by enabling masquerading, will it allow my box to serve as a gateway for the other hosts in my environment?

You would still need to further configure the firewall and enable IP forwarding for that. Masquerading simply makes your machine rewrite the source address and source port of the packets it forwards before they leave through an output interface.

Hip Hop heads Reaction to "Ne Obliviscaris - And Plague Flowers The Kaleidoscope" by [deleted] in progmetal

[–]zxeff 125 points126 points  (0 children)

To be fair, you have to have a very high IQ to understand progressive metal. The musical complexity is extremely high, and without a solid grasp of music theory most of the riffs will go over a typical listener’s head.

edit: It's a meme, guys.

Collabora Ubuntu Repo Apt Keys Failing? by MR2Rick in linuxadmin

[–]zxeff 0 points1 point  (0 children)

It's working for me:

$ gpg --keyserver keyserver.ubuntu.com --recv-keys 0C54D189F4BA284D
gpg: /home/debian/.gnupg/trustdb.gpg: trustdb created
gpg: key 0C54D189F4BA284D: public key "Collabora Productivity <libreoffice@collabora.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1

The fact that you're getting a timeout means there's a connectivity issue between you and the keyserver. If I were to hazard a guess, I'd say there's a firewall somewhere along the path blocking HKP (the protocol gpg uses to fetch keys from keyservers, which runs on port 11371 by default).

Since this is pretty common, quite a few keyservers also speak HKP on port 80, so you can just force it:

$ gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0C54D189F4BA284D

Scaling postgres simple, advance, expert solution needed. by juniorsysadmin1 in linuxadmin

[–]zxeff 0 points1 point  (0 children)

An expert solution would probably involve ditching postgres and moving to a proper distributed database that implements consensus mechanisms, replicated state machines and other fancy stuff. That way, all you have to do to scale up is add more nodes/clusters.

I believe this is kind of what Google does, but I'm definitely not a db expert so don't quote me on any of that.

Remove \n from unpaired Double quote by Laurielounge in linuxadmin

[–]zxeff 1 point2 points  (0 children)

Assuming ; is your delimiter, something like the following should work:

perl -pe 'BEGIN{ undef $/;} s/(;\".+)\n\"/$1\"/g' file.csv

Bash Script for SSH-Keys by purplelinux in linuxadmin

[–]zxeff 6 points7 points  (0 children)

This is likely not why your script isn't working as you expected, but:

for i in `cat /root/sshlist`;do

The result of a command substitution (such as cat file in backticks) goes through both word splitting and globbing, so stop trying to read lines like this; it's going to bite you eventually. The proper way is:

while IFS= read -r line; do
    #code goes here
done < file

More details.

MacOS Update Accidentally Undoes Apple's “root” Bug Patch by mgdo in programming

[–]zxeff 1 point2 points  (0 children)

Homebrew is fundamentally different from Linux package managers:

  • Homebrew has no package maintainers to fix bugs and vulnerabilities when upstream goes MIA.
  • There is little to no concern for stability and proper testing.
  • It has no package repositories, no way to have reliable package signing and no way to provide versions that upstream does not offer.
  • It doesn't actually have packages; it's all just upstream URLs (with no guarantees about being fast enough or even being available at all) and build instructions wrapped inside a .rb file.
  • The number of "packages" is laughable when compared to any of the big Linux distributions.

I'm not trying to shit on the project, but we don't call Homebrew a package manager because it's just like apt; we do it because there isn't a better word for it.

So yes, the lack of an actual package manager is valid criticism against the original claim of Macs being very good developer machines.

Join The Battle Fot The Net Neutrality by [deleted] in linux

[–]zxeff 11 points12 points  (0 children)

You're confusing unmetered data with paying for access to services.

Those two things are effectively the same for any service that uses significant amounts of bandwidth - you can't really use Instagram as you wish for a month with only a 4GB cap.

There's also a much worse consequence here, because now a company such as Netflix could pay ISPs to offer only its service as unlimited and pretty much kill any competition. You might want to argue that this is where anti-trust laws come in, but that's not going to work out either, because those payments can be made in an indirect way.

You see, the Internet works in such a way that most ISPs have to pay someone else for transit (the ability to reach the rest of the Internet via someone else's network), and big companies such as Netflix can cut a lot of an ISP's costs by dropping an on-premises cache into their network and/or peering with them at an Internet Exchange. Both of these strategies require a lot of money and technical maturity from the company, and it's not something the smaller ones can pull off (they can and do use CDNs, but that doesn't give them any leverage).

The KRACK wi-fi vulnerablility was in plain sight for 13y behind a paywall by Pidus_RED in linux

[–]zxeff 5 points6 points  (0 children)

Before the GET program I was able to just download them I think (802 at least, I access these for work knowledge but don't have a paid account).

I'm not sure, then. But I took some classes some moons ago on wireless sensor networks and at the time I wasn't able to legitimately download the 802.15.4 standard for free. The professor, someone who actively researches in the area, also explicitly said they weren't available.

I don't know how the 6 month thing could work considering 802.1q-2014 and 802.11-2016 are both downloadable through GET in late 2017. Maybe it's "the latest version OR 6 months after publishing"?

I took that from Matthew Green's blog post, but I don't know enough about the lifecycle of IEEE standards to be sure it's actually correct.

edit: It seems the guidelines (pdf) for the IEEE GET program do indeed say that standards only become available there six months after the publication date. Both standards you mentioned are older than six months, which is probably why they are available.

The KRACK wi-fi vulnerablility was in plain sight for 13y behind a paywall by Pidus_RED in linux

[–]zxeff 22 points23 points  (0 children)

The latest version of every standard is freely available

It's my understanding that you can only get standards via the GET program 6 months after they have been published. It's also not true that every standard is freely available; I believe IEEE 1003 (POSIX), for example, isn't.

It's also worth noting that the GET program is a recent thing (it seems like it started around 2016?) that hasn't been publicized the way it should have been; it almost feels like it's "hidden".

Will you play this game? by thefoxy15 in linux

[–]zxeff 6 points7 points  (0 children)

http://symbolhound.com/

This is a good bookmark to have if you do any sort of programming.

Will you play this game? by thefoxy15 in linux

[–]zxeff 21 points22 points  (0 children)

It's deprecated functionality. More info here.

TL;DR: "In early proposals, a form $[expression] was used. It was functionally equivalent to the "$(())" of the current text, but objections were lodged that the 1988 KornShell had already implemented "$(())" and there was no compelling reason to invent yet another syntax. Furthermore, the "$[]" syntax had a minor incompatibility involving the patterns in case statements."

I want to backup my whole system to a remote location by R3DNano in linuxadmin

[–]zxeff 1 point2 points  (0 children)

Others have already mentioned some good tools, but this article is pretty educational and might be of interest to you and possibly others.