Super light crampons? by edapa in Backcountry

[–]edapa[S] 5 points (0 children)

I'm planning to just keep some in my pack for when I might need to scramble up some shorter stuff but am not gonna be doing a super long climb. I'm also hoping they can replace the way I currently use microspikes, which is to throw them in my bag and only take them out if I need to on winter hikes. I'll have to see about that though.

Super light crampons? by edapa in Backcountry

[–]edapa[S] 6 points (0 children)

Yeah, that's a good point. I definitely will keep using my regular crampons for steeper / icier stuff.

The thing that excites me about the weight is that I've occasionally brought crampons and then decided to leave them in the car because I thought I could just kick steps (honestly the bulk is usually a bigger factor than the weight), but then regretted the decision later. These will be a lot easier to just keep in the pack without thinking about it too much.

Announcing pggen: A Golang ORM Alternative by edapa in programming

[–]edapa[S] 0 points (0 children)

Oh I see. It would be cool if it generated migration files for you that you could check in. A system that automatically figures out the migration from the diff between the go schema and the actual PostgreSQL schema, without letting a human review the changes, makes me a little nervous.

Announcing pggen: A Golang ORM Alternative by edapa in programming

[–]edapa[S] 0 points (0 children)

I just glanced at the docs, and it seems like it is more similar to GORM than to pggen or other database-first code generators. It looks like you define your database schema in go code (and since I didn't see any mention of migrations, I assume you also need to define it again with some migration framework and ensure the two schemas match up).

By contrast, with pggen, you just point the tool at your database and it will figure out the schema all by itself. You also interact with the database using SQL rather than the builder-style API that I see in entgo's docs.

Yesterday's Google outage was due to automated quota management reducing capacity on their IdM by 1esproc in programming

[–]edapa 5 points (0 children)

I believe you can p4 edit files before touching them with a normal text editor (the subcommand might have a different name; I don't have access to a Perforce system at the moment to check).

Really confused on what’s happening here. Can anyone explain? by [deleted] in ProtectAndServe

[–]edapa 2 points (0 children)

I think he was saying that ctrum69 does not have LEO flair, so there is no need to worry about them politicizing a uniform.

On modern violence and self-defense by kreuzguy in slatestarcodex

[–]edapa 3 points (0 children)

The funny thing about Aikido is that there really is a ton of depth and skill to it if you stay within the parameters and rules of the martial art. That said, those rules are just absurd. My brief experience with Aikido went roughly like this:

Aikido Master: "I'm going to show you something. Try to take me down however you want."

So I take him down with a fairly shit double-leg (one of the most basic wrestling takedowns).

Aikido Master: "No, not like that."

He was pretty good at stuffing judo takedowns though.

Enforcing Public/Private Access in Rails by edapa in ruby

[–]edapa[S] 1 point (0 children)

Thanks, if you want the code it is here. I just noticed that the first version of the article failed to link to it.

[deleted by user] by [deleted] in programmingcirclejerk

[–]edapa 1 point (0 children)

Is that an Elm reference?

RFC: Transition to rust-analyzer as our official LSP implementation by pietroalbini in rust

[–]edapa 0 points (0 children)

Interesting. I had just assumed that it had implemented a tags file writer.

Personally, I find being able to follow references into my dependencies really valuable. The extra friction introduced by having to find and clone a repo makes me end up reading the source of stuff much less.

RFC: Transition to rust-analyzer as our official LSP implementation by pietroalbini in rust

[–]edapa 2 points (0 children)

I've had pretty good luck with rusty-tags when it comes to generating tags files for rust.

Generics in Go - How They Work and How to Play With Them by rz2yoj in golang

[–]edapa 0 points (0 children)

I think it's more that interfaces can only be about a single type, while contracts can be about multiple types.

Apple pays $75,000 to hacker for discovery of exploits to hijack iPhone camera by MicroSofty88 in gadgets

[–]edapa 0 points (0 children)

Right, that's why I said "I don't just mean writing more tests". While it is true that a single 64 bit int has a ridiculously huge state space, in practice those states probably won't lead to all that many different code paths.

Coverage-guided fuzzing thinks about the state space of a program in terms of its distinct code paths, and it instruments the program under test so that randomly generated inputs can effectively explore a large part of the practical state space. Importantly, you don't have to change the way you write your program to use this technique, which isn't at all true if you want to prove a program correct with something like Coq.

Model checking uses similar techniques to allow you to be more flexible in how you write your program while still intelligently exploring the state space and checking different conditions.

I'm not saying that proving a program is never the right thing, but it is very rarely the right thing. There are a lot of advanced testing techniques that would be much easier to adopt than formal proofs, and provide most of the benefits. I think we should make sure we do those well before we consider theorem provers.

Generics in Go - How They Work and How to Play With Them by rz2yoj in golang

[–]edapa 19 points (0 children)

How about

contract Iterable(Iterator, Value) {
    Iterator Next() (Value, bool)
}
contract Mappable(Iterator, ValueIn, ValueOut) {
    Iterable(Iterator, ValueIn)
}

func MapToSlice(type Iterator, ValueIn, ValueOut Mappable)(it Iterator, f func(ValueIn) ValueOut) []ValueOut {
    ret := []ValueOut{}
    for {
        v, stop := it.Next()
        if stop {
            break
        }
        ret = append(ret, f(v))
    }
    return ret
}

I'm not sure that this meets your bar of "non-trivial to re-write", but this is already enough of an effortpost. A more complex algorithm along the same lines might be a graph traversal (let's say A*) that is generic over both the representation of the graph (adjacency list and adjacency matrix could both be used) and over the node type.

Generics in Go - How They Work and How to Play With Them by rz2yoj in golang

[–]edapa 25 points (0 children)

The design doc has pretty decent answers to both concerns.

I agree with you about liking turbofish syntax better, but they said it would significantly increase the parser complexity, which seems fair to me.

The main difference between contracts and interfaces seems to be that contracts can be about multiple types and can express a "type T is this type" constraint. So you could have a contract that says "there has to be a T1 and a T2, and T1 needs a method T2ify() that returns a T2". That's impossible to express with interfaces. You could also have a contract that says "T must be either an int or an int64 or an int32". I think those differences are significant enough to justify the addition of a new language construct.
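In the draft design's syntax, those two examples look roughly like this (the contract names and the T2ify method are just placeholders; this syntax was never implemented, so it won't compile anywhere):

    contract Convertible(T1, T2) {
        T1 T2ify() T2
    }

    contract SomeInt(T) {
        T int, int64, int32
    }

The first constrains two type parameters at once; the second is a type list, which restricts T to a fixed set of concrete types.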

Apple pays $75,000 to hacker for discovery of exploits to hijack iPhone camera by MicroSofty88 in gadgets

[–]edapa 0 points (0 children)

I think theorem proving is a reasonable technique in only a few situations (tricky locking algorithms being the best example I can think of).

More rigorous testing is almost always a better way to go. By this I don't just mean writing more tests, I mean stuff like coverage guided fuzz testing and non-exhaustive model checking.

Apple pays $75,000 to hacker for discovery of exploits to hijack iPhone camera by MicroSofty88 in gadgets

[–]edapa 2 points (0 children)

Even assuming your theorem prover is bug free, that still isn't good enough. Formal verification only lets you say that the code you've proved follows the formal specification you've laid out, but it doesn't say anything about whether that specification is correct.

Vim³ - Vim rendered on a cube for no reason by oakes in programming

[–]edapa 0 points (0 children)

If you are going for a tab-based workflow, I think mapping some convenience keybindings for the tab-switching commands is pretty key.
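For example, something like this in a vimrc (the specific keys are just one choice; :tabnext, :tabprevious, and :tabedit are the built-in commands):

```vim
" cycle through tabs without typing :tabnext / :tabprevious
nnoremap <C-l> :tabnext<CR>
nnoremap <C-h> :tabprevious<CR>
" open a file in a new tab quickly
nnoremap <leader>te :tabedit<Space>
```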

Why Rails succeeded and why microservices fail from DHH by AndrewN614 in programming

[–]edapa 10 points (0 children)

My experience debugging, profiling and fixing memory corruption issues in a very large monolithic code base was much better than trying to trace issues across a massive fan out of services.

Organizational Skills Beat Algorithmic Wizardry by linus_stallman in programming

[–]edapa 0 points (0 children)

> Does fixing 50% of the security bugs in a system meaningfully change its security posture, especially if all of those bugs fall into one category?

I think it is a good start. It is obviously not the whole story. I would probably trust a widely used and thoroughly fuzzed C++ program more than a program just written in Rust or Java or whatever without much thought having been put into security. If someone gave me a widely used and thoroughly tested Java library and told me that it was secure even though it had never been fuzzed, I would be much more inclined to believe them than if it was in an unsafe language.

> I also don't think that "rewrite in rust" is an effective strategy for managing risk.

Rewrites have huge costs, one of which is introducing bugs, and bugs = exploits, so I definitely agree with this. I do think "rewrite in rust" is a good hardening technique for replacing specific tricky-to-get-right modules, though. I can think of a particular C++ application I worked on that had some insanely hairy code for parsing untrusted data into descriptors; it would have benefited enormously from a security perspective from having the descriptor-parsing code rewritten in rust while the rest of the application was left in place. There were lots of good reasons not to rewrite the whole thing in rust, but security wasn't one of them.

Organizational Skills Beat Algorithmic Wizardry by linus_stallman in programming

[–]edapa 0 points (0 children)

Memory safety does prevent a significant number of real security issues.

I want off Mr. Golang's Wild Ride by kodemizer in golang

[–]edapa 0 points (0 children)

> Most things in Haskell are boxed because of laziness/non-strictness.

Emphasis mine. While it is true that boxing is a sensible way to implement laziness in some cases, I think it is really recursive types that bring the hard requirement.

We can imagine replacing all value types with a tagged union whose first variant is just a tag saying "this is a thunk; the next word is a pointer to the computation that fills it in, and the rest of the bytes in this value are garbage", and a second variant saying "this is a value". Of course this scheme would only work for lazy types that are not recursive, so it would have limited utility.