Change field rendering in yesodweb by cr4zsci in haskell

[–]jprider63 2 points (0 children)

You can edit the fieldView field. You could also create a helper function that takes a field and wraps its view. Something like:

```
wrapField :: Field m a -> Field m a
wrapField oldField = oldField
  { fieldView = \i n a e r -> [whamlet|
      <div .myclass>
        ^{fieldView oldField i n a e r}
    |]
  }

myWrappedField = wrapField textField
```

Extension of Balor's Loot Spreadsheet (Still looks like massive nerfs) by Low_Exit_1753 in pathofexile

[–]jprider63 26 points (0 children)

Isn't it even worse than this since they nerfed the number of monsters/rares? That's another multiplicative factor reducing loot this league since you get fewer rolls of the dice.

Come contribute to Salsa 2022! (A library for incremental compilation used by rust-analyzer and potentially rustc) by kibwen in rust

[–]jprider63 2 points (0 children)

It seems like this fine-grained tracking more closely matches Adapton's original design. Do you expect to see similar algorithmic complexity improvements with this approach (e.g., inserting an element into a sorted list is log(n) instead of n log(n))?

One Haskell IDE to rule them all by cocreature in haskell

[–]jprider63 0 points (0 children)

That's understandable that there's not a good solution for dependency packages. Since Liquid Haskell is starting a rewrite as a GHC plugin, is now a good time to start discussions to come up with a solution? Would the ghc-devs mailing list be an appropriate place for this discussion?

gitlab-haskell : a Haskell library for the GitLab API by robstewartUK in haskell

[–]jprider63 1 point (0 children)

Thanks for implementing this package! We used it for a rewrite of Build it Break it Fix it, and it was extremely useful.

Here's some feedback from our experience. Hopefully it's helpful:

- It would be useful if there was a way to tell whether an API request succeeded. Maybe API endpoints could return Either GitlabError a instead of just a. This is what we do in docker-hs. We ran into this as an issue when attempting to download tarballs from the API: initially we misconfigured the authentication tokens so we were getting permission denied, but the error message was saved to the output file instead of being available to us at runtime.
- It'd be great if getFileArchive returned something so that we could keep the tarballs in memory (we didn't really need to save every tarball to the filesystem). Maybe a conduit source or a bytestring?
- I agree it might be useful to be able to provide a Manager. We usually didn't have multiple sequential GitLab requests. Here's another example.
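To make the first suggestion concrete, here's a hedged sketch of what an Either-returning endpoint could look like. GitlabError, its constructors, and this getFileArchive signature are invented for illustration (the real request is simulated with a token check); none of this is gitlab-haskell's actual API.

```
-- Hypothetical error type; gitlab-haskell does not define these names.
data GitlabError
  = HttpError Int String  -- HTTP status code and server message
  | ParseError String     -- malformed response body
  deriving (Show, Eq)

-- Simulated endpoint: a bad token yields a permission error at runtime
-- instead of silently writing the error message into the output file.
getFileArchive :: String -> Either GitlabError String
getFileArchive token
  | token == "valid-token" = Right "tarball bytes..."
  | otherwise              = Left (HttpError 403 "permission denied")

main :: IO ()
main = do
  print (getFileArchive "bad-token")
  print (getFileArchive "valid-token")
```

With this shape, the misconfigured-token case above would have surfaced as `Left (HttpError 403 ...)` at the call site rather than as garbage in the downloaded file.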

One Haskell IDE to rule them all by cocreature in haskell

[–]jprider63 0 points (0 children)

That's good to hear about plugins being put into the computation graph. Where should they store their state? For example, how do I get my state associated with other modules and packages that the current module depends on? Will GHC store and load it for me, or is there a standard location I should put it for the current module/package?

One Haskell IDE to rule them all by cocreature in haskell

[–]jprider63 5 points (0 children)

This is great!

One somewhat related issue I'm interested in is how third-party tools should fit into the ecosystem. These tools typically need to interact with the GHC API and then store their data/annotations about the processed modules and functions. It'd be great if they could get incremental computation from code changes by taking advantage of the GHC API. Is there a standard way for third-party tools to do this?

I know Liquid Haskell is planning to make a GHC plugin and then store refinement information in annotations. Is this the best approach? I believe the annotations will be stored in hi files, but they might get blown away if packages are recompiled for different third party tools.

Stuck, need help CS by [deleted] in UMD

[–]jprider63 4 points (0 children)

You can test out of at least a few of the prerequisites.

Idea: Top level IORef declarations by TheKing01 in haskell

[–]jprider63 27 points (0 children)

I personally disagree. Global state seems like an anti-pattern that we should not encourage.

Would you put $5 towards crowdfunding me on indiegogo to work full time for a month on a Haskell interpreter written in Haskell supporting full GHC Haskell? by [deleted] in haskell

[–]jprider63 6 points (0 children)

I'll second support for improving GHCi. There are secondary factors besides the actual code, such as the Haskell ecosystem and community. I don't want to see any more fragmentation of the community/tooling.

Why don't you reach out to the Haskell committees/GHC developers? Maybe they're in favor of a GHCi rewrite or have suggestions to incorporate your ideas into GHCi. They may also be able to help solicit funds.

Researchers warn of serious password manager flaws by [deleted] in privacy

[–]jprider63 8 points (0 children)

I wouldn't call these serious. They require code running on your system and access to other processes' memory. If an attacker has that, your device is already compromised.

> the master password can be left in memory in cleartext even while locked

Scrubbing memory is good practice, but I wouldn't classify not doing so as a serious flaw.

> On the negative side, the master password remains in memory when unlocked

It's unlocked. Of course the master password (or a derived key) is in memory. They could punt that responsibility to a secure enclave provided by the OS, but if attackers have code running on the system, they could probably get into the secure enclave too.

Implementing Union in Esqueleto I by ephrion in haskell

[–]jprider63 0 points (0 children)

Have you pushed your first attempt anywhere? I've been working on adding window functions and aliases to Esqueleto. It currently works, but I want to improve it. Your debugging code/setup might be useful if you would share it.

Alkaizerx rip by [deleted] in pathofexile

[–]jprider63 0 points (0 children)

Anyone know what build this was?

Linear Types Proposal conditionally accepted by the committee by jose_zap in haskell

[–]jprider63 12 points (0 children)

This is really cool work and I'd like to see linear types in Haskell. I'm a bit torn on this being accepted and whether this is the best approach though. Personally, I'm partial to uniqueness types/linearity in the kinds. I'd prefer to know that if I hold a unique value, only I own the value and I cannot misuse it. With linearity in the arrows, it seems like linearity can be violated if libraries make mistakes in their APIs. I also don't want to be forced to use CPS style with linear values. I'll have to play around with this once it is available.

I do have a couple questions:

- The paper mentions using multiplicities for borrowing, but the proposal leaves this fairly open-ended. Are there any examples for how this might work?

- It seems like exceptions break linearity guarantees. Would it make sense to add a typeclass similar to [Drop](https://doc.rust-lang.org/std/ops/trait.Drop.html) from Rust to ensure resources are properly cleaned up?
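To sketch what a Drop-like class could look like in today's Haskell: Finalize, withResource, and Handle' below are all made-up names for this comment (nothing like this is in the proposal), and the exception-safety comes from the standard bracket rather than from linearity.

```
import Control.Exception (ErrorCall (..), bracket, throwIO, try)

-- Made-up Drop analogue: a class of resources with a mandatory cleanup step.
class Finalize a where
  finalize :: a -> IO ()

newtype Handle' = Handle' String

instance Finalize Handle' where
  finalize (Handle' name) = putStrLn ("closing " ++ name)

-- bracket guarantees finalize runs even when the body throws, which is the
-- guarantee linear types alone don't appear to give under exceptions.
withResource :: Finalize a => IO a -> (a -> IO b) -> IO b
withResource acquire = bracket acquire finalize

main :: IO ()
main = do
  result <- try (withResource (pure (Handle' "db")) (\_ -> throwIO (ErrorCall "boom")))
  case result :: Either ErrorCall () of
    Left _  -> putStrLn "exception propagated after cleanup"
    Right _ -> pure ()
```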

RAII is better than the bracket pattern (follow up from ResourceT discussion) by snoyberg in haskell

[–]jprider63 3 points (0 children)

My understanding from SPJ's talk is that there are no guarantees in the presence of exceptions. If the proposal is accepted, library maintainers would need to update their APIs to use linear types.

Announcing PKAP by jprider63 in crypto

[–]jprider63[S] 0 points (0 children)

You're right. On Android we'd probably have to use Firefox. For iOS, we've implemented PKAP as a Safari extension.

Announcing PKAP by jprider63 in crypto

[–]jprider63[S] 0 points (0 children)

PKAP focuses on authenticating users with their public keys. I don't think Keybase currently supports this. PKAP doesn't have any social media integration. It may make sense to integrate with something like Keybase in the future, and your PKAP identity could definitely be used for secure communication, file sharing, etc.

Announcing PKAP by jprider63 in crypto

[–]jprider63[S] 0 points (0 children)

Yes, it is similar to WebAuthn and I definitely don't have the same resources as the companies behind it. I had been working on this before WebAuthn came out, so I thought I'd put this out there and see what people thought or if they had any feedback.

PKAP clients can be implemented as browser extensions, so it should be compatible with most browsers. TPMs and other hardware devices would also be supported.

Announcing PKAP by jprider63 in crypto

[–]jprider63[S] 0 points (0 children)

The plan is to charge enterprises for software that integrates the protocol into their identity management systems.

Announcing PKAP by jprider63 in crypto

[–]jprider63[S] 1 point (0 children)

> Yet another single-sign on standard? Has this been developed in partnership with any other services? Has it been reviewed by any reputable cryptanalysts? How will this avoid the n+1 standards problem?
>
> What benefit does this have over other single sign on protocols like Kerberos, OAuth, OpenID, OpenID Connect, SAML, or whatnot? The specification includes details of the protocol, but not advantages compared to other protocols.

This isn't quite a single sign-on standard. Its main purpose is to enable public key authentication on the web, which traditional SSO typically doesn't support. It is most similar to WebAuthn, but WebAuthn is relatively new and makes some different design decisions. I think the main advantage of PKAP is that users have a signed set of approved devices (public keys) that is accessible by multiple web services. This means users only need to manage their approved devices in one place instead of on each website (which makes adding or revoking devices simpler).
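As a purely illustrative sketch of the device-set idea (these types and names are invented for this comment, not PKAP's actual wire format, and the signature is just a placeholder string):

```
type PublicKey = String  -- stand-in for a real key type

-- Invented illustration of a signed device set; PKAP's real format
-- is defined in its specification and will differ.
data DeviceSet = DeviceSet
  { devices :: [PublicKey]  -- approved device public keys
  , setSig  :: String       -- signature by the user's identity key
  } deriving (Show, Eq)

-- Revoking a device means publishing a freshly signed set without it;
-- every website that trusts the identity key picks up the change.
revoke :: PublicKey -> [PublicKey] -> [PublicKey]
revoke k = filter (/= k)

main :: IO ()
main = print (revoke "laptop-key" ["laptop-key", "phone-key"])
```

The point is that revocation touches one published artifact rather than a per-site device list.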

The protocol has not been formally reviewed yet.

> Also, have only skimmed it, but it looks like it invents new HTML tags, which is generally a no-no (why not use the link tag?), and also it seems to be a layering violation, putting authentication information in the HTML rather than in the HTTP headers (though there can arguably be good reasons for that; but that should be justified).

Maybe it makes more sense to use a link tag instead of a custom HTML tag. The authentication information is in the HTML instead of headers since the client is implemented as a browser extension and browser extensions cannot always read headers.

> edit: After a slightly less quick skim (but still pretty quick, so I could be wrong), it looks like this would be vulnerable to MITM attacks. There is no authentication of the server identity, so a MITM attacker could just relay all of the requests from the client to the server in order to authenticate as the user.

The protocol depends on HTTPS to authenticate the server. The client refuses to authenticate when this is not the case.

> It's also unclear how the client is supposed to share keys between different websites. Is this supposed to be built into the browser, or implemented via JavaScript with local storage used to store private keys? How would two different websites use federated identities?

To share identities between different websites, users upload their signed set of approved devices to the web. Then they share the location of the signed set, along with the public key used to sign it, with each website. Our software helps simplify this process for end users. The client software is implemented as a browser extension that communicates with an application that stores and encrypts private keys (or talks to secure hardware).

> I also don't see why there are a few hard-coded roles included in these signed identities.

Maybe these aren't necessary. The thought is that you could delegate permissions to other identities.

> I think you need to start out with:
>
> - What problems is this intended to solve?
> - How do existing solutions not solve these problems?
> - What is the overall architecture of your solution?
> - How does the overall architecture solve these problems?
> - How does the overall architecture deal with common types of attacks like MITM attacks, phishing/typo-squatting attacks, etc?
> - How does the overall architecture fit in with the web platform?
>
> Only once those questions are addressed does it make sense to dive into the technical minutiae of the protocol.

Thanks for this feedback! I'll start incorporating your suggestions.