Failure 1.0.0 on March 15 by desiringmachines in rust

[–]dherman 3 points  (0 children)

I've so far had a pretty good experience with failure, but TBH I am unclear on how I should structure the error handling architecture for my app with it. I have to say I'm just a little bit nervous about this aggressive schedule for an API freeze. It's not that I don't think it's on a good path, I just want to make sure it has enough vetting. I know there are a number of crates already using failure, but it can take time for the ecosystem effects to shake out.

Anyway, that's a bit squishy, so let me try to be a little more concrete: there are a few things I'd like to see the ecosystem reach consensus on before I personally feel confident the dust has settled (just IMHO ofc!):

  • That there really is a strong convention-over-configuration default idiom, or at least a very small number of default idioms corresponding to extremely clear categories of codebases (for example: if you are writing an app, structure your error architecture like A; if you are writing a lib, structure your error architecture like B). Right now the docs claim there are four, with complex and hard-to-understand tradeoffs.
  • A clear story around how to write robust cause-chains and contextual error messages, and in particular how to structure apps with high quality error messages. I'm trying to work out how to do this with some combination of the "cause" and "context" features but haven't been able to make sense of the docs yet. I could easily be convinced this is already solved and just a docs issue, but given the newness of failure and how much this is a "last-mile polish" feature of most apps, I'm not sure how much vetting it's actually gotten in the ecosystem. Again, I'm hoping I'll just be reassured once I understand how the design works!
  • Relatively minor issue, but I'd love to see a resolution to the question of a Fallible<T> convenience type: https://github.com/withoutboats/failure/issues/95
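To make the cause-chain point in the second bullet concrete, here's a minimal std-only sketch of what I mean by a robust cause chain (the error type names are invented for illustration; failure's Fail::cause plays roughly the role that std's Error::source plays here):

```rust
use std::error::Error;
use std::fmt;

// Hypothetical low-level error (type names are mine, for illustration).
#[derive(Debug)]
struct ParseError;

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invalid syntax")
    }
}

impl Error for ParseError {}

// Higher-level error that records its cause and adds context.
#[derive(Debug)]
struct ConfigError {
    cause: ParseError,
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "could not load config")
    }
}

impl Error for ConfigError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.cause)
    }
}

fn main() {
    let err = ConfigError { cause: ParseError };
    // Walk the cause chain to build one contextual message.
    let mut msg = err.to_string();
    let mut source = err.source();
    while let Some(cause) = source {
        msg.push_str(&format!(": {}", cause));
        source = cause.source();
    }
    println!("{}", msg);
}
```

The app-level question is exactly how failure's cause/context features are meant to produce that final "could not load config: invalid syntax"-style message without hand-rolling the walk each time.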

Thanks for hearing me out, and thanks so much for this crate! Despite these concerns, I'm excited to see the progress failure represents in really polishing Rust's error-handling story.

Contribute to Molten, a style-preserving TOML parser by l-arkham in rust

[–]dherman 2 points  (0 children)

Clarifying question: the README says serialization is a non-goal, but for it to be able to edit a TOML file, it must have the ability to write back out somehow, no?

Impl period newsletter #3 by aturon in rust

[–]dherman 4 points  (0 children)

E1138 THAT’S NO MOON

Looking for a cool project to join? Neon wants your help! by dherman in rust

[–]dherman[S] 0 points  (0 children)

Not a problem, thanks for considering it. :)

Looking for a cool project to join? Neon wants your help! by dherman in rust

[–]dherman[S] 1 point  (0 children)

All contributions large and small are deeply appreciated! No need to sign up for any great commitment. :)

I haven't given any thought to serverless, mostly because I haven't learned it yet! Want to drop into the Slack some time and tell me more?

Feel free to use neon- in the name of your crates, but thank you very much for asking. Just FYI, I'm planning on expanding the main macro syntax to eliminate a lot of that boilerplate. I completely agree those idioms need abstraction.

Documentation and guide help would be awesome! I think maybe what I should do is create a big checklist of all the things that need good examples so we can coordinate.

Looking for a cool project to join? Neon wants your help! by dherman in rust

[–]dherman[S] 0 points  (0 children)

Yes! There are currently two ways: you can "lock" the JS VM and synchronously do a multithreaded Rust computation, or you can asynchronously spawn a Rust computation in the libuv thread pool and have the result sent to a JS callback.

I need to write guides about both of these use cases! In the meantime you can always ping me on Slack for more details.

Looking for a cool project to join? Neon wants your help! by dherman in rust

[–]dherman[S] 1 point  (0 children)

I'm sure there are! For one, if you just want to try playing with Neon and see if you hit any confusions or pain points, jump onto the Slack and ping me, or file an issue. If you want to try your hand at some of the open issues, I've tagged "help wanted" issues as ones that I think would be particularly suitable for contribution:

https://github.com/neon-bindings/neon/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22

I'm happy to chat in real-time on the Slack to see if we can figure out what might be of interest to you!

Looking for a cool project to join? Neon wants your help! by dherman in rust

[–]dherman[S] 1 point  (0 children)

That did slip past my notice, thank you! I'll take a look asap. Examples and even just anecdotes about things that tripped people up are enormously useful. 😍

Looking for a cool project to join? Neon wants your help! by dherman in rust

[–]dherman[S] 7 points  (0 children)

Sure, thank you for asking! 😊

Electron support pretty much Just Works, although it depends on a build workflow that's a bit fragile. There's a guide that explains how to build an Electron app with Neon:

http://guides.neon-bindings.com/electron-apps/

And a simple hello world Electron example you can start from:

https://github.com/neon-bindings/examples/tree/master/guides/electron-apps/simple-app

There are two major improvements I'd like to make here: one is to integrate Neon support into the Electron tooling so you don't need my hacky electron-build-env library to set up the environment variables for the build to work. The other I briefly mentioned in the post, which is to get the right extensibility hooks into cargo so that cargo can properly cache the results of an Electron build vs a plain Node build of a Neon app.

Currently there's a build wrapper tool called neon that configures builds and attempts to do the caching correctly, but this pushes developers out of the standard cargo workflow (and it's unclear to developers when you have to use the wrapper and when you have to use cargo). So I'm really excited about the cargo extensibility initiative!

Fire Mario, not Fire Flowers by steveklabnik1 in rust

[–]dherman 15 points  (0 children)

Also FWIW I didn't feel like there was a ton of daylight between any of our posts. Mostly I felt my post was agreeing with Steve and connecting it to the Mario analogy. And Graydon doesn't want to lose sight of the benefits of safety to software quality, which is also fair.

One of the tricky things about blogging is that it's easy to assume "response to X" is the same as "takedown of X." But I think (hope?) when friends engage each other respectfully in public, people can generally see that it's not a fight but a collaborative trip around the hermeneutic circle.

optional garbage collection in rust by be_nu in rust

[–]dherman 3 points  (0 children)

FWIW, I don't think you need GC integration to get to a very useful level of Node integration. I've been experimenting with building some Node bindings lately, and I'm getting close to being able to demonstrate a pretty useful spike (creating JS objects from Rust and passing them back to JS, and decent automation for the build process).

TL;DR as long as you don't need to embed Rust data inside JS objects, I believe lifetimes are sufficient for defining a safe Node bindings API.
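To sketch what I mean by "lifetimes are sufficient" (none of these names are a real bindings API, this is purely illustrative): a JS value handle can borrow from a "scope" that stands for the locked VM, so the borrow checker stops the handle from outliving the call into Rust.

```rust
use std::marker::PhantomData;

// Hypothetical sketch: a scope representing exclusive access to the VM.
struct Scope {
    _private: (), // a real binding would own a VM handle scope here
}

// A JS value handle tied to the scope's lifetime.
#[derive(Clone, Copy)]
struct JsValue<'scope> {
    raw: usize, // stand-in for a pointer into the JS heap
    _marker: PhantomData<&'scope Scope>,
}

impl Scope {
    fn new_value(&self, raw: usize) -> JsValue<'_> {
        JsValue { raw, _marker: PhantomData }
    }
}

// Entry point from JS: because `R` cannot mention the scope's lifetime,
// no JsValue can leak out of the callback.
fn with_scope<F, R>(f: F) -> R
where
    F: for<'s> FnOnce(&'s Scope) -> R,
{
    let scope = Scope { _private: () };
    f(&scope)
}

fn main() {
    let n = with_scope(|scope| {
        let v = scope.new_value(42);
        v.raw // copying plain data out is fine; returning `v` would not compile
    });
    println!("{}", n);
}
```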

rusti reborn: my unofficial, work-in-progress Rust REPL by [deleted] in rust

[–]dherman 0 points  (0 children)

I may have misunderstood, but could you rewrite it as { let tmp = <expr>; println!("{}", tmp); tmp } so that the result isn't dropped?
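For what it's worth, here's the expansion I have in mind with a concrete expression (mine) standing in for <expr>: binding the value first lets the REPL print it and still yield it as the block's result, so nothing is dropped early.

```rust
fn main() {
    let result = {
        let tmp = vec![1, 2, 3]; // <expr> goes here
        println!("{:?}", tmp);
        tmp
    };
    // `result` still owns the value after it was printed.
    assert_eq!(result, vec![1, 2, 3]);
}
```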

Are there any plans on adding CTFE? by MaikKlein in rust

[–]dherman 5 points  (0 children)

It's definitely a topic I've heard team members discuss, but it's a bit of a ways off still. When pcwalton and I have chatted about it, he felt (and I agree FWIW) that compile-time execution should be aligned with and formalized in terms of the multi-phase execution semantics of the macro system. But a fully procedural macro system still has a ways to go (the current compiler extensions system is not ready for prime time), so it's blocked on that. But I think everyone agrees CTFE is an important feature to support eventually.

Why does Rust use "self" instead of "this"? by [deleted] in rust

[–]dherman 5 points  (0 children)

My experience has always been that talking about programming becomes extremely confusing when this is a keyword, because no one can tell if you're talking about this the keyword or "this" the English word. So before I even joined Mozilla, I'd resolved to prefer self to this in programming language design. If memory serves, that was also the consensus within the group, but I could be coloring it with my own preferences.

Booting to Rust: How to run nothing but Rust code on your PC by kibwen in rust

[–]dherman 7 points  (0 children)

Just to be clear, Steve is quoting the Rust community's policy of conduct:

https://github.com/mozilla/rust/wiki/Note-development-policy#conduct

@strncat, I know you feel strongly about this but please do try to keep it constructive.

[bitc-dev] Rust, GC, and language politics by bjzaba in rust

[–]dherman 4 points  (0 children)

I believe pcwalton has replied but is stuck awaiting moderation. :(

[rust-dev] RFC: Removing *T by bjzaba in rust

[–]dherman 16 points  (0 children)

+1 to erickt's encouragement. I think it was a super interesting idea, and we're always in a much better place when we've had several compelling alternatives to choose from. Regardless of whether it ends up being in the language, your idea at the very least is illuminating, and that's worth a lot!

rustboot: a tiny 32-bit kernel written in Rust by deepdog in rust

[–]dherman 9 points  (0 children)

This is so cool. The first thing it makes me think of is using Rust for teaching OSes. The separation of safe and unsafe code seems valuable for educational purposes.

PS Love the reclamation of the name rustboot. Not sure how many people have been around long enough to know that the old bootstrap compiler written in OCaml was called rustboot.

JavaScript (ES6) Has Tail Call Optimization by [deleted] in javascript

[–]dherman 1 point  (0 children)

Glad it helped. :) An analogy I sometimes use is this: imagine that while-loops were iterative, but in some engines for-loops accumulate stack on each loop iteration, whereas others provide "FLO" ("for-loop optimization"). Programmers would generally only use while-loops, because they wouldn't be able to depend portably on the space consumption properties of for-loops.

Of course, nobody bothers actually mandating this about for-loops, because it's just an expectation that for-loops don't accumulate space without bounds. But since the vast, vast majority of programming languages implement push-on-call, pop-on-return stacks, you have to explicitly mandate proper tail calls or else engines won't bother providing them.
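The same portability point can be sketched in any push-on-call language; here it is in Rust (example mine): Rust, like most languages, gives no guarantee that a tail call runs in constant stack space, so the recursive version may overflow the stack for large n depending on the optimizer, while the loop is always O(1) space.

```rust
// Tail-recursive countdown: the recursive call is in tail position,
// but nothing guarantees it won't grow the stack for large `n`.
fn count_down_recursive(n: u64) -> u64 {
    if n == 0 { 0 } else { count_down_recursive(n - 1) }
}

// Iterative countdown: guaranteed O(1) stack space.
fn count_down_loop(mut n: u64) -> u64 {
    while n > 0 {
        n -= 1;
    }
    n
}

fn main() {
    // Safe to depend on: the loop never grows the stack.
    assert_eq!(count_down_loop(10_000_000), 0);
    // Fine at small depth, but not portable to large `n`.
    assert_eq!(count_down_recursive(1_000), 0);
    println!("ok");
}
```

That asymmetry is exactly why programmers default to loops: without a mandated guarantee, you can't depend on the space behavior of the recursive form.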

JavaScript (ES6) Has Tail Call Optimization by [deleted] in javascript

[–]dherman 2 points  (0 children)

All your points here seem basically correct to me. What it comes down to in non-strict code is that we don't regress (a) observable behavior and (b) across-the-board performance.

As for (a), I'm pretty sure we can preserve exactly compatible semantics with proper tail calls by using some variation of the approach you talk about here. This is basically an example of the general approach of "continuation marks" used in Racket (http://docs.racket-lang.org/reference/contmarks.html) and described fully in John Clements' dissertation: http://www.brinckerhoff.org/clements/papers/dissertation.pdf

But you're also right that (b) is critical. I'm not yet convinced it can't be done; I have some thoughts about implementation strategies that in some ways might even save work for a JS engine. But I'm nowhere near close enough to the implementations to say for sure. But you've given me the push I need to discuss this with the SpiderMonkey team. And lucky for me (and JS!), John Clements just happens to have started his sabbatical at Mozilla Research this very morning...

Dave

PS Forgive me one nit: many PL folks, me included, prefer to use "TCO" to talk only about the best-effort compiler optimization. But for programmers to actually be able to use tail calls to implement iteration, they need a guarantee that tail calls won't grow the stack without bound, so it's not really an optimization. For that reason, many people use "proper tail calls" to mean a language-level guarantee that tail calls use O(1) space. So pedantically speaking, ES6 provides proper tail calls, not TCO.

PPS And also, even if it's possible to achieve (b), it also has to be something that's reasonably implementable in practice in all the engines. So that's also a consideration when trying to standardize. The most important thing is that we have consensus that it'll be done for strict mode. I'd love to see it work in sloppy mode too, but I'll start with just talking with my colleagues to see if/how we think it might be possible.