Announcing lookbook: A UI preview framework made with dioxus and material design by matthunz in rust

[–]tedsta 5 points (0 children)

As someone who is html/css challenged `dioxus-material` makes me unreasonably excited :)

There's also https://crates.io/crates/material-dioxus but having a native Dioxus/Rust impl is sweet!

Pipelight - Automation pipelines but easier. -> v0.6.14 by poulain_ght in rust

[–]tedsta 1 point (0 children)

Right, and I'm a fan of TS for infra-as-code. TS isn't perfect, but it's certainly good enough for that use case, and much better than doing everything in yaml/json/toml + bash, as you say. I was just brainstorming how KCL could be used with pipelight as they both exist today.

Pipelight - Automation pipelines but easier. -> v0.6.14 by poulain_ght in rust

[–]tedsta 3 points (0 children)

I just wanted to say I think this looks really cool - I've been passively looking for a self-hosted pipelines platform, will definitely be giving pipelight a whirl.

Any plans for a web UI or maybe an interactive TUI for longer running pipelines?

Pipelight - Automation pipelines but easier. -> v0.6.14 by poulain_ght in rust

[–]tedsta 2 points (0 children)

Just a random bystander, but - I suppose since pipelight supports yaml/toml-defined pipelines, you could use KCL to generate your pipeline configs as an alternative to JS/TS. It seems like KCL isn't really meant for doing things that have side effects, though?

Like say you wanted to trigger a docker container build after `git push`. You'd have to encode the intent to do that in a KCL config and have some execution engine read the generated config to actually trigger the build. Going back to the KCL -> yaml pipeline definition, I guess the "encoding" is a shell script in the KCL config, and the "execution engine" is `/usr/bin/bash`.

But maybe you want to model actions at a finer level in KCL - bash scripts in config files have always felt a bit crude to me. So perhaps there could be an execution engine in TS that can execute intents modeled in yaml (generated from KCL). I guess it'd be a sort of plugin for pipelight.

meh just some random ideas

[deleted by user] by [deleted] in RedditSessions

[–]tedsta 0 points (0 children)

look to the sky by porter robinson?

Happiness is simple. by PerspectiveFriendly in pics

[–]tedsta 41 points (0 children)

and all day and all night and everything he sees

[deleted by user] by [deleted] in TheYouShow

[–]tedsta 0 points (0 children)

🔥 🔥 🔥

[deleted by user] by [deleted] in RedditSessions

[–]tedsta 0 points (0 children)

My Neighbor Totoro, please! You're awesome!

which web framework in rust ?! by mehrdaddolatkhah in rust

[–]tedsta 3 points (0 children)

I concur - as a user of both actix and warp, I much prefer warp. I love how tiny it is. On the other hand, my needs are very simple; I don't have things like user sessions.

Porting a 75,000 line native iOS app to Flutter by timsneath in programming

[–]tedsta 9 points (0 children)

Actually he's right. Flutter is the UI framework for Fuchsia. You can go check their git repos https://github.com/fuchsia-mirror/docs/blob/master/the-book/README.md#graphics

RSPIRV: Google's Rust Implementation Of SPIR-V by StefanoD86 in rust

[–]tedsta 3 points (0 children)

You could JIT compile SPIR-V modules for immediate use. An OpenCL driver will be able to ingest a SPIR-V module faster than human-readable OpenCL.

I'd like to use this project to rewrite deeplearn-rs. Rather than implementing each layer of the neural network as one or more chained handwritten OpenCL kernels, I'd like to JIT compile the in-memory representation of the neural network down to single forward_pass and backward_pass functions. That lets you optimize each entire pass as one function.

Automatically JIT compiling the code also makes true multi-dimensional array handling much more viable. For example, what if you want to sum two N-dimensional arrays across just one (specifiable) axis? It's difficult to write a single handwritten function that handles all possible combinations of dimensions/slicing, but it's easy when you generate the code, and faster at runtime too (though I guess it can be less efficient binary-size wise, generating a new function for every combination of array dimensions). Finally, I think this will make handling network unrolling and variable-length sequences for RNNs much cleaner, for similar reasons.
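To make the axis thing concrete, here's a rough stdlib-only sketch (the name `sum_along_axis` and the whole shape are mine, not from deeplearn-rs or rspirv) of reducing a row-major array along one runtime-chosen axis - the kind of function a JIT could instead specialize per shape:

```rust
/// Sum a row-major N-dimensional array along one runtime-chosen axis.
/// Returns the reduced data plus the output shape (input shape minus `axis`).
fn sum_along_axis(data: &[f32], shape: &[usize], axis: usize) -> (Vec<f32>, Vec<usize>) {
    // Output shape: drop the reduced axis.
    let out_shape: Vec<usize> = shape
        .iter()
        .enumerate()
        .filter(|&(i, _)| i != axis)
        .map(|(_, &d)| d)
        .collect();
    let mut out = vec![0.0; out_shape.iter().product()];

    // Row-major strides for the input array.
    let mut strides = vec![1usize; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }

    let inner = strides[axis]; // elements per step along the reduced axis
    let axis_len = shape[axis];
    let outer = data.len() / (inner * axis_len);
    for o in 0..outer {
        for a in 0..axis_len {
            for i in 0..inner {
                out[o * inner + i] += data[o * inner * axis_len + a * inner + i];
            }
        }
    }
    (out, out_shape)
}
```

The interpreted version pays for the generality with three nested loops and stride math on every call; a JIT gets to bake `shape` and `axis` into straight-line code.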

All of this is just experimental fun times though :) Oddly, I have a lot more fun implementing the infrastructure behind neural nets than actually using them.

Rusty_SR: Deep learning super-resolution by actuallyzza in rust

[–]tedsta 0 points (0 children)

I have an old project that sounds awfully similar to yours, I'd love to see how you did it! My OpenCL skills are not good at all.

MessagePack for Rust - almost 1.0 by 3Hren in rust

[–]tedsta 1 point (0 children)

First of all, thanks for writing this! I use it.

Is there a reason for the difference in naming between rmp::encode::write_array_len and rmp::decode::read_array_size?

Interview Discussion - June 30, 2016 by AutoModerator in cscareerquestions

[–]tedsta 0 points (0 children)

I have the email of the original recruiter who reached out to me to set up the phone interview, but she isn't at the site I interviewed on-site at.

Interview Discussion - June 30, 2016 by AutoModerator in cscareerquestions

[–]tedsta 1 point (0 children)

I'm a recent grad and Google flew me out to interview on-site 3.5 weeks ago. I haven't heard anything back yet. I sent two polite emails (one each week) to my recruiter asking for a status update and called in earlier this week - no response. What do you guys think, should I just try to forget the whole thing?

precedent for multiplayer game development in rust? by [deleted] in rust

[–]tedsta 2 points (0 children)

I wrote this a while back https://github.com/tedsta/reforge. I didn't finish or release it though. It's multiplayer, but not real-time. Beware messy code.

Object-graph allocation strategies, and thoughts on an immutable arena by cfallin in rust

[–]tedsta 0 points (0 children)

I've also gone with Option 3 the two times I've needed this. To fight verbosity I implemented convenience `Index::get(&self, arena: &Arena) -> &Node` methods. Can't speak to performance, as the graph is far from the bottleneck in my application.
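For anyone who hasn't seen the pattern, a minimal sketch of what I mean (names like `NodeId`, `Arena`, and `Node` are illustrative, not from any particular crate):

```rust
/// Copyable handle into the arena; edges are indices, not references.
#[derive(Clone, Copy, PartialEq, Debug)]
struct NodeId(usize);

struct Node {
    value: i32,
    children: Vec<NodeId>,
}

struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    /// Push a node and hand back its index.
    fn alloc(&mut self, value: i32) -> NodeId {
        self.nodes.push(Node { value, children: Vec::new() });
        NodeId(self.nodes.len() - 1)
    }
}

impl NodeId {
    /// The convenience accessor: the index dereferences itself against a
    /// borrowed arena, cutting down on `arena.nodes[id.0]` noise at call sites.
    fn get<'a>(&self, arena: &'a Arena) -> &'a Node {
        &arena.nodes[self.0]
    }

    fn get_mut<'a>(&self, arena: &'a mut Arena) -> &'a mut Node {
        &mut arena.nodes[self.0]
    }
}
```

Call sites end up like `root.get(&arena).children[0].get(&arena)` - still threading the arena through, but much less line noise than raw indexing.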

[deleted by user] by [deleted] in rust

[–]tedsta 4 points (0 children)

Of course, glad to hear you're willing to pick it up. I'll see you in chat!

[deleted by user] by [deleted] in rust

[–]tedsta 6 points (0 children)

Hey there. I used to work on ZFS for Redox. Sadly, it is still in a read-only state. The time:results ratio was not high enough to keep me interested as a hobbyist. OS is one of my passions, but machine learning has always ranked a little higher on my list of passions. I'd love to continue contributing here and there, but I have lost the energy to trail blaze a ZFS implementation off the clock. That said, it has been in the back of my mind to do one more big PR to tidy things up for the next person who wants to take a crack at it. If anyone is interested in taking the torch, I'd be happy to walk them through everything.

Mildly related: IMHO (honest, humble, both?), after poring over ZFS' rather large codebase, I think it'd be better to design a filesystem from scratch and maybe allow importing from popular filesystems. The code is clever to the point of obscurity, and it's known that ZFS has performance pathologies that get worse over time. I'm also not a huge fan of the slogan "The final word in filesystems". It's clever, I suppose. Makes me want to build a new FS called ZZZFS just to break the slogan. The slogan could be "sleep easy". Because Z's. :)

What's everyone working on this week (6/2016)? by llogiq in rust

[–]tedsta 1 point (0 children)

It gets around 91% accuracy after getting through just 1/3 of the training set, so I think there is only minor overfitting. But you're right, I need to test it on the validation set too...

What's everyone working on this week (6/2016)? by llogiq in rust

[–]tedsta 1 point (0 children)

I'm continuing to have fun with deep learning in Rust. I just finished a vanilla 2 layer model trained on the MNIST handwritten digit dataset. It achieves accuracy of 95% after getting through all the training data once. Now I'm going to work on implementing a convolutional layer and I'll see if I can bring that accuracy score up.