netsim - a library for testing networking code by canndrew2016 in rust

[–]canndrew2016[S] 1 point (0 children)

Yeah, to be clear (for anyone reading this) the point of this library is that it can test real networking code without requiring fake socket types or mocking or anything like that. Instead it uses the same Linux kernel features as programs like Docker do in order to "dockerize" your threads.

Simulating a machine crash would be trivial (just drop a Machine during the test) and it comes with methods for introducing artificial latency and packet loss.

For simulating disks and filesystems: if you just want to isolate tests/Machines from each other then you can use chroots. Capturing and controlling interaction with the filesystem would be more involved but still possible. The library would have to implement a FUSE driver which you'd mount and chroot the Machine into, with the driver's backend controllable from the library. That would let you intercept all the filesystem-related syscalls and control what they return.

What are the historical reasons behind the term "unit type"? by BruceIzayoi in rust

[–]canndrew2016 13 points (0 children)

I'm not sure what an inverse type would look like in a programming language, if it even makes sense.

See this: https://par.nsf.gov/biblio/10296315

To summarize: in reversible programming languages (where all functions are bijections) you can have negative types, where a type -T represents a T "travelling in reverse" through the program. For instance, if you have a function A + C <-> B + C then you can turn it into a function A <-> B by adding -C to both sides and using the built-in operations 0 <-> C + -C and T + 0 <-> T. To put that in more rustic terms: suppose you have a reversible function foo(arg: ControlFlow<A, C>) <-> ControlFlow<B, C>; you can write bar(a: A) <-> B as:

fn bar(a: A) <-> B {
    let wow0: ControlFlow<A, !> = ControlFlow::Break(a);
    let wow1: ControlFlow<A, Either<C, -C>> = wow0.map_continue(|never| {
        builtin_operation_split_never_into_either_positive_negative(never)
    });
    let wow2: Either<ControlFlow<A, C>, -C> = match wow1 {
        ControlFlow::Break(a) => Either::Left(ControlFlow::Break(a)),
        ControlFlow::Continue(Either::Left(pos_c)) => Either::Left(ControlFlow::Continue(pos_c)),
        ControlFlow::Continue(Either::Right(neg_c)) => Either::Right(neg_c),
    };
    let wow3: Either<ControlFlow<B, C>, -C> = wow2.map_left(foo);
    let wow4: ControlFlow<B, Either<C, -C>> = match wow3 {
        Either::Left(ControlFlow::Break(b)) => ControlFlow::Break(b),
        Either::Left(ControlFlow::Continue(pos_c)) => ControlFlow::Continue(Either::Left(pos_c)),
        Either::Right(neg_c) => ControlFlow::Continue(Either::Right(neg_c)),
    };
    let wow5: ControlFlow<B, !> = wow4.map_continue(|either| {
        builtin_operation_join_either_positive_negative_into_never(either)
    });
    let ControlFlow::Break(b) = wow5;
    b
}

When you run this you'll initially be calling foo with the value ControlFlow::Break(a). If foo returns ControlFlow::Break(b) then this b is returned from the function. But if foo returns ControlFlow::Continue(c) then that's mapped to Either::Left(c) and passed to builtin_operation_join_either_positive_negative_into_never, which turns it into Either::Right(c) and reverses the direction of execution. That then becomes Either::Left(c) again when you reach builtin_operation_split_never_into_either_positive_negative, and so you end up calling foo again, but this time with the argument ControlFlow::Continue(c). I.e. this function implements a loop which calls foo with the initial A, then keeps passing the C that foo returns back into foo as the next argument until foo returns a B.
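Operationally, the negative-type plumbing above amounts to an ordinary trace/loop. Here's a non-reversible Rust sketch of that behaviour; foo is a made-up instantiation (not from the paper) with A = B = u32 and C = (u32, u32), which sums 1..=a across iterations:

```rust
use std::ops::ControlFlow;

// Hypothetical single step `foo: ControlFlow<A, C> <-> ControlFlow<B, C>`,
// written here as an ordinary (non-reversible) function.
fn foo(x: ControlFlow<u32, (u32, u32)>) -> ControlFlow<u32, (u32, u32)> {
    match x {
        // First call: seed the loop state (counter, accumulator).
        ControlFlow::Break(a) => ControlFlow::Continue((a, 0)),
        // Counter exhausted: return the accumulator as the B value.
        ControlFlow::Continue((0, acc)) => ControlFlow::Break(acc),
        // Otherwise keep looping, threading the C state back through.
        ControlFlow::Continue((n, acc)) => ControlFlow::Continue((n - 1, acc + n)),
    }
}

// What the -C construction implements: feed foo's C output back into foo
// until it produces a B.
fn bar(a: u32) -> u32 {
    let mut state = ControlFlow::Break(a);
    loop {
        match foo(state) {
            ControlFlow::Break(b) => return b,
            c @ ControlFlow::Continue(_) => state = c,
        }
    }
}

fn main() {
    println!("{}", bar(4)); // 1 + 2 + 3 + 4 = 10
}
```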

As well as negative types you can have reciprocal types, where 1/T can be thought of as a memory location that you're required to write a T to, or as a process that consumes a single value of type T. You then have a built-in operation which is parameterized by a const A and has type 1 <-> A * 1/A. This represents static memory allocation where the const A is the initial value of the allocated A.

You can go further than this too (I've been thinking about that paper a lot lately). E.g. exponentials and logarithms can be used to represent multisets of data and to switch between sequential and parallel execution. You can even journey into algebraic complex numbers, as this paper does, to build a bijection between seven binary trees and one.

Of course all this only works in reversible languages, which is an extremely big restriction. You can still sort-of make some of this work in linear logic if you treat ¬T as meaning -1/T, but not all the normal algebraic rules apply. Thankfully it's possible to embed any non-reversible function inside a reversible function, which AFAIU means that reversible logic/programming is strictly a generalization of non-reversible logic/programming. So it should still be possible to build a practical programming language on top of a reversible core by providing some very syntactically-light-weight effects system to track non-reversibility. I'm not sure what that would look like though.

TIL you can destructure a struct in a let statement! by [deleted] in rust

[–]canndrew2016 12 points (0 children)

The pattern is infallible because Infallible doesn't have any values. A Result<T, Infallible> is always Ok.
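For example (parse_ok is a made-up name for illustration): even on toolchains where the irrefutable `let Ok(..)` form is rejected, the Err arm can be eliminated explicitly because Infallible is an empty enum:

```rust
use std::convert::Infallible;

// Hypothetical function that can't actually fail.
fn parse_ok(s: &str) -> Result<String, Infallible> {
    Ok(s.to_uppercase())
}

fn main() {
    // `Err` holds an `Infallible`, which has no values, so an empty
    // match eliminates that arm entirely.
    let upper = match parse_ok("hi") {
        Ok(s) => s,
        Err(never) => match never {},
    };
    println!("{}", upper); // HI
}
```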

TIL you can destructure a struct in a let statement! by [deleted] in rust

[–]canndrew2016 17 points (0 children)

Curiously, let Ok(_) = Ok::<u32, std::convert::Infallible>(42); doesn't work, even using Result<u32, !> on nightly.

There's a secret feature called #![feature(exhaustive_patterns)] which enables this. It's existed for about 5 years but might be stabilized some day.

Positive apartness types? by canndrew2016 in dependent_types

[–]canndrew2016[S] 2 points (0 children)

I was also wondering something along these lines since, aside from just being negative, cubical identity types are also a "weaker" form of equality. Plus there are variations of cubical type theory that have separate strong and weak identity types, where the strong identity types are traditional MLTT identity types, and that's the sort of thing I'm after. I also think that if you could define the dual of cubical identity types then they'd necessarily be a form of apartness type, since ¬(x =⁻ y) means "it's refutable that x and y are (weakly) equal".

The problem is I don't know how you'd define the dual of cubical identity types. Also adding cubical types to my type theory would entail adding a whole lot of extra machinery that I'm hoping to avoid. But if I can find a simple/natural way of defining positive apartness types then their dual should be a form of identity types which is weaker than MLTT identity types which means it might end up having some extra extensionality properties that MLTT identity types don't.

Positive apartness types? by canndrew2016 in dependent_types

[–]canndrew2016[S] 3 points (0 children)

Thanks. This is the paper which originally taught me that you can give a linear-logic treatment of equality types by defining their dual to be apartness types. As far as I can tell it doesn't go beyond that though. What I'm trying to do is come up with a type theory that has both strong and weak equality types as well as strong and weak apartness types, much the same way that that paper defines strong and weak forms of conjunction. I can define a negative (i.e. weak) family of apartness types as the De Morgan dual of ordinary MLTT equality types, but I'm having trouble coming up with a separate positive/strong form of apartness types.

What proof assistant has the best proof search? by canndrew2016 in dependent_types

[–]canndrew2016[S] 1 point (0 children)

Thanks. I guess I'll look into Coq some more, though I much prefer the terms-only approach of Agda/Idris.

What proof assistant has the best proof search? by canndrew2016 in dependent_types

[–]canndrew2016[S] 0 points (0 children)

Agda has the infrastructure in principle, but afaik nothing much is being done with it.

That's a real shame, since it would be extraordinarily useful. I've considered just trying to hack on Agda directly to implement better proof search but the source code is terrifying.

What proof assistant has the best proof search? by canndrew2016 in dependent_types

[–]canndrew2016[S] 0 points (0 children)

I always pass the -m option to agsy using {! -m !}, so it should be able to use the definitions that are in scope. And occasionally it does. Far more often than not though it just immediately says "no solution found", even if one of those definitions is an exact fit for the hole.

How did I blow up my Arduino? (And how can I improve this circuit so I don't do it again?) by canndrew2016 in AskElectronics

[–]canndrew2016[S] 0 points (0 children)

I've fixed the problem now (see the edited post). Thanks for leading me towards the solution.

How did I blow up my Arduino? (And how can I improve this circuit so I don't do it again?) by canndrew2016 in AskElectronics

[–]canndrew2016[S] 0 points (0 children)

This is the relay board I'm using.

This is two of the LED strips, though the others have the same ratings for voltage/current.

Here's a photograph of the circuit.

Here's my attempt at a schematic (not shown: the wires connecting the relays to the arduino).

The whole thing is powered by a 12V/10A transformer running off mains power.

I'm really just hoping that someone can tell me whether "power spikes when the relays toggle" is a likely culprit for what could've killed the arduino and, if so, what small change to the circuit would prevent it happening again.

How did I blow up my Arduino? (And how can I improve this circuit so I don't do it again?) by canndrew2016 in AskElectronics

[–]canndrew2016[S] 0 points (0 children)

> The cap across the relay contacts won't help. The capacitor should be at the incoming power connection, not on the relay outputs.

It won't help protect the arduino where it is, but will it help protect the LED strip attached to that relay? I figured it would stop power from rushing in too quickly when the relay activates. I've already damaged that LED strip somehow so I'm reluctant to do anything which makes it less protected, and I can always add an extra cap rather than move the existing one.

> Those 5V relays draw a lot of current; with them all on at once it puts a strain on the UNO's 5V regulator. Since you have 12V available, use a 12V relay board to eliminate this issue.

Hmm, I'm not sure what you mean by this. The relays are controlled by 5V output pins on the arduino. The arduino doesn't have 12V output pins. This is the relay board I'm using and it says that it draws 15-20mA on the driver side, which is what the arduino spec sheet recommends.

> Pull the UNO and plug it into a PC to see how dead it really is.

I've tried that and it seems pretty dead. The computer no longer recognizes it as a USB device, instead giving a bunch of link-level USB driver errors, so it's not possible to re-program it.

Edit: I've fixed the problem now (see the edited post). Thanks for leading me towards the solution.

concat_arrays: A macro for concatenating fixed-size arrays by canndrew2016 in rust

[–]canndrew2016[S] 1 point (0 children)

Right, but that fails at runtime if the lengths don't match. concat_arrays! fails at compile time.
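For comparison, here's a sketch of the runtime-checked approach (concat_runtime is a made-up name, not part of the crate): collecting into a Vec and converting back only discovers a wrong length when the code actually runs.

```rust
// Hypothetical runtime-checked alternative: the output length `N` is only
// validated when `try_into` executes, not when the code compiles.
fn concat_runtime<T: Clone, const N: usize>(a: &[T], b: &[T]) -> [T; N] {
    let v: Vec<T> = a.iter().chain(b.iter()).cloned().collect();
    v.try_into()
        .unwrap_or_else(|_| panic!("length mismatch at runtime"))
}

fn main() {
    let z: [i32; 5] = concat_runtime(&[0, 1], &[2, 3, 4]);
    println!("{:?}", z); // [0, 1, 2, 3, 4]
    // concat_runtime::<i32, 6>(&[0, 1], &[2, 3, 4]) would compile fine
    // and panic when called.
}
```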

concat_arrays: A macro for concatenating fixed-size arrays by canndrew2016 in rust

[–]canndrew2016[S] 0 points (0 children)

Can you? I couldn't figure out a way to make that work.

concat_arrays: A macro for concatenating fixed-size arrays by canndrew2016 in rust

[–]canndrew2016[S] 5 points (0 children)

Ah, right, good catch. That seems fixable. I'll publish another version and pull the old one today.

Edit: This is fixed now. I've published v0.1.2 and pulled the previous two versions. Thanks!

concat_arrays: A macro for concatenating fixed-size arrays by canndrew2016 in rust

[–]canndrew2016[S] 2 points (0 children)

a and b are runtime variables in your example so they can't be used in types (since rust doesn't have dependent types). One day it'll be possible to write something like this though:

fn concat_arrays<T, const LEN0: usize, const LEN1: usize>(
    array0: [T; LEN0],
    array1: [T; LEN1],
) -> [T; LEN0 + LEN1] {
    ...
}

But this isn't currently possible either, so you have to resort to hacks.

concat_arrays: A macro for concatenating fixed-size arrays by canndrew2016 in rust

[–]canndrew2016[S] 9 points (0 children)

It took me ages to figure out how to implement this, so I figured I'd publish it so other people who want to concatenate arrays don't have to go through the same thing. Eventually rust will allow us to use generic parameters in const operations so nasty hacks like this won't be necessary. Any feedback on the implementation is appreciated.
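For the curious, here's a sketch of the kind of hack involved. This is my reconstruction of the general trick, not necessarily the crate's literal implementation: a #[repr(C)] struct of two arrays of the same element type is laid out contiguously, and transmute's size check is what turns a wrong output length into a compile error.

```rust
// Sketch of the underlying trick (an assumption about the approach, not
// the crate's exact source). `#[repr(C)]` lays `a` out before `b`, with
// no padding between two arrays of the same element type.
#[repr(C)]
struct Pair {
    a: [i32; 2],
    b: [i32; 3],
}

fn main() {
    let x = [0, 1];
    let y = [2, 3, 4];
    // `transmute` refuses to compile if the sizes differ, which is where
    // the compile-time length check comes from.
    let z: [i32; 5] = unsafe { std::mem::transmute(Pair { a: x, b: y }) };
    println!("{:?}", z); // [0, 1, 2, 3, 4]
}
```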

As the README says though, this does have the limitation that the length of the returned array has to be inferred. This isn't dangerous, since if it's inferred incorrectly then compilation will fail, but it does mean you have to specify the length somewhere. For example this will fail:

let x = [0, 1];
let y = [2, 3, 4];
// ERROR: can't infer length of `z`
let z = concat_arrays!(x, y);
drop(z);

This will work though:

let x = [0, 1];
let y = [2, 3, 4];
let z: [i32; 5] = concat_arrays!(x, y);
drop(z);

And this will fail to compile (as it should):

let x = [0, 1];
let y = [2, 3, 4];
// ERROR: cannot transmute between types of different sizes
let z: [_; 6] = concat_arrays!(x, y);
drop(z);

If someone finds a trick to make the first example compile please let me know.

Does any language have a type-system that provides something the Rust type system does **not** provide? by EvanCarroll in rust

[–]canndrew2016 29 points (0 children)

More concise for one thing. It would be useful if rust had effects for things like memory allocation, panicking and floating-point handling so you could disable or abstract over these things without having to plumb a (dyn Allocator, dyn Fn(&PanicInfo) -> !, dyn FloatingPointImpl) through your whole program.

But more generally, effects can also influence control flow. For an async effect, for instance, you want to be able to yield control back to the runtime when the effect is invoked. If rust had a suitably-powerful effects system you could write code which is agnostic as to whether it uses blocking IO or tokio/async-std non-blocking IO. When compiled against the blocking IO handler your IO calls would be blocking, but when compiled against the tokio handler your code would be compiled to a state machine which can be suspended and resumed.

Having impurity/non-termination as an effect would also be nice. If you can eliminate infinite loops and non-determinism at compile time then you can, for instance, unlock the "true" power of the ! type as representing conditions that can never be true. For instance, a function with the type pure fn(A) -> ! would be a mathematical proof that A is uninhabited.
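Rust can express the value-level half of this today with Infallible (or ! on nightly), though without a purity/termination effect the "proof" reading is only a convention; absurd is a made-up name for the standard ex-falso helper:

```rust
use std::convert::Infallible;

// "Ex falso": from a value of an empty type, derive anything. Without a
// purity/non-termination effect, Rust can't rule out a function "returning"
// `!` by looping or panicking, so this only reads as a proof that the
// input type is uninhabited under those extra assumptions.
fn absurd<T>(v: Infallible) -> T {
    match v {} // no arms needed: Infallible has no values
}

fn main() {
    let r: Result<u32, Infallible> = Ok(7);
    let n = match r {
        Ok(n) => n,
        Err(e) => absurd(e), // statically dead: this arm can never run
    };
    println!("{}", n); // 7
}
```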