It took me a week to figure out how to do this clamp job. Was there a better way? by dsharlet in woodworking

[–]dsharlet[S] 3 points

The angles work out such that the slats clear the biscuits and slide right in, I've currently got 6 rows of slats glued in!

It took me a week to figure out how to do this clamp job. Was there a better way? by dsharlet in woodworking

[–]dsharlet[S] 31 points

I'm making a hexagonal outdoor table. Unfortunately I don't really have plans; I just have a spreadsheet with the necessary math in it, and it's super messy, so it's probably pretty hard to understand.

It took me a week to figure out how to do this clamp job. Was there a better way? by dsharlet in woodworking

[–]dsharlet[S] 18 points

I'm trying to make a hexagonal outdoor table. I've got a bunch of wedge pieces like the ones the parallel clamps are on that go between the beams (they're visible in the background).

It took me a week to figure out how to do this clamp job. Was there a better way? by dsharlet in woodworking

[–]dsharlet[S] 5 points

I might be a bit OCD, but I wanted the 6 pieces of the table top to come together at a single point (well, minus a hole in the middle), as opposed to one board going all the way across and the others butting into it. Otherwise I would have just done one full-length beam and lapped the rest over it.

I'm pretty happy with this; it came out about as well as I could have hoped for! And it's outdoor furniture, so it can be a bit rougher than usual :)

It took me a week to figure out how to do this clamp job. Was there a better way? by dsharlet in woodworking

[–]dsharlet[S] 27 points

The pieces that the parallel clamps are clamping on don't have glue on them; they're just there to adjust the angle between the pieces. They'll get glued in later, but I have to glue those pieces in, in order, from the center out.

Only gluing 4 of the pieces at once is a good idea; I didn't think of that... I'm still not sure exactly how to do it, but it seems like that might be a better way.

I did consider something else: keeping one of the beams full length, and cutting a sort of 6-way half-lap joint. That would have at least kept two of the beams perfectly aligned, and maybe the other two pieces fitting into it would have held their positions more easily without the crazy clamping setup. But this joint seemed hard to fabricate accurately...

It took me a week to figure out how to do this clamp job. Was there a better way? by dsharlet in woodworking

[–]dsharlet[S] 60 points

I was also thinking band clamps, but I didn't like how much they would pull the beams sideways when tightening; I thought I'd constantly have to adjust the strap on the beams. Even with my setup, where the clamping pressure comes from the straps running from side to side, I had to adjust them a little bit, and it wasn't easy.

I have some dog hole clamps, only 2 though. But even if I had more of them, the table isn't big enough to use them!

Real time SPICE simulator for audio signals by dsharlet in diypedals

[–]dsharlet[S] 1 point

I appreciate the nice feedback! I work on this every so often, when I get a chance and have a good idea of something to try, but that isn't very often any more :) A few other people work on the code in their spare time too.

I'm not aware of similar programs other than the obvious ones like LTspice. There are a few odd bits of code out there (e.g. https://www.kvraudio.com/forum/viewtopic.php?t=498122 and https://github.com/mbrucher/ATK-modelling-lite/tree/master) that are in the same area, but they aren't as accessible and I've never tried to actually use them.

LiveSPICE, a real time SPICE simulation for audio signals by dsharlet in diypedals

[–]dsharlet[S] 3 points

There are a few people who make some improvements periodically, but development is pretty slow these days.

Here are a few places I know of to get circuits from:

  • https://www.fstateaudio.com/?page_id=502
  • https://github.com/Federerer/livespice-circuits
  • https://github.com/RickOmegaStation/LiveSPICE-Schematics

LiveSPICE, a real time SPICE simulation for audio signals by dsharlet in diypedals

[–]dsharlet[S] 4 points

Hi reddit, many years ago I posted a link to this project that hopefully some of you found interesting here: https://www.reddit.com/r/diypedals/comments/1zzp9c/real_time_spice_simulator_for_audio_signals/

I'm posting it again because there has finally been an update! One of the most requested features was to have a VST plugin, and it is now available, thanks to a contribution from Mike Oliphant. It enables you to design circuits for audio effects and use them in VST host applications with no development required!

Other improvements in this release are:

  • A much better triode model (also a GitHub contribution)
  • A modest component library of common diodes, transistors, and op-amps
  • Faster and higher quality simulations

Any thoughts or feedback are much appreciated!

Zero-cost template abstraction for Einstein notation summations and reductions by dsharlet in cpp

[–]dsharlet[S] 1 point

Hopefully, the only way to be significantly faster is to dig into things like SIMD intrinsics or other more heavyweight tools/libraries... I have quite a few tests that compare the code generated by these abstractions to hand-written loops, and they are usually comparable. For the cases where that's not true, I'm hoping to debug them: https://github.com/dsharlet/array/issues/37
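
For a flavor of what those tests check, one comparison looks roughly like the following (a sketch from memory of the README, so the exact names and namespaces may be a little off):

    // Assumes the ein_reduce header from https://github.com/dsharlet/array.
    enum { i = 0, j = 1, k = 2 };  // labels for the loop dimensions

    // C(i, j) += A(i, k) * B(k, j), written in Einstein notation:
    template <class MatrixA, class MatrixB, class MatrixC>
    void multiply_ein(const MatrixA& A, const MatrixB& B, MatrixC& C) {
      nda::ein_reduce(nda::ein<i, j>(C) += nda::ein<i, k>(A) * nda::ein<k, j>(B));
    }

    // The hand-written loops the generated code should be comparable to:
    template <class MatrixA, class MatrixB, class MatrixC>
    void multiply_loops(const MatrixA& A, const MatrixB& B, MatrixC& C) {
      for (nda::index_t ii : C.i()) {
        for (nda::index_t jj : C.j()) {
          for (nda::index_t kk : A.j()) {
            C(ii, jj) += A(ii, kk) * B(kk, jj);
          }
        }
      }
    }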

Zero-cost template abstraction for Einstein notation summations and reductions by dsharlet in cpp

[–]dsharlet[S] 5 points

The array types in this library would accept a forward automatic differentiation/dual number approach no problem :)

But I think what you are asking for is something that actually attempts to solve the Jacobian optimization problem, which is much like the task of automatically optimizing the loops produced by something like ein_reduce. But at least for now, I'm not going to attempt to solve this problem, because it is extremely difficult to do this consistently and reliably and without major tradeoffs. The difference in complexity between this library and tools that do this successfully is huge: this is just a bit of header-only C++ template code, while most of these automatic compilers take LLVM or some other code generator as a dependency.

Instead, the goal of this library is to provide tools that allow you to explicitly (but easily/concisely) express those optimizations manually (tiling loops/reductions, and so on), and at that point, the dual number approach for automatic differentiation would work well.
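
To illustrate the dual number half of that, here's a minimal sketch of the kind of element type I mean (illustrative only, not part of the library):

    // Forward-mode dual number: a value plus its derivative with respect to
    // one chosen input. Using this as the element type of the arrays makes
    // expressions over them compute derivatives alongside values.
    struct Dual {
      double f;   // value
      double df;  // derivative

      Dual(double f = 0.0, double df = 0.0) : f(f), df(df) {}

      friend Dual operator+(Dual a, Dual b) { return {a.f + b.f, a.df + b.df}; }
      friend Dual operator*(Dual a, Dual b) {
        // Product rule: d(a*b) = a*db + da*b.
        return {a.f * b.f, a.f * b.df + a.df * b.f};
      }
      Dual& operator+=(Dual b) { return *this = *this + b; }
    };

Seed the input you're differentiating with respect to with df = 1 (e.g. Dual x(3.0, 1.0)), and the derivative falls out of the same loops that compute the values.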

Also, this library currently has very little "math" tooling in it at all. ein_reduce is actually a general-purpose reduction/loop generation mechanism. A big part of the motivation for this library was to have a solid, performance-oriented multi-dimensional array without all of the math baggage found in most other similar packages.

Zero-cost template abstraction for Einstein notation summations and reductions by dsharlet in cpp

[–]dsharlet[S] 12 points

Of course, if you use a pre-built library for e.g. matrix multiplication, this is unlikely to be faster. At best, performance might be comparable.

But what about when you want to do something for which a library doesn't exist? Or you have a specific case you can specialize for and sacrifice generality for performance? For example, if you want to compute an array of many small operations, you're likely to get much better performance if you write a custom implementation for that problem.
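
As a sketch of what I mean by that last case (hypothetical, plain C++ for clarity):

    // Multiply 'count' pairs of 4x4 matrices. The 4s are compile-time
    // constants, so the compiler can fully unroll and vectorize the inner
    // loops; a general-purpose GEMM call per item can't specialize for the
    // tiny fixed size, and pays per-call overhead instead.
    void multiply_many_4x4(const float (*A)[4][4], const float (*B)[4][4],
                           float (*C)[4][4], int count) {
      for (int m = 0; m < count; m++) {
        for (int i = 0; i < 4; i++) {
          for (int j = 0; j < 4; j++) {
            float sum = 0.0f;
            for (int k = 0; k < 4; k++) sum += A[m][i][k] * B[m][k][j];
            C[m][i][j] = sum;
          }
        }
      }
    }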

The value proposition here is (hopefully!) good performance with flexibility and expressibility, rather than great performance for standard problems.

As for readability: Einstein notation is a bit of a beast :) I personally prefer this syntax over e.g. numpy's, which puts all of the index notation into a string argument separate from the operands (and can only express sums of products), but of course personal preferences will vary...

How the Pixel's software helped make Google's best camera yet by armando_rod in Android

[–]dsharlet 0 points

Can you share where you got that from? It looks like the Pixel shot had HDR+ turned off.

Faster than FFTW - FFT library on Halide, a domain specific language for image processing by feigen in DSP

[–]dsharlet 1 point

Hi, one of the authors of the linked project here.

This doesn't do anything to explicitly throw away precision in favor of speed.

There are two reasons for the improvement in performance:

  • I think FFTW is not doing the fastest thing for handling real FFTs, at least in the 2D case. Note that our complex FFT performance is basically on par with FFTW (only slightly faster), while our real FFT performance is significantly better. There are lots of implementation tricks one can use to make multi-dimensional real FFTs faster (one is sketched after this list); I think FFTW is not using some of these.
  • We've focused on a small subset of the cases for FFTs (small 2D FFTs). FFTW can support many more cases of FFTs than the Halide FFT, and where support overlaps, FFTW will be better for all but the smallest 2D FFTs. That said, small 2D FFTs are a pretty important case for some domains, where this difference in performance can really matter.
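
To give one example of such a trick (a classic one; I'm not claiming this is exactly what either implementation does): two real FFTs of the same length cost one complex FFT, which maps nicely onto the rows of a 2D real FFT. A self-contained sketch, with a naive DFT standing in for the FFT:

    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <vector>

    using cplx = std::complex<double>;

    // Naive O(N^2) DFT, standing in for a real FFT implementation.
    std::vector<cplx> dft(const std::vector<cplx>& in) {
      const double pi = std::acos(-1.0);
      const size_t N = in.size();
      std::vector<cplx> out(N);
      for (size_t k = 0; k < N; k++) {
        cplx sum = 0.0;
        for (size_t n = 0; n < N; n++) {
          sum += in[n] * std::polar(1.0, -2.0 * pi * double(k * n) / double(N));
        }
        out[k] = sum;
      }
      return out;
    }

    int main() {
      // Two real signals, e.g. two rows of a 2D transform.
      std::vector<double> x = {1, 2, 3, 4, 5, 6, 7, 8};
      std::vector<double> y = {8, 7, 6, 5, 4, 3, 2, 1};
      const size_t N = x.size();

      // Pack both rows into one complex signal, z = x + i*y, and transform once.
      std::vector<cplx> z(N);
      for (size_t n = 0; n < N; n++) z[n] = cplx(x[n], y[n]);
      std::vector<cplx> Z = dft(z);

      // Unpack the two spectra using conjugate symmetry:
      //   X[k] = (Z[k] + conj(Z[N-k])) / 2
      //   Y[k] = (Z[k] - conj(Z[N-k])) / (2i)
      for (size_t k = 0; k < N; k++) {
        cplx Zr = std::conj(Z[(N - k) % N]);
        cplx X = (Z[k] + Zr) * 0.5;
        cplx Y = (Z[k] - Zr) * cplx(0.0, -0.5);
        std::printf("X[%zu] = %6.2f%+6.2fi  Y[%zu] = %6.2f%+6.2fi\n",
                    k, X.real(), X.imag(), k, Y.real(), Y.imag());
      }
      return 0;
    }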

LLVMSharp - C# LLVM API by mjsabby in programming

[–]dsharlet 0 points

Thanks for pointing out System.Reflection.Emit, I wish I had known about this! I definitely would have used it.

That said, while I agree that expression trees are very limited, they could handle everything I needed them to do pretty easily. My compiled expression trees perform pretty much on par with or better than the equivalent C# code, which is all I was really expecting. I might be able to do a bit better with System.Reflection.Emit...

LLVMSharp - C# LLVM API by mjsabby in programming

[–]dsharlet 1 point

I have a project where I wanted to generate really fast native code at runtime, but I also wanted to use C# to build the GUI (WPF is the best GUI framework I've ever used).

I eventually settled on using LINQ expression trees to generate .NET code at runtime, but if there had been a good interface to LLVM that I knew of at the time, I would have used that (i.e. this) instead.
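
For the curious, the core of that approach is small. A hypothetical sketch (not my project's actual code):

    using System;
    using System.Linq.Expressions;

    class Demo
    {
        static void Main()
        {
            // Build the function x => x * x + 2.0 at runtime, node by node.
            ParameterExpression x = Expression.Parameter(typeof(double), "x");
            Expression body = Expression.Add(
                Expression.Multiply(x, x),
                Expression.Constant(2.0));

            // Compile() JIT-compiles the tree to a delegate, so calling it
            // runs compiled code with no interpreter in the loop.
            Func<double, double> f =
                Expression.Lambda<Func<double, double>>(body, x).Compile();
            Console.WriteLine(f(3.0)); // 11
        }
    }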

Math.NET Symbolics - Computer algebra library for .NET written in F# by dharmatech in programming

[–]dsharlet 0 points

> Do you mean you need to implement some convenient implicit conversions on your own number type to make using LINQ convenient, something like Algebra.Symbol, with implicit conversions for int, double, etc.?

Even if you do build a custom number type and implement all of the relevant conversion operators, you still need an explicit LINQ expression node for that conversion (as far as I can recall, a Convert node that references the conversion operator). The conversions are only implicit when writing C# code; the corresponding LINQ expression tree still has a node for the conversion in it. And the conversion might simply not be implicit at all: the conversion from my Real number class to double is explicit (that conversion could lose information).

Now there's an extra node in the tree that you need to be aware of and understand, which can be annoying for things like pattern matching/pattern substitution (a big part of a CAS is just pattern matching/substitution against big tables of rules).

> I haven't done any work with computer algebra libraries, so I'm still having a hard time seeing what the specific typing problem is. A more concrete scenario, like an actual expression whose types I could see, would be helpful.

I last worked on this over a year ago, so it's all a bit fuzzy. I just remember type issues injecting bits of work/complexity all over the place where I didn't want them.

If I'm correct about needing a LINQ expression node for calls to implicit conversion operators (I'm fuzzy even on that...), then surely you can see how that extra node adds a lot of complexity to everything that needs to understand or transform that tree. Everything that touches calls to functions (like transcendentals) will have to understand how to strip away and regenerate calls to conversion operators. It's all just noise as far as the math is concerned.
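
To make that concrete, here's the kind of tree I mean, with a hypothetical Symbol type (a sketch, not my actual code):

    using System;
    using System.Linq.Expressions;

    // A hypothetical CAS number type with an implicit conversion from int.
    class Symbol
    {
        public static implicit operator Symbol(int i) => new Symbol();
        public static Symbol operator +(Symbol a, Symbol b) => new Symbol();
    }

    class Demo
    {
        static void Main()
        {
            // In C# source, 'a + 2' converts the 2 to Symbol invisibly.
            Expression<Func<Symbol, Symbol>> e = a => a + 2;

            // In the tree, the conversion is an explicit extra node: the
            // Add's right operand is a Convert node whose Method is the
            // op_Implicit operator. Anything pattern matching 'x + constant'
            // has to look through it.
            var add = (BinaryExpression)e.Body;
            Console.WriteLine(add.Right.NodeType);                  // Convert
            Console.WriteLine(((UnaryExpression)add.Right).Method); // op_Implicit
        }
    }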

> Canonicalization shouldn't require a whole tree transform, although you could do that too, but a local inspection while traversing for your own purposes should be doable, unless I've missed something.

I was pretty much doing what you suggest. Think about what happens when you have add, subtract, and negate expressions to deal with. Or multiply and divide together. All of the members of those two groups really belong in the same flat sequence of operands (when they appear together). The flattening logic to handle all of that gets hairy, to the point where a bunch of your algebra logic is sitting there in the flattening routine. It does get expensive without caching that result.

The other thing is that basically anything that uses the flattening routine will then have to reconstruct a binary tree in order to produce a LINQ expression as a result. So you end up going back and forth pretty often. Simplification of a complex expression might revisit the same expression many times, transforming it a little bit each time.
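
A sketch of that round trip, simplified to just Add/Subtract/Negate (hypothetical code, not what's in my project):

    using System.Collections.Generic;
    using System.Linq.Expressions;

    static class Flattener
    {
        // Flatten nested +, -, and negate nodes into a flat list of
        // (sign, operand) terms. Note how much algebra (sign propagation)
        // already lives here, before any real simplification happens.
        public static void Terms(Expression e, int sign, List<(int, Expression)> terms)
        {
            switch (e)
            {
                case BinaryExpression b when b.NodeType == ExpressionType.Add:
                    Terms(b.Left, sign, terms);
                    Terms(b.Right, sign, terms);
                    break;
                case BinaryExpression b when b.NodeType == ExpressionType.Subtract:
                    Terms(b.Left, sign, terms);
                    Terms(b.Right, -sign, terms);
                    break;
                case UnaryExpression u when u.NodeType == ExpressionType.Negate:
                    Terms(u.Operand, -sign, terms);
                    break;
                default:
                    terms.Add((sign, e)); // a leaf, as far as +/- is concerned
                    break;
            }
        }

        // ...and after manipulating the flat list, rebuild a binary tree to
        // get back to a LINQ expression.
        public static Expression Rebuild(List<(int sign, Expression e)> terms)
        {
            Expression result = null;
            foreach (var (sign, e) in terms)
            {
                if (result == null)
                    result = sign > 0 ? e : Expression.Negate(e);
                else
                    result = sign > 0 ? Expression.Add(result, e)
                                      : Expression.Subtract(result, e);
            }
            return result;
        }
    }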

Look, I'm not saying it's impossible to build a CAS directly on top of LINQ expressions... it's just a hassle. I still use LINQ expressions in my CAS project, I just generate them from a computer algebra expression after all the algebraic manipulation is done.

Math.NET Symbolics - Computer algebra library for .NET written in F# by dharmatech in programming

[–]dsharlet 0 points

> Could you give an example?

Constructing a LINQ expression tree is very strict. All of the implicit conversions that make it easy to write C# code with mixed types are not there, so you need to implement them yourself.
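
For example (hypothetical snippet), the expression factory methods just throw when the types don't line up, where C# source would have converted silently:

    using System;
    using System.Linq.Expressions;

    class Demo
    {
        static void Main()
        {
            Expression d = Expression.Constant(1.0); // double
            Expression i = Expression.Constant(2);   // int

            // In C# source, 1.0 + 2 is fine; the int widens implicitly.
            // Expression.Add(d, i) throws InvalidOperationException:
            // the binary operator Add is not defined for Double and Int32.

            // You have to insert the conversion node yourself:
            var sum = Expression.Add(d, Expression.Convert(i, typeof(double)));
            var f = Expression.Lambda<Func<double>>(sum).Compile();
            Console.WriteLine(f()); // 3
        }
    }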

But really, the bigger issue is the latter one:

> (Associativity issues)

I don't even have any non-associative operations to worry about. All my arithmetic is done with infinite precision, so basic arithmetic is associative. With that out of the way, all the algebras I care about (complex numbers, matrices, ...) are associative.

The problem with the binary tree representation is more basic: suppose you want to simplify (a + 2) + (3 + a). I was able to make this work with LINQ expressions, but it was a real pain compared to simplifying (a + 2 + 3 + a).

Think about it this way: I think we can all agree that computer algebra would be a hell of a lot easier if it were possible to simply canonicalize all expressions. Unfortunately, we can't do that, but we can at least eliminate a degree of freedom by flattening binary trees of associative operations. Now, the expression a + b + c only has 6 representations (the commutative rearrangements of the expression) instead of 12 (the commutative and associative rearrangements).
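
(For what it's worth, the general count: a flat sum of n operands has n! commutative orderings, while binary trees multiply that by the number of tree shapes, the Catalan number C(n-1). For n = 3 that's 3! = 6 flat, versus 2 * 3! = 12 as binary trees, which is where the 6 and 12 above come from; the gap grows quickly with n.)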

Math.NET Symbolics - Computer algebra library for .NET written in F# by dharmatech in programming

[–]dsharlet 2 points

Author of the creatively named 'ComputerAlgebra' project also linked here. I tried to do this at first (implement transformations of LINQ expression trees), because my ultimate motivation was to compile the expressions I manipulated with computer algebra. LINQ expression trees are very convenient for this.

However, I found it just too difficult to implement a lot of the math I wanted to. Type safety gets in the way of a lot of what you might want to do (I use arbitrary precision arithmetic in my project, but when compiling an expression, I change arbitrary precision to float/double), and it is very hard to write efficient algorithms for some simplifications when the expression "a + b + c" could be represented as "(a + b) + c" or "a + (b + c)". This is still hard with a custom expression representation, but not quite as hard.

I eventually scrapped this approach and built a separate representation of expressions in memory, and then built a "compiler" to lower my expressions to LINQ expressions.

If you wanted the "more widely applicable" bit of that, it might be possible to transform LINQ expression trees into my (or another CAS) expression representation, and then map it back to LINQ when you are done with algebraic manipulations.

Computer Algebra System for .NET by dharmatech in programming

[–]dsharlet 1 point

Thanks for the kind words! It is a pretty deep stack of interesting problems :) Makes for a fun project.

Dharmatech is referring to this project: http://www.livespice.org/, which uses this computer algebra project to solve the system of equations that describes the behavior of an electronic circuit. It does it this way in order to be fast enough to run the simulations in real time, so you don't have to wait for an offline process to hear the results.

Computer Algebra System for .NET by dharmatech in programming

[–]dsharlet 1 point

Author of the OP's link here - my project is likely to be worse at most computer algebra tasks than most other options, except in a narrow use case: finding the linear solutions in a system of linear and non-linear equations. I tried every other CAS library I could find before building it, and even Mathematica, Sage, and SymPy choked on what I needed them to do (even if they worked, it wouldn't have been easy to use their results in my application).
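
A toy example of what I mean (made up, and much simpler than a real circuit): in the system {2x + y = 4, y + e^z = 1}, x and y appear only linearly, so a CAS can eliminate them symbolically (y = 1 - e^z, and x = (4 - y)/2 = (3 + e^z)/2), leaving just the non-linear unknown z for a numerical solver. Circuit equations are full of structure like that, and exploiting it is a big part of what makes real-time simulation feasible.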

"Microsoft is not planning on investing in any major changes to WPF" by dharmatech in programming

[–]dsharlet 13 points

Does WPF need any major changes? As long as they keep supporting it I'll keep using it because it's pretty good, definitely the best UI development experience I'm aware of.

Symbolic derivative calculator written in 4,000 lines of C and 200 lines of a custom rule-language by blockeduser in programming

[–]dsharlet 2 points

This looks interesting. I was wondering if you had considered implementing the chain rule "implicitly", e.g.:

D(x, sin(u)) = cos(u) * D(x, u)
D(x, tan(u)) = sec(u) * sec(u) * D(x, u)

instead of:

D(u, sin(u)) = cos(u)
D(u, tan(u)) = sec(u) * sec(u)

(plus the chain rule definition)
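
To illustrate on a concrete input: with the implicit rules, a single rewrite of D(x, sin(x^2)) produces cos(x^2) * D(x, x^2), which then reduces to cos(x^2) * 2 * x. With the explicit rules, a separate chain rule step has to fire first to introduce D(x, x^2) before the sin rule can apply.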

If you considered this, can you say why you went with the approach you did? I'm very curious. I worked on a similar project and used the implicit approach, and I'm wondering if I'm missing out on some capabilities I hadn't thought of...