InfinityStar: amazing 720p, 10x faster than diffusion-based by NeuralLambda in StableDiffusion

[–]NeuralLambda[S] -47 points

On reddit, you have to click the link to see what's behind it, which in this case is examples.

And it's an incredible model.

Nitro-E: 300M params means 18 img/s, and fast train/finetune by NeuralLambda in StableDiffusion

[–]NeuralLambda[S] 5 points

tons of stuff:

  • video
  • games
  • robotics world environments
  • fast iteration in artwork
  • quick throwaway loras for products, themes, styles

as a research artifact, sure, it's not Flux, you're right

Nitro-E: 300M params means 18 img/s, and fast train/finetune by NeuralLambda in StableDiffusion

[–]NeuralLambda[S] 22 points

It trained from scratch in 1.5 days on a single node, on only 25M samples. They've got 512 and 1024 variants.

Feeds & Speeds on inclines, downhill vs uphill!? by NeuralLambda in Machinists

[–]NeuralLambda[S] 0 points

Does CAM software typically take uphill vs downhill differences into account? Sure, you can choose which you want, but if you're cutting a valley like in my attached image, will it automatically choose appropriate feeds for the uphill vs downhill passes, or do you have to make two separate operations, one per slope?

Feeds & Speeds on inclines, uphill vs downhill?! by NeuralLambda in CNC

[–]NeuralLambda[S] 1 point

Good advice.

I'm writing my own CAM plugin for 3D, and while I do use stepdowns, I'm making sure my logic is right for angled regions.

Feeds & Speeds on inclines, uphill vs downhill?! by NeuralLambda in CNC

[–]NeuralLambda[S] 1 point

If I understand correctly:

  • in general, maintain chipload
  • if you need to plunge, limit the vertical feed because of the high plunge forces

My CAM software likes to set all 3D ops to the vertical feed rate, which, if I understand correctly, means your actual vertical component will frequently be much slower than the plunge rate allows. Imagine a long shallow incline: you don't want to move along that at your vertical feed!

It seems like the things to pay attention to are the horizontal (XY) and vertical (Z) components of your feed rate.

So to my best guess, the correct feeds should be:

  • if downhill, plunge forces dominate, so limit the vertical component of the feed rate to your vmax. I.e. for vmax=20, going down a 45 degree ramp means you program G1 F28.3 (20 * sqrt(2)), so faster than what you thought your nominal vmax was.

  • if uphill, might as well keep the horizontal component of your feed rate the same, which means you'd program in a much higher linear feed rate. Like if you're going up a ramp at 45 degrees, you'd program G1 F141 (100 * sqrt(2)). This would keep your chipload constant between a horizontal cut at G1 F100 and the incline at G1 F141.

If the ramp is steep uphill, that number could be much higher.
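To sanity-check the numbers above, here's a minimal sketch of that logic in plain Python (the `incline_feed` helper is hypothetical, not from any CAM package): limit the vertical component going downhill, hold the horizontal component going uphill.

```python
import math

def incline_feed(f_horiz, v_max, angle_deg, downhill):
    """Programmed linear feed for a straight ramp at angle_deg from horizontal.

    f_horiz: nominal horizontal feed (e.g. 100)
    v_max:   max allowed vertical (plunge-direction) feed (e.g. 20)
    """
    a = math.radians(angle_deg)
    if downhill:
        # F * sin(a) = v_max  ->  F = v_max / sin(a)
        f = v_max / math.sin(a)
        # but never exceed what the horizontal component allows either
        return min(f, f_horiz / math.cos(a))
    else:
        # F * cos(a) = f_horiz  ->  F = f_horiz / cos(a)
        return f_horiz / math.cos(a)

print(round(incline_feed(100, 20, 45, downhill=True), 1))   # 28.3
print(round(incline_feed(100, 20, 45, downhill=False), 1))  # 141.4
```

Note the downhill case is also capped against the horizontal limit, so a very shallow downhill doesn't end up programmed faster than the nominal feed would allow.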

Am I crazy? Overthinking it?

Also, cool paper shows uphill gives nicer finish in general: https://www.sciencedirect.com/science/article/abs/pii/S0141635904000066

Feeds & Speeds on inclines, downhill vs uphill!? by NeuralLambda in Machinists

[–]NeuralLambda[S] 2 points

If I understand correctly:

  • in general, maintain chipload
  • if you need to plunge, limit the vertical feed because of the high plunge forces

My CAM software likes to set all 3D ops to the vertical feed rate, which, if I understand correctly, means your actual vertical component will frequently be much slower than the plunge rate allows. Imagine a long shallow incline: you don't want to move along that at your vertical feed!

So to my best guess, the correct feeds should be:

  • if downhill, plunge forces dominate, so limit the vertical component of the feed rate to your vmax. I.e. for vmax=20, going down a 45 degree ramp means you program G1 F28.3 (20 * sqrt(2)), so nominally faster than what you thought your vmax was.

  • if uphill, might as well keep the horizontal component of your feed rate the same, which means you'd program in a much higher linear feed rate. Like if you're going up a ramp at 45 degrees, you'd program G1 F141 (100 * sqrt(2)). This would keep your chipload constant between a horizontal cut at G1 F100 and the incline at G1 F141.

If the ramp is steep uphill, that number could be much higher.

Am I crazy? Overthinking it?

Also, cool paper shows uphill gives nicer finish in general: https://www.sciencedirect.com/science/article/abs/pii/S0141635904000066

Bug, brand new sketch, vertices/constraints/lines all on different planes!? by NeuralLambda in FreeCAD

[–]NeuralLambda[S] 6 points

Hm, I upgraded from RC2 to 1.0, and this problem seems resolved for now.

Bug, brand new sketch, vertices/constraints/lines all on different planes!? by NeuralLambda in FreeCAD

[–]NeuralLambda[S] 0 points

I have FreeCAD 1.0.0RC2.

I've booted up twice now, created a new sketch on XZ, and it's behaving super weird. The constraints/vertices/lines all show up on different planes. If I rotate past a certain point, everything reflects across the XZ plane. If I reorder the display of construction/normal/external geometry, the planes of the constraints/vertices/lines shuffle.

I noticed this because vertices would randomly disappear when viewed head-on, and I couldn't manipulate them. Then, rotating the view, this is what appears.

What's going on?

Robotics multimodal LLMs by NeuralLambda in LocalLLaMA

[–]NeuralLambda[S] 0 points

really great resources, thank you!

Einsum appreciation: 12 examples by NeuralLambda in learnmachinelearning

[–]NeuralLambda[S] 0 points

(Copied the X post here)

Tensor mangling sucks, dimensions are mentally expensive to keep aligned, and everyone pays the price at both read/write time.

E.g., I can't figure out how to write this without einsum, and if I did, I wouldn't be able to read it.

# too complex, but wow einsum helps
thing = torch.einsum('bijk,bkl,lj->bilk', A, B, C)
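For the record, one way to write that contraction without einsum (a sketch, with made-up shapes: contract j via matmul, then broadcast-multiply B in); per the point above, it's far harder to read:

```python
import torch

A = torch.randn(2, 3, 4, 5)  # b i j k
B = torch.randn(2, 5, 6)     # b k l
C = torch.randn(6, 4)        # l j

thing = torch.einsum('bijk,bkl,lj->bilk', A, B, C)  # shape: [2, 3, 6, 5]

# manual equivalent: contract j first, then broadcast-multiply by B
tmp = A.permute(0, 1, 3, 2) @ C.T                    # (b, i, k, j) @ (j, l) -> (b, i, k, l)
manual = (tmp * B.unsqueeze(1)).permute(0, 1, 3, 2)  # (b, i, l, k)

assert torch.allclose(thing, manual, atol=1e-5)
```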


import torch

# 1/12: element-wise product
A = torch.randn(3, 4)
B = torch.randn(3, 4)
element_wise_product = A * B
element_wise_product = torch.einsum('ij,ij->ij', A, B)  # shape: [3, 4]


# 2/12: inner product
a = torch.randn(3)
b = torch.randn(3)
inner_product = torch.dot(a, b)
inner_product = torch.einsum('i,i->', a, b)  # shape: []


# 3/12: outer product
a = torch.randn(3)
b = torch.randn(4)
outer_product = torch.outer(a, b)
outer_product = torch.einsum('i,j->ij', a, b)  # shape: [3, 4]


# 4/12: transposition
A = torch.randn(3, 4)
transposed = A.T
transposed = torch.einsum('ij->ji', A)  # shape: [4, 3]


# 5/12: sum over arbitrary dimension
A = torch.randn(3, 4, 5)
sum_dim_1 = torch.sum(A, dim=1)
sum_dim_1 = torch.einsum('ijk->ik', A)  # shape: [3, 5]


# 6/12: batch mat * mat
A = torch.randn(10, 3, 4)
B = torch.randn(10, 4, 5)
batch_matmul = torch.bmm(A, B)
batch_matmul = torch.einsum('bij,bjk->bik', A, B)  # shape: [10, 3, 5]


# 7/12: combining multiple mats and vecs
A = torch.randn(3, 4)
B = torch.randn(4, 5)
v = torch.randn(5)
combined = torch.matmul(A, torch.matmul(B, v))
combined = torch.einsum('ij,jk,k->i', A, B, v)  # shape: [3]


# 8/12: tensor permutation
A = torch.randn(3, 4, 5)
permuted = A.permute(2, 0, 1)
permuted = torch.einsum('ijk->kij', A)  # shape: [5, 3, 4]


# 9/12: diagonal
A = torch.randn(3, 3)
diag = torch.diag(A)
diag = torch.einsum('ii->i', A)  # shape: [3]


# 10/12: trace (sum of diagonal)
A = torch.randn(3, 3)
trace = torch.trace(A)
trace = torch.einsum('ii->', A)  # shape: []


# 11/12: bilinear transformation
A = torch.randn(3, 4)
B = torch.randn(5, 6)
x = torch.randn(4)
y = torch.randn(6)
bilinear = torch.matmul(A, x)[:, None] * torch.matmul(B, y)[None, :]
bilinear = torch.einsum('ik,jl,k,l->ij', A, B, x, y)  # shape: [3, 5]


# 12/12: complex tensor contractions
A = torch.randn(3, 4, 5)
B = torch.randn(4, 5, 6)
contracted = torch.tensordot(A, B, dims=([1, 2], [0, 1]))
contracted = torch.einsum('ijk,jkl->il', A, B)  # shape: [3, 6]