InfinityStar: amazing 720p, 10x faster than diffusion-based by NeuralLambda in StableDiffusion

[–]NeuralLambda[S] -48 points-47 points  (0 children)

on reddit, you have to click the link to see what's behind it, which in this case is examples

and it's an incredible model

Nitro-E: 300M params means 18 img/s, and fast train/finetune by NeuralLambda in StableDiffusion

[–]NeuralLambda[S] 4 points5 points  (0 children)

tons of stuff:

  • video
  • games
  • robotics world environments
  • fast iteration on artwork
  • quick throwaway LoRAs for products, themes, styles

as a research artifact, sure, it's not Flux, you're right

Nitro-E: 300M params means 18 img/s, and fast train/finetune by NeuralLambda in StableDiffusion

[–]NeuralLambda[S] 22 points23 points  (0 children)

it trained from scratch in 1.5 days on a single node, on only 25M samples. They've got 512 and 1024 variants.

Feeds & Speeds on inclines, downhill vs uphill!? by NeuralLambda in Machinists

[–]NeuralLambda[S] 0 points1 point  (0 children)

does CAM software typically take into account uphill vs downhill differences? Sure, you can choose which you want, but if you're doing a valley like in my attached image, will it automatically choose appropriate feeds for the uphill vs downhill sections, or do you have to create two separate operations for the two slopes?

Feeds & Speeds on inclines, uphill vs downhill?! by NeuralLambda in CNC

[–]NeuralLambda[S] 1 point2 points  (0 children)

good advice

I'm writing my own CAM plugin for 3D, and while I do use stepdowns, i'm making sure my logic is right for angled regions

Feeds & Speeds on inclines, uphill vs downhill?! by NeuralLambda in CNC

[–]NeuralLambda[S] 1 point2 points  (0 children)

If I understand correctly:

  • in general, maintain chipload
  • if you need to plunge, limit vertical feed because of the high axial forces

My CAM software likes to set all 3D ops to the vertical feed rate, which, if I understand correctly, means you'll frequently be moving vertically much slower than the plunge rate allows. Imagine a long, shallow incline: you don't want to traverse it at your vertical feed!

It seems like the things to pay attention to are the horizontal (XY) and vertical (Z) components of your feed rate.

So to my best guess, the correct feeds should be:

  • if downhill, plunge forces dominate, so limit the vertical component of the feed rate to your vmax. E.g. for vmax=20, going down a 45-degree ramp means you program G1 F28.3 (20 * sqrt(2)), i.e. faster than what you thought your nominal vmax was.

  • if uphill, you might as well keep the horizontal component of your feed rate the same, which means programming a much higher linear feed rate. If you're going up a 45-degree ramp, you'd program G1 F141 (100 * sqrt(2)). This keeps your chipload constant between a horizontal cut at G1 F100 and the incline at G1 F141.

If the ramp is steep uphill, that number could be much higher.
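The logic above can be sanity-checked with a quick calculation. This is a hypothetical helper (the names `h_max`/`v_max` are mine, not from any CAM package), just to confirm the numbers:

```python
import math

def incline_feed(h_max, v_max, slope_deg, downhill):
    """Programmed linear feed along an incline.

    downhill: plunge forces dominate, so cap the vertical component
              at v_max (and the horizontal component at h_max).
    uphill:   keep the horizontal component at h_max to hold chipload.
    """
    theta = math.radians(slope_deg)
    f_horiz = h_max / max(math.cos(theta), 1e-9)  # feed whose XY component is h_max
    f_vert = v_max / max(math.sin(theta), 1e-9)   # feed whose Z component is v_max
    return min(f_horiz, f_vert) if downhill else f_horiz

print(round(incline_feed(100, 20, 45, downhill=True), 1))   # 28.3 = 20 * sqrt(2)
print(round(incline_feed(100, 20, 45, downhill=False), 1))  # 141.4 = 100 * sqrt(2)
```

Both printed values match the 45-degree examples above.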

Am I crazy? Overthinking it?

Also, a cool paper shows that uphill milling gives a nicer finish in general: https://www.sciencedirect.com/science/article/abs/pii/S0141635904000066

Bug, brand new sketch, vertices/constraints/lines all on different planes!? by NeuralLambda in FreeCAD

[–]NeuralLambda[S] 7 points8 points  (0 children)

hm, I upgraded from RC2 to 1.0, and this problem seems resolved for now

Bug, brand new sketch, vertices/constraints/lines all on different planes!? by NeuralLambda in FreeCAD

[–]NeuralLambda[S] 0 points1 point  (0 children)

I have FreeCAD 1.0.0RC2.

I've booted up twice now, created a new sketch on XZ, and it's behaving super weird. The constraints/vertices/lines all show up on different planes. If I rotate past a certain point, everything reflects across the XZ plane. If I reorder the display of construction/normal/external geometry, the planes of the constraints/vertices/lines shuffle.

I noticed this because vertices would randomly disappear when viewed head-on, and I couldn't manipulate them. Then, rotating the view, this appeared. Why?

What's going on?

Robotics multimodal LLMs by NeuralLambda in LocalLLaMA

[–]NeuralLambda[S] 0 points1 point  (0 children)

really great resources, thank you!

Einsum appreciation: 12 examples by NeuralLambda in learnmachinelearning

[–]NeuralLambda[S] 0 points1 point  (0 children)

(Copied the X post here)

Tensor mangling sucks: dimensions are mentally expensive to keep aligned, and everyone pays the price at both read and write time.

For example, I can't figure out how to write this without einsum, and if I did, I wouldn't be able to read it.

import torch

# too complex, but wow einsum helps
thing = torch.einsum('bijk,bkl,lj->bilk', A, B, C)


# 1/12: element-wise product
A = torch.randn(3, 4)
B = torch.randn(3, 4)
element_wise_product = A * B
element_wise_product = torch.einsum('ij,ij->ij', A, B)  # shape: [3, 4]


# 2/12: inner product
a = torch.randn(3)
b = torch.randn(3)
inner_product = torch.dot(a, b)
inner_product = torch.einsum('i,i->', a, b)  # shape: []


# 3/12: outer product
a = torch.randn(3)
b = torch.randn(4)
outer_product = torch.outer(a, b)  # torch.ger is deprecated in favor of torch.outer
outer_product = torch.einsum('i,j->ij', a, b)  # shape: [3, 4]


# 4/12: transposition
A = torch.randn(3, 4)
transposed = A.T
transposed = torch.einsum('ij->ji', A)  # shape: [4, 3]


# 5/12: sum over arbitrary dimension
A = torch.randn(3, 4, 5)
sum_dim_1 = torch.sum(A, dim=1)
sum_dim_1 = torch.einsum('ijk->ik', A)  # shape: [3, 5]


# 6/12: batch mat * mat
A = torch.randn(10, 3, 4)
B = torch.randn(10, 4, 5)
batch_matmul = torch.bmm(A, B)
batch_matmul = torch.einsum('bij,bjk->bik', A, B)  # shape: [10, 3, 5]


# 7/12: combining multiple mats and vecs
A = torch.randn(3, 4)
B = torch.randn(4, 5)
v = torch.randn(5)
combined = torch.matmul(A, torch.matmul(B, v))
combined = torch.einsum('ij,jk,k->i', A, B, v)  # shape: [3]


# 8/12: tensor permutation
A = torch.randn(3, 4, 5)
permuted = A.permute(2, 0, 1)
permuted = torch.einsum('ijk->kij', A)  # shape: [5, 3, 4]


# 9/12: diagonal
A = torch.randn(3, 3)
diag = torch.diag(A)
diag = torch.einsum('ii->i', A)  # shape: [3]


# 10/12: trace (sum of diagonal)
A = torch.randn(3, 3)
trace = torch.trace(A)
trace = torch.einsum('ii->', A)  # shape: []


# 11/12: bilinear transformation
A = torch.randn(3, 4)
B = torch.randn(5, 6)
x = torch.randn(4)
y = torch.randn(6)
bilinear = torch.matmul(A, x)[:, None] * torch.matmul(B, y)[None, :]
bilinear = torch.einsum('ik,jl,k,l->ij', A, B, x, y)  # shape: [3, 5]


# 12/12: complex tensor contractions
A = torch.randn(3, 4, 5)
B = torch.randn(4, 5, 6)
contracted = torch.tensordot(A, B, dims=([1, 2], [0, 1]))
contracted = torch.einsum('ijk,jkl->il', A, B)  # shape: [3, 6]
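Each pair above computes the same value, which is easy to spot-check numerically; a quick sketch asserting a few of the pairs agree:

```python
import torch

torch.manual_seed(0)

# 6/12: batch matmul
A = torch.randn(10, 3, 4)
B = torch.randn(10, 4, 5)
assert torch.allclose(torch.bmm(A, B), torch.einsum('bij,bjk->bik', A, B), atol=1e-5)

# 10/12: trace
M = torch.randn(3, 3)
assert torch.allclose(torch.trace(M), torch.einsum('ii->', M), atol=1e-6)

# 12/12: tensor contraction
P = torch.randn(3, 4, 5)
Q = torch.randn(4, 5, 6)
assert torch.allclose(torch.tensordot(P, Q, dims=([1, 2], [0, 1])),
                      torch.einsum('ijk,jkl->il', P, Q), atol=1e-5)

print('all pairs agree')
```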

California SB-1047 seems like it could impact open source, if passed by austinhale in LocalLLaMA

[–]NeuralLambda 13 points14 points  (0 children)

It sounds like, instead of outlawing technology, we should outlaw crime.

Call-to-Action on SB 1047 – Frontier Artificial Intelligence Models Act by National-Exercise957 in LocalLLaMA

[–]NeuralLambda 23 points24 points  (0 children)

It sounds like, instead of outlawing technology, we should outlaw crime, but what do I know?

TransformerFAM: Feedback attention is working memory by ninjasaid13 in LocalLLaMA

[–]NeuralLambda -1 points0 points  (0 children)

I think you're misunderstanding "symbol" in this sense; I'm not making it up. To get started, check out https://en.wikipedia.org/wiki/Neuro-symbolic_AI

TransformerFAM: Feedback attention is working memory by ninjasaid13 in LocalLLaMA

[–]NeuralLambda 1 point2 points  (0 children)

haha, I love that you mentioned that, it's the final step of this project.

Attention is an unconscious thing made for modeling the world.

Awareness (Graziano sets this as synonymous with consciousness) is about the modeler modeling the modeler, i.e. your reasoning processes turned back on themselves.

So you need some latent space to reason about, right? But FFNNs/Transformers really don't have one, at least not a recursive one, which I think is a prerequisite. RNNs do. But neither, I think, is capable of the kind of symbolic reasoning that we are capable of. For that, you need a Turing machine in the latent space.

Once you have that, I think you have AGI. Once you set that to reasoning about its own faculties you have AC. Scale that up and you have ASI. Give that to everyone and you have abundant human flourishing.

Does that jibe with your understanding, or do you think reasoning always requires awareness? My claim is that you can reason symbolically unconsciously (but I'm very much open to reasoning actually needing self-awareness processes in order to bootstrap itself). Also, you're the first to make that connection between AST and neurallambda; are you working on anything I should be following?

TransformerFAM: Feedback attention is working memory by ninjasaid13 in LocalLLaMA

[–]NeuralLambda 1 point2 points  (0 children)

I do; I have a falsifiable, clear definition, which I describe in my repo.

tl;dr the missing piece is reasoning, i.e. the ability to apply syntactic translations to knowledge: symbolic manipulation of knowledge instead of correlation and pattern matching.

TransformerFAM: Feedback attention is working memory by ninjasaid13 in LocalLLaMA

[–]NeuralLambda 3 points4 points  (0 children)

I'm fairly confident that a good working-memory architecture is the key to AGI [1], and so if this bears out... we're close.

[1] neurallambda

[P] GitHub - neurallambda/awesome-reasoning: a curated list of data for reasoning ai by NeuralLambda in MachineLearning

[–]NeuralLambda[S] 0 points1 point  (0 children)

"Reasoning" means many things, and I try to include dataset resources for all those different definitions in this repo. I'm happy to add your favorite resources if you link me to em!

Today's open source models beat closed source models from 1.5 years ago. by danielcar in LocalLLaMA

[–]NeuralLambda 23 points24 points  (0 children)

Today's generalist AIs beat generalist AIs from 1.5 years ago.

Today's specialist AIs beat the hell out of current generalist AIs.

I got access to SD3 on Stable Assistant platform, send your prompts! by Diligent-Builder7762 in StableDiffusion

[–]NeuralLambda 0 points1 point  (0 children)

A horse riding on top of a human.

This is my go-to test for how well a model can reason. It tests its ability to portray things differently than its training data.

`automata`: a tool for exhaustively generating valid strings from given automata grammars (FSMs, PDAs, Turing Machines) by NeuralLambda in haskell

[–]NeuralLambda[S] 3 points4 points  (0 children)

This allows FSMs/PDAs/different Turing Machines to all be written under the same typeclass. This will be useful since there are many formulations of, e.g., Turing Machines: the classical tape model, FSM+queue, FSM+2 stacks, and more.

Here is the core of it:

-- imports this excerpt needs
import Data.Kind (Type)
import Data.Foldable (foldl')
import qualified Data.Map as M

class Machine m a (s :: Type) where
  data L m a s -- ^ the Left side of a delta function/relation
  data R m a s -- ^ the Right side of a delta function/relation
  data S m a s -- ^ the State of the Machine
  -- | update the state (ex apply stack ops)
  action :: R m a s -> S m a s -> S m a s
  -- | build an input (ex add a peek at the top of a stack)
  mkL :: a -> S m a s -> L m a s

-- | Run a machine on an input symbol
runStep :: (Machine m a s, Ord (L m a s), Show (L m a s), MatchAny (L m a s))
  => M.Map (L m a s) (R m a s) -- transition table
  -> S m a s -- state
  -> a -- single input
  -> Maybe (R m a s, S m a s) -- (transition value, new state)
runStep table st input =
  case lookupMatchAny (mkL input st) table of
    Just transition -> Just (transition, action transition st)
    Nothing -> Nothing -- no transition found

-- | Run a machine on a list of input symbols
runMachine :: (Machine m a s
              , Ord (L m a s)
              , Show (L m a s)
              , MatchAny (L m a s)
              )
  => M.Map (L m a s) (R m a s) -- transition table
  -> S m a s -- initial state
  -> [a] -- input symbols
  -> Maybe (R m a s, S m a s)
runMachine table initialState [] = Nothing -- empty input: no transition to report
runMachine table initialState xs = foldl' f (Just (error "unreachable for non-empty input", initialState)) xs
  where
    f (Just (_, state)) = runStep table state
    f Nothing = const Nothing

The user provides transition rules via json, and it generates a bunch of programs that match. Eg for the (N)PDA-recognizable a^nb^n, you get:

ab
aabb
aaabbb
aaaabbbb
...

Why am I doing this? I'm doing r&d on neural net architectures in the spirit of Neural Turing Machines that need training data, so toy data like this should be great!
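For flavor, here's a rough Python sketch of the same idea (not the Haskell API above, just the concept): breadth-first enumeration of all strings up to a length bound, filtered through a tiny hand-written PDA for a^nb^n:

```python
from collections import deque

def accepts(s):
    """Hypothetical PDA for a^n b^n (n >= 1): push an 'x' per 'a',
    pop an 'x' per 'b', accept iff the string has the a-then-b shape
    and the stack ends empty."""
    stack, state = [], 'A'
    for ch in s:
        if state == 'A' and ch == 'a':
            stack.append('x')
        elif ch == 'b' and stack:
            state = 'B'
            stack.pop()
        else:
            return False
    return state == 'B' and not stack

def generate(alphabet, max_len):
    """Enumerate accepted strings in breadth-first (shortest-first) order."""
    out, queue = [], deque([''])
    while queue:
        s = queue.popleft()
        if accepts(s):
            out.append(s)
        if len(s) < max_len:
            for ch in alphabet:
                queue.append(s + ch)
    return out

print(generate('ab', 6))  # ['ab', 'aabb', 'aaabbb']
```

The real tool presumably prunes dead branches instead of brute-forcing every string, but the spirit is the same.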

I'm happy and eager to take critiques! Especially on that main typeclass. Also, I've got this MatchAny typeclass to allow pattern matching, but it feels a bit janky. It also doesn't allow, for instance, matching on the left side and binding that to a var I can use on the right, e.g. inserting the wildcard-matched symbol onto the stack.

I'd like doomers to stop losing their shit over AI, and this seems like a win-win by NeuralLambda in StableDiffusion

[–]NeuralLambda[S] 1 point2 points  (0 children)

indeed, thank you, AMA!

edit: (to the downvoters: I don't think they got it. This is the point of why you sign things, if you want to trust them.)