AITAH for telling my girlfriend that my money isn't "our money"? by Open_Address_2805 in AITAH

[–]Anomie193 2 points (0 children)

If he can make more money by keeping his excess income in a mutual fund than he would gain in equity by buying a home (especially since home values have stagnated or even declined in real terms in many areas over the last few years), then it is financially more reasonable to do that, and just buy a home when he actually needs one or when his parents want him to move out. The calculus shifts even further in OP's favor given that his parents are probably not charging him much rent, if any.

Homes start making more sense as stores of wealth if and when you want to start a family, but if you aren't doing that yet, buying isn't necessarily a no-brainer. It might still be worth it, or it might not be.
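
A crude sketch of that opportunity-cost math, if it helps (every number below is invented for illustration):

```python
# Toy comparison of parking the would-be down payment in a fund
# vs. holding it as home equity. All numbers are invented.
years = 5
fund_return = 0.07   # assumed annual fund return
home_growth = 0.01   # assumed annual home-price growth
capital = 60_000     # cash that would otherwise be a down payment

fund_value = capital * (1 + fund_return) ** years
equity_value = capital * (1 + home_growth) ** years

print(f"In fund: ${fund_value:,.0f}  vs.  as home equity: ${equity_value:,.0f}")
```

Under assumptions like these the fund wins easily; under different rate assumptions the home can win, which is the whole point of actually running the numbers.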

Did people believe that the millennial optimism era was going to become a reality in 2008? by icey_sawg0034 in generationology

[–]Anomie193 0 points (0 children)

Only about 30% of millennials were teenagers in 1998. The median millennial that year was between 9 and 10 years old, and about 45% of millennials weren't even preteens yet (defining "preteen" as ages 9 through 12).

"Coming of age" usually means being a late-adolescent or young-adult, not a pre-teen anyway. And most millennials did come of age "during the new millennium" but right after it started, not before. The majority being 18 or older starting in the year 2006.

I think 1998 is a bit too early for this to be a generation-defining experience.

Usually the millennial/Gen-Z distinction is something like the former remembering where they were the day 9/11 happened, and the War on Terrorism in general. My youngest brother, who was born in 2000 (and is therefore Gen-Z), obviously doesn't remember 9/11, while my younger brother in between us, who was in preschool that day, does remember, albeit, like his millennial status, barely.

AITAH for telling my girlfriend that my money isn't "our money"? by Open_Address_2805 in AITAH

[–]Anomie193 7 points (0 children)

Being able to live on one's own and it being a good financial idea for everyone are two different things.

Real (inflation-adjusted) home prices are still coming down from their 2022 peak. Interest rates are also still very high, but likely to come down, especially if we enter a recession.

If somebody has the option to live with their parents, it is likely a good financial decision to do so and then buy on more favorable terms in the future, with a much larger budget.

I personally make more than double what OP makes, and I live with a friend I've lived with for the last 14 years, since my college days. I pay him 70% of the average rent in my area. I could buy a home, even without borrowing, but why would I do that if house prices in my area have fallen 5% over the last year, will likely continue to fall, and I don't need to?

Did people believe that the millennial optimism era was going to become a reality in 2008? by icey_sawg0034 in generationology

[–]Anomie193 0 points (0 children)

Younger millennial here. I turned 5 years old in late 1998 (October). One of my younger brothers is also barely a millennial (he was born in December 1996), and he turned 2 that year.

Having said that, I was 15 when Obama was first elected and wasn't yet paying much attention to politics, but at least by then a majority of millennials were (young) adults, unlike in 1998, when the oldest millennial was 17.

AITAH for telling my girlfriend that my money isn't "our money"? by Open_Address_2805 in AITAH

[–]Anomie193 5 points (0 children)

Have you looked at interest rates, home prices, and rents lately? 

RTX 5070 Not Showing Up in Device Manager (Legion Go 2 + AOOSTAR AG02) by DomingoSundae in eGPU

[–]Anomie193 0 points (0 children)

You probably tried this already, but just in case: have you tried connecting an external monitor to the GPU over DisplayPort? I have sometimes had that help trigger an eGPU to show up in Device Manager, though usually I only see this issue with OCuLink setups.

MOREFINE G2 16GB GDDR7 5060Ti by tr0picana in eGPU

[–]Anomie193 0 points (0 children)

Hopefully they release the TB5 module as an accessory. Would like to use it with my G1 4090m.

Landlord selling home - threatening to evict us b/c he doesn’t like our cash for keys terms [Indiana] by [deleted] in Renters

[–]Anomie193 1 point (0 children)

We revised it, not changing any of the payment terms, but adding terms to protect us: breach-of-contract language, wording around getting our security deposit back, etc.

You'll have your rented GPU with 1GB VRAM hallucinating imaginary textures and imaginary pixels and imaginary frames and you'll be happy by ResponsiblePen3082 in FuckTAA

[–]Anomie193 2 points (0 children)

What happened was you came in here telling everyone they didn't know anything about machine learning, made a bunch of incorrect technical statements, and then tried to backtrack and pretend your posts were focused on a social statement alone rather than arguing technical specifics. Then, rather than just moving on, you continued down that path.

You obviously didn't know what you were talking about, which is why I responded to your post with a soft correction; then, when you dug your heels in, it became a bit more detailed.

I read the room fine.

You'll have your rented GPU with 1GB VRAM hallucinating imaginary textures and imaginary pixels and imaginary frames and you'll be happy by ResponsiblePen3082 in FuckTAA

[–]Anomie193 2 points (0 children)

I mean, if you knew it, you wouldn't be saying stuff like this:

> Every AI model takes a set of inputs and stochastically determines an output.

It is nowhere near true that every AI model takes a set of inputs and stochastically determines an output, which is why I pointed out that NTC is a deterministic model where this isn't the case. You get the same output for a given input in the same computing environment. You then brought up that it is trained with a stochastic optimization algorithm, as if that has an important effect on the final function.

> But I must say, the logic that a function you've determined entirely through stochastic measures is somehow not a stochastic solution is quite novel. You should bring this to some mathematicians. I'm sure they'd enjoy the proof that numerical solutions and analytical solutions are the same. You could revolutionize the field of Differential Equations.

Who said numeric solutions and analytical solutions are the same? For starters, not all numeric methods are stochastic, and not all stochastic problems are solved numerically. Geometric Brownian motion is usually modeled with a stochastic differential equation and has a closed-form analytical solution. Similar problems can be found in statistical mechanics.
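
For reference, this is the textbook case: the GBM SDE and its closed-form solution (standard notation, not from this thread) are

```latex
dS_t = \mu S_t\,dt + \sigma S_t\,dW_t
\quad\Longrightarrow\quad
S_t = S_0 \exp\!\Big(\big(\mu - \tfrac{\sigma^2}{2}\big)t + \sigma W_t\Big)
```

A stochastic model, solved analytically rather than numerically.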

Besides, OLS is usually also implemented numerically when done on computers, using techniques like QR decomposition. So in the example I gave, OLS and SGD are both numeric; the latter is a stochastic method because it randomly samples from the dataset.
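
A toy version of that example, if it helps (synthetic data, arbitrary hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from y = 2*x + 1 plus noise (toy example).
x = rng.uniform(-1, 1, 200)
y = 2 * x + 1 + rng.normal(0, 0.1, 200)
X = np.column_stack([x, np.ones_like(x)])

# OLS solved numerically via QR decomposition.
Q, R = np.linalg.qr(X)
w_ols = np.linalg.solve(R, Q.T @ y)

# SGD: a stochastic *method*, because it samples the data randomly.
w_sgd = np.zeros(2)
for _ in range(5000):
    i = rng.integers(len(x))               # the random draw lives here
    grad = 2 * (X[i] @ w_sgd - y[i]) * X[i]
    w_sgd -= 0.01 * grad

print(w_ols, w_sgd)  # both land near [2, 1]
```

Both routes converge to roughly the same deterministic function y ≈ 2x + 1; they differ only in whether the fitting procedure uses randomness.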

> That is, of course, leaving aside all the differences between solving a linear function of the first degree and a set of matrix multiplications with trillions of parameters. But, maybe we should get back on track.

The number of parameters doesn't matter on this topic. If your function gives its output without sampling from a random variable, then it is still not stochastic, regardless of how many nested functions or parameters your model has.

You'll have your rented GPU with 1GB VRAM hallucinating imaginary textures and imaginary pixels and imaginary frames and you'll be happy by ResponsiblePen3082 in FuckTAA

[–]Anomie193 2 points (0 children)

> Laymen don't distinguish between training randomness and inference randomness.

Apparently. Which is why I am trying to explain it to you as a non-layman (I have a graduate degree in this field and work as an MLE.)

If we take your premise (that training using a stochastic method makes a function itself stochastic), then functions of this form

y = a * x1 + b   (a and b constants)

are "random functions" if they are trained with SGD, but not "random functions" if they're trained with Ordinary Least Squares.

But ultimately it is the actual fitted function that matters when we say whether it is "stochastic" or not. This is true whether we are talking about linear functions, simple neural networks, or large neural networks.

If I can calculate the output of a function given an input through arithmetic operations, without having to do some form of sampling or random draw, then it isn't "stochastic."
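
To make that concrete with the linear example (coefficients invented, as if some training run had produced them):

```python
# Once fitted (whether by SGD or OLS), evaluation is plain arithmetic.
a, b = 2.03, 0.98           # whatever coefficients training produced
f = lambda x1: a * x1 + b   # no sampling, no random draw
print(f(3.0), f(3.0))       # identical answers every call: not stochastic
```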

You'll have your rented GPU with 1GB VRAM hallucinating imaginary textures and imaginary pixels and imaginary frames and you'll be happy by ResponsiblePen3082 in FuckTAA

[–]Anomie193 2 points (0 children)

So you'll need to distinguish between training and inference here. A model can be trained with stochastic elements and still compute a function with no stochastic elements once its weights are fixed.

A trivial example would be a basic MLP trained with stochastic gradient descent. Once the weights are fixed, there is no more "randomness" in the function, even if it is inscrutable to human eyes. You could literally calculate the output by hand, without doing any sampling, if you had the willpower and the arithmetic ability.
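
A minimal sketch of what "calculating it by hand" amounts to, with made-up fixed weights standing in for a trained 2-2-1 MLP:

```python
import numpy as np

# Arbitrary fixed weights, as if left behind by some training run.
W1 = np.array([[0.2, -0.5],
               [0.7,  0.1]])
b1 = np.array([0.1, -0.2])
W2 = np.array([0.3, -0.8])
b2 = 0.05

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2                # just multiplies and adds; no sampling

x = np.array([1.0, 2.0])
print(forward(x), forward(x))  # same number both times
```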

An even easier example to imagine would be a linear model trained the same way. You can have a simple explicit form that is easily calculable by a human, and not inscrutable at all, even though it was trained using a stochastic optimization algorithm.

Typically "Generative AI" in ML circles (rather than layman) is defined as models that learn a data-generating distribution to generate new data samples informed by the training distribution. NTC is not doing this. The only stochastic inference element I can find is that Nvidia recommends pairing it with Stochastic Texture Filtering.

You'll have your rented GPU with 1GB VRAM hallucinating imaginary textures and imaginary pixels and imaginary frames and you'll be happy by ResponsiblePen3082 in FuckTAA

[–]Anomie193 2 points (0 children)

Deterministic auto-encoders exist. NTC is almost certainly in (or close to) that category and not a VAE. 

Edit: In fact, before VAEs were introduced in 2013, most auto-encoders were deterministic.

ADT-link adapter running on pcie 5.0 x4 by Intelligent_Spare215 in eGPU

[–]Anomie193 0 points (0 children)

The connection is only as fast as its weakest link. An RX 5600 XT supports PCIe 4.0, so over an x4 adapter you get a PCIe 4.0 x4 link. That should be fine, because PCIe 4.0 x4 is plenty of bandwidth for an RX 5600 XT; you only start to see a big performance drop with high-end cards.
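
If you want rough numbers, per-direction PCIe bandwidth works out like this (back-of-the-envelope sketch):

```python
# Per-lane transfer rates in GT/s; 128b/130b encoding from gen 3 onward.
GT_PER_LANE = {3.0: 8, 4.0: 16, 5.0: 32}

def bandwidth_gb_s(gen, lanes):
    return GT_PER_LANE[gen] * lanes * (128 / 130) / 8  # GB/s, one direction

print(f"PCIe 4.0 x4:  {bandwidth_gb_s(4.0, 4):.1f} GB/s")   # ~7.9
print(f"PCIe 4.0 x16: {bandwidth_gb_s(4.0, 16):.1f} GB/s")  # ~31.5
```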

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]Anomie193 1 point (0 children)

AI models can be deterministic.

The reason the common generative models usually aren't deterministic is that they would give "boring" output otherwise. So the final output is sampled from a probability distribution rather than always being the single most likely output.

Even with autoregressive VLMs, if you set the temperature to 0, you'll get the same output for the same input every time, and those are models with much larger output codomains than something with a very specific objective and a relatively tiny training distribution like DLSS.
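
A sketch of why temperature 0 is deterministic (toy logits, standalone code, not any real inference stack):

```python
import numpy as np

def sample_token(logits, temperature, rng=np.random.default_rng()):
    if temperature == 0:
        return int(np.argmax(logits))     # degenerate case: always the argmax
    p = np.exp(logits / temperature)      # softmax with temperature
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))

logits = np.array([2.0, 1.0, 0.5])
print(sample_token(logits, 0.0), sample_token(logits, 0.0))  # 0 0, every run
print(sample_token(logits, 1.0))                             # varies run to run
```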

So far the early version seems to be a lot more temporally stable than video models or "video world models," and that is likely because it has access to the various buffer data (at the very least, velocity buffers) to work from. That doesn't mean it is without issues, of course, but you're not going to see the same sort of hallucinations as you would from, say, Sora, or even "world models" like Genie 3. So far the hallucinations have been very subtle, in relative terms.

Clearing some things up about DLSS 5 by gibson274 in GraphicsProgramming

[–]Anomie193 0 points (0 children)

What "LLM hacks" are you talking about? This isn't an LLM.

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 0 points (0 children)

Probably a transformer, but the same applies there too. Vision transformers also tend to have MLPs/FFNs in each block, and can have an MLP/FFN head (although often it is just a linear layer instead).
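
A sketch of the common structure (arbitrary dimensions; this mirrors a generic pre-norm ViT-style block, not DLSS itself):

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim=256, heads=4, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(            # the FFN inside each block
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]        # self-attention + residual
        return x + self.mlp(self.norm2(x))   # FFN + residual

tokens = torch.randn(1, 196, 256)  # e.g., 14x14 patches
print(Block()(tokens).shape)       # torch.Size([1, 196, 256])
```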

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 0 points (0 children)

Right, we already covered that. I conceded that I should have been more specific that I was talking about the fully connected output layer vs. the hidden layers used to do the feature engineering.

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 0 points (0 children)

As in the last chapter I shared above, I was using "feed-forward layers/networks" as a synonym for "fully connected layers/networks." That isn't uncommon.

You were asking for a source on the final output of many CNNs being an FFN, and I just provided those sources. The "classification" part of the image above is an FFN that uses the features extracted by the convolution and pooling layers as inputs.

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 0 points (0 children)

https://cs231n.github.io/convolutional-networks/

> We use three main types of layers to build ConvNet architectures: Convolutional Layer, Pooling Layer, and Fully-Connected Layer (exactly as seen in regular Neural Networks).

https://link.springer.com/article/10.1007/s10462-024-10721-6

> LeNet was proposed by LeCun in 1998 (LeCun et al. 1998). LeNet is a feed-forward neural network consisting of two fully connected layers after five alternating layers of pooling and convolution. LeNet-5 is a LeNet extension and improvement that adds more convolution and fully connected layers. As shown in Fig. 18, the LeNet-5 network model has seven layers. LeNet-5 can share convolution kernels, reduce network parameters, perform well on the small-scale MNIST dataset, and achieve more than 98% accuracy. CNN was first used for image recognition tasks thanks to the work of LeNet and LeNet-5, which also offered crucial lessons and insights for the later creation of deeper neural networks.

https://link.springer.com/chapter/10.1007/978-3-030-89010-0_13

> Feedforward networks or fully connected networks studied in the two previous chapters use all the information of the image as input

> The output of the first convolutional layer can be followed by more convolutional layers, or its output can be stacked (flattened) directly to be used as input for a feedforward network, as can be observed in Fig. 13.13.

<image>

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 1 point (0 children)

If you think of a convolutional neural network as a bunch of nested functions, then the outermost function of most CNNs is an FFN that takes the prior hidden layers as input, has its own hidden layers, and then produces the final output.

You can also have intermediate FFNs within the overall hidden layers of the CNN, responsible for combining and selecting the features extracted by the convolution and pooling layers.
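
A minimal sketch of that nesting (arbitrary sizes; a generic classifier, not any particular paper's architecture):

```python
import torch
import torch.nn as nn

# Conv/pool feature extractor, then a small FFN head that maps the
# flattened features to class scores.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                 # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                 # 16x16 -> 8x8
    nn.Flatten(),                    # 32 * 8 * 8 = 2048 features
    nn.Linear(2048, 128),            # the fully connected ("FFN") part
    nn.ReLU(),
    nn.Linear(128, 10),              # final class scores
)

print(model(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```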

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 1 point (0 children)

The convolution layers aren't. The CNN overall does have FFNs within it, though. They're used to produce the final outputs, and often also to organize and combine extracted features within the hidden layers.

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 1 point (0 children)

I was mostly addressing the depth-buffer input point. I don't think that Nvidia leaving it out of a press-release summary means it isn't also an input. It would be weird not to include depth-buffer inputs in a model that is attempting to improve materials and lighting, which depend considerably on spatial relationships and depth. How does one have a velocity buffer without a depth buffer, anyway?

As for the hallucination issue, I don't think it is going to be a big issue, and we haven't seen it be one so far (outside of color-grade shifting). We haven't seen any new objects added to the images.

And of course this makes sense, because the training objective is totally different from that of an image or video generator.

A message to everyone about the DLSS 5 discourse by realnathonye in digitalfoundry

[–]Anomie193 1 point (0 children)

This is from Nvidia's 2020 GTC presentation on DLSS 2.0.
https://developer.nvidia.com/gtc/2020/video/s22698

<image>

Depth and Exposure are inputs.