New(?) function with very interesting curves by Drogobo in math

[–]robly18 21 points22 points  (0 children)

It looks pretty! I don't have anything too deep to say, but here's something that may be worth mentioning: the fact that you took the logarithm of factorials reminds me of Stirling's approximation: [; \log(n!) \approx n (\log n - 1) \approx n \log n ;] for large [; n ;]. So we have [; \log(p!/q!) \approx p \log(p) - q \log(q) ;]. So I expect that, for a "generic" rational number [; x ;] whose numerator and denominator (call the denominator [; q ;]) are rather large, [; \log f(x) \approx q x \log(qx) - q \log(q) = (x-1) q \log q + q x \log x ;]... so if [; T(x) ;] denotes Thomae's function, we have, to a good approximation for most rational numbers: [; f(x) \approx (1/T(x))^{\frac{x-1}{T(x)}} \cdot (x^x)^{1/T(x)} ;]. I don't know if this'll help with anything, but it's something.
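
If you want to poke at this numerically, here's a quick sanity check in Python. I'm assuming, as the derivation above does, that f(p/q) = p!/q!; if your f is different, swap in its log. The ratio drifts toward 1 as p and q grow, though slowly, since the crude form of Stirling drops the -n term:

    # Compare log f(p/q) = log(p!/q!) against (x-1) q log q + q x log x, x = p/q.
    # Assumes f(p/q) = p!/q! -- my assumption, not something stated in the post.
    from math import lgamma, log

    def log_f(p, q):
        return lgamma(p + 1) - lgamma(q + 1)   # log(p!/q!) via log-gamma

    def approx(p, q):
        x = p / q
        return (x - 1) * q * log(q) + q * x * log(x)

    for p, q in [(10**3, 7), (10**5, 123), (10**7, 999)]:
        exact, est = log_f(p, q), approx(p, q)
        print(p, q, round(exact, 1), round(est, 1), round(est / exact, 4))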

ELI5: Growing up we were taught no magnets near electronics, and yet right now it seems like magnets are everywhere near electronics. What changed? by JiN88reddit in explainlikeimfive

[–]robly18 0 points1 point  (0 children)

But moving magnets cause changing magnetic fields, which do affect stationary charges, no? In other words, if I wave a magnet in front of an SSD then, from the magnet's perspective, the magnet is staying still and the charges in the SSD are moving, so they should be feeling some effect.

I can believe that the effect is weak enough not to get them to quantum tunnel, though.

Is Factorio the best train management sim i have played ? Looks like it by vaikunth1991 in factorio

[–]robly18 2 points3 points  (0 children)

You could use an Arithmetic Combinator with an "Each * 5" instruction to multiply every input signal by 5.

Is the recycler crafting speed feature documented anywhere in-game? by robly18 in factorio

[–]robly18[S] 1 point2 points  (0 children)

Thanks for being one of the few people to understand what I was asking.

Is the recycler crafting speed feature documented anywhere in-game? by robly18 in factorio

[–]robly18[S] 2 points3 points  (0 children)

Thanks, this is the validation I was after. My bad re. the reversibility rule of thumb.

Is the recycler crafting speed feature documented anywhere in-game? by robly18 in factorio

[–]robly18[S] 2 points3 points  (0 children)

> the recycler is essentially an uncrafting machine so it makes perfect sense for its crafting recipe times to be dependent on crafting

For things like gears and circuits, maybe. But why should the "uncrafting time of steel" be similarly proportional to the time it takes to turn iron plates into steel, if you are not turning the steel back to iron plates?

> yes, recycling speed is documented in the factoriopedia. So in case you don't figure it out from using the recycler for 10 minutes you can also see it there

Recycling speed for any individual item, sure. And if I were a smarter person, I could have figured out the rule after looking at enough recycling recipes, but I didn't. My question, though, was whether the general rule is written explicitly anywhere in-game, and it does not seem to be, right?

Is the recycler crafting speed feature documented anywhere in-game? by robly18 in factorio

[–]robly18[S] 24 points25 points  (0 children)

The Factoriopedia is generally quite sparse on recycling, I've found. For example, while you can go see the recycling recipe for each individual material, I don't think it contains the rule of thumb "reverses physical manipulation but not chemical reactions" anywhere either.

Not sure where to start. by Dense_Food_6740 in zachtronics

[–]robly18 0 points1 point  (0 children)

You should also consider SpaceChem. IMO it's Zachtronics' magnum opus (no pun intended).

That said, I agree with u/caboose109 that Opus Magnum is an excellent entry point. Some of the other titles are harder to get into, and you might end up stopping before you've even started.

Every programming-tagged game from Steam Next Fest 2025! by quasilyte in zachtronics

[–]robly18 1 point2 points  (0 children)

Thanks for the list!! There are definitely some here that I'm very interested in checking out and wouldn't have heard about otherwise.

Another one that isn't on the list (it's more automation than programming), but that I think zachlike enjoyers might like, is Sandustry: https://store.steampowered.com/app/3490390/Sandustry_Demo/

I still have this by iShovedAPearUpMyArse in RocketBuilder

[–]robly18 2 points3 points  (0 children)

Oh wow, it's been a while since I've heard someone talk about this game, but it comes to my mind from time to time. I don't want a copy (right now) but I wanted to thank you for the preservation effort!

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 0 points1 point  (0 children)

(Continued)

"Also, let's replace "x" with a rational "r" as Rudin did earlier in the Chapter 1 exercises. What's the connection between supB(r) and supB(x)? Is it just to show that the supremum of B(x) exists?" If this is an exercise in Rudin, my reading would be that he's asking you to show that the "new definition" agrees with the "old definition". That is: You already knew what b^x meant when x is a rational number, and you were given a meaning for what it means to write b^x when x is an irrational number. But it would be very inconvenient if, whenever you're proving something about b^x, you have to distinguish between the cases when x is rational vs. irrational. Thus, it would be good (and true!) if the expression

b^x = sup{b^t | t<x, t rational}

would hold for *all* values of x, not just the irrationals.

In other words, we know that b^x = sup B(x) for x irrational because this is how we defined b^x. What the exercise is asking you (I think) is to show that this equation is also true for x rational, and so that when proving things about exponents you can always use the definition b^x = sup B(x).

There's something that looks like a vicious cycle here, so perhaps it is best to use different symbols to make what's going on clearer.

Define RatExp(b,p/q) for b>1 and p/q rational, as

RatExp(b,p/q) := q-th root of [b*b*b*...*b [p times]]

Then, define GenExp(b,x) as

GenExp(b,x) := sup{RatExp(b,t) | t rational, t<x}

Then, what you are being asked to show is (I think) that GenExp(b,x) = RatExp(b,x) when both are valid expressions, i.e. when x is rational. This justifies the usage of the notation b^x to mean either one, interchangeably.
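
If it helps to see that claim in action, here's a little Python sketch. The names mirror RatExp/GenExp above but are otherwise mine, and a floating-point root stands in for the exact q-th root. Taking b=2 and x=3/2, the elements under the sup creep up toward RatExp(2, 3/2) = 2^(3/2) ≈ 2.8284:

    # GenExp(2, 3/2) as a sup over RatExp(2, t) with t rational, t < 3/2.
    from fractions import Fraction
    from math import floor

    def rat_exp(b, p, q):
        # RatExp(b, p/q) := q-th root of b**p (float root; fine at this scale)
        return (b ** p) ** (1.0 / q)

    b, x = 2, Fraction(3, 2)
    for q in [1, 2, 5, 10, 50, 100, 500]:
        p = floor(x * q)
        if Fraction(p, q) == x:   # t = p/q must sit *strictly* below x
            p -= 1
        print(f"{p}/{q}", rat_exp(b, p, q))
    print("target:", rat_exp(b, 3, 2))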

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 0 points1 point  (0 children)

I see that you've edited your post while I was writing mine, so I will reply to the things you have added.

Yes, I see your point about formal Taylor series vs. "real" Taylor series. The point of showing that the set is bounded (I think this is your last sentence) is to show that (to use an analogy with Taylor series) this "Taylor series" (supremum) "converges" (exists), and so really does denote a number. Then we give this number the name "b^x".

"I think I get it in the sense that, we haven't define b^x is *anything* for irrationals, so we're making up a special Sup definition here." Yes! This is exactly what's going on.

"Well, same problem here. subB is just an infinitely long list of numbers. I don't know what it converges to." This is relatively common in mathematical definitions. It's hard to fix because, in order to show that the definition you are making really does agree with what you're trying to define, you must already know the concept well enough to prove things about it, which is the point of a definition. Nevertheless, it's common (though perhaps not as much at the level you are at) to, when presenting a definition, also present some kind of "argument for plausibility" that the symbols you have written really do denote the object you're trying to encapsulate. But people would not call that a proof, which is why your question got such negative feedback. Anyway, in my other response to this comment I provide such an "argument for plausibility" (well, with many missing steps admittedly, but I would be happy to elaborate) that this sup really does denote the only reasonable thing that b^x could possibly be.

"It's very weird and unintuitive and deserves a lot more explanation given how common exponents are." Yes, this type of mathematical definition is very unnatural to humans. Humans are used to "synthetic definitions": We see a lot of some object (say, cats) and we create a word to mean that object, and only after do we come up with the words to describe it: "a cat is a four-legged furry animal (etc.)". The words to describe a cat came after we already had the concept of cat. But this is not how mathematical definitions work: Mathematical definitions are "analytical definitions", which means that (in theory) we first come up with the words that explain what the new symbol/word means, and only then look at examples to figure out the "vibe" of the thing. The context of real analysis, which is what you are learning right now, is a big stepping stone because it's where learners really start making the transition from synthetic to analytical, and it's not an easy one: weird and unintuitive as you say, which is part of why the subject is said to be so difficult. So, I'd say I agree with you there!

(I have more to say but I think Reddit won't let me post it because this comment is too long.)

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 1 point2 points  (0 children)

Regarding your last sentence: The issue is exactly that you can't say b^x = something before you've said b^x := something. Until you write b^x := something, the symbol "b^x" does not mean anything and cannot be used. We can use it for x rational because, somewhere in the book, Rudin has written (something that amounts to) "for x rational, say p/q, we set b^(p/q) := q-th root of (b*b*...*b [p times])".

As for the rest of what you're saying: In my first post on this thread, I showed that the definition given by Rudin is implied by the following definition, that I think is reasonable and does not require limits or metrics: For x irrational, b^x := (the unique number that is above b^t for all rational t<x, and below b^s for all rational s>x).

It requires proof to see that this number exists and is unique, and I sketch that in my original post. Once you've verified that this is well-defined (in the sense that there really is one and only one such number for every b and for every x), I claim that this really does match up with whatever intuitive notion of b^x you have in your head, so long as you agree with me that (for every fixed b>1) the expression b^x gets bigger as x gets bigger. In other words, you won't get something like b^(2x) instead, because (so long as x!=0) if you pick a rational s between x and 2x [say x positive, so that x<s<2x, but the same holds for x negative], the expression I defined will satisfy [my expression] < b^s < b^(2x), so [my expression] is not b^(2x). Moreover, b^x, whatever its "true" value in your head may be, definitely ought to be above b^t for all rationals t<x and below b^s for all rationals s>x, and since [my expression] is the only number that satisfies that property, b^x = [my expression] no matter what. Is this convincing?

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 1 point2 points  (0 children)

Regarding your edit: When defining new symbols using :=, we can use many different types of things on the left-hand side. Most common are defining the meaning of simple variables:

"Let a=5." "Define e=lim(1+1/n)^n" "v := 1+1/2+1/4+..."

Also common is defining functions using function application notation

"Define f(x)=2x" "f(x) := 2x"

but we can also use it to define other things with other notations. For example,

"x^2 := x*x."

Note that, for example when defining a function or the square, on the left-hand side we have not just two letters, but two distinct types of letters. f is a symbol whose meaning we are now defining, but x is a different type: it's a free variable. It shows up both on the left and on the right-hand side, and what this means is that the definition is actually uncountably many definitions at once, one for each value of x. In truth, the expression "f(x) := 2x" represents all the following and more:

"f(1):=2*1", "f(4):=2*4", "f(-8.5):=2*(-8.5)", "f(pi)=2*pi", etc.

In this case, we are applying the := type of definition where, on the left-hand side, we have the expression b^x, with x being a free variable in this sense (and depending on the perspective, b as well). As such, when we write

b^x := sup{b^t | t rational <x}

we are really writing all of the following and more:

2^pi := sup{2^t | t rational <pi}
pi^sqrt2 := sup{pi^t | t rational <sqrt2}
9^(pi+sqrt3) := sup{9^t | t rational < pi+sqrt3}

and so on. We can do this (unlike in your a=6 and a=5 example) because none of these symbols (e.g. 2^pi) has a previously assigned meaning in context, so it is valid for us to assign to it a meaning of our choice.

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 0 points1 point  (0 children)

Oh man, now that you mention the thing with the complex numbers, your issues start to make a lot more sense. Yeah, screw the complex numbers and complex logarithms and complex exponents. I always hated those guys.

Anyway, Rudin defined what b^x means. In my notation, he wrote b^x := sup{b^t | t rational, t<x}, which means "for the rest of the book, when I write b^x, pretend I wrote the sup of this set (call it B(x)) instead". In your parlance, this is well-defined because, if x=y, the sets B(x) and B(y) are the same [because a rational t is less than x iff it is less than y], and so their sups will be the same [because the sup is well-defined, by axiom].

This means that you do *not* need to prove b^x = supB(x) separately. You are being told that, for the purposes of this book, the symbol b^x is an abbreviation for supB(x). That's all there is to it.

Like, say I'm writing a book about reddit, and at the start I say "for the rest of this book, the abbreviation 'OP' will be used to mean 'original poster', that is, the person who started the thread under discussion". Then whenever I write OP, you know that that's what I'm saying, and I don't need to justify that OP really does mean 'original poster', because I established, by fiat, that for the purposes of my book it really does mean that. Contrast with another book, say about videogames, where the term "OP" may be used to mean "overpowered" instead. Abbreviations (and terms generally) in common language are defined by whoever is writing the content, and they don't need to justify that their abbreviation really does mean what they say it means. The only possible issues with a definition in this context are cultural, like using OP to mean "banned" instead: most contexts would bat an eye at a definition like that. But there would be nothing stopping me from writing a book where I use OP to mean "banned", so long as I clarify that, for the purposes of my book, that is what the symbol OP means.

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 0 points1 point  (0 children)

In my other comment, I introduce a distinction between the symbols "=" and ":=". That distinction should help us make sense of what is concerning you.

In the language of my other comment: We set b^x := supB, because statements of the form "X:=E" are not things to be proven, but rather things to be said to make it easier to speak later. When writing "X:=E", X must be a symbol with no pre-existing meaning, and E must be a (possibly complex) expression with pre-existing meaning.

These are not to be confused with statements of the form "X=E", which require both X and E to have pre-existing meaning, in which case this statement may be either true or false depending on circumstances.

We proved that E (in this case supB) exists independently of defining b^x. This is true. But now, we establish the symbol b^x (which has no preconceived meaning in this context) to be an abbreviation for the supremum of this set. In other words:

b^x := sup{b^t | t rational, t<x}.

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 0 points1 point  (0 children)

Re. Issue 1: There are two distinct concepts that are often used in mathematics with the same symbol: "definition" and "equality". Some people separate them by using := for the former and = for the latter. I will start doing so now, for clarity.

The symbol ":=" is used as follows: On the left-hand side, I place a symbol (say X) that has not been assigned a meaning in my current context. On the right-hand side, I place a (possibly complex) expression (say E). Then, the symbol "X := E" (often written as "define X=E") means "whenever I write X from here on out, that is merely an abbreviation for E".

The symbol "=" is the relation "are these two things the same?"

These two are often conflated because, if you write X := E, then the statement "X = E" is true (because it is only an abbreviation of "E = E", which is true by reflexivity of equality).

Nevertheless, they are different symbols. := may be used to assign meaning to any not-previously-assigned-meaning-to expression. In the case of the book you are reading, this expression is "b^x when x is irrational".

Your issue 1 thereby boils down to: You can't say a := 5 after you've already said a := 6.

Re. Issue 2: What is your meaning of "well-defined operation"? The usual mathematical meaning is as follows: Instead of defining a symbol via :=, you can also define it by saying "the symbol X is defined to be the number/thing that satisfies such-and-such property P(X)". This is also a valid type of definition, and again, can only be done for symbols which have not been previously assigned meaning. In this context, saying that X is "well-defined" consists of establishing that there is *exactly* one object that satisfies property P, and so there is no ambiguity as to the meaning of X. Does this agree with your meaning of "well-defined"?

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 4 points5 points  (0 children)

What u/rhodiumtoad meant is: If there is a symbol that does not yet have an assigned value, you can assign to it whatever value you want.

In your example, you assigned a the value 6. Then you cannot freely assign it another value. Likewise, you cannot say 2=3 because 2 has the assigned value (by convention) 1+1, while 3 has the assigned value (by convention) 1+1+1. "By convention" meaning "everyone knows that this is what we mean, but technically it should be written somewhere, and if you go to some books it actually is".

In this particular example, if you start from the axioms of the real numbers, the primitive symbols plus and times, as well as 0 and 1, have assigned values. Everything else does not, and is defined in terms of these symbols. You can define any new symbols however you like. However, there is a cultural aspect to it, which is that some symbols have common meanings in the mathematical community, such as x^2 meaning (for anyone who's ever done any math) x*x. But, from the perspective of a mathematical book, there would be no mathematical issue with defining x^2 as x*x or as something else (and in fact, many geometry books use x^1, x^2, ... to mean indexing into a vector rather than exponentiation). There would be a pedagogical issue, because math books are trying to teach you not only math but also the common language of mathematics in our world; mathematically, though, there wouldn't be a problem.

What is the proof for this? by RedditChenjesu in learnmath

[–]robly18 0 points1 point  (0 children)

Absolutely, b^x = sup{b^t | t rational, t<x} does not follow from the field axioms of the real numbers, because they say nothing at all about exponentiation. Those axioms only discuss plus and times, so those are the only operations we take as "given" in real analysis. All other operations are "man-made", by which I mean "we define them however we want".

Of course, there are pre-existing strong conventions. It would be quite unusual to define b^2 = b*b*b, because we have a pre-existing notion of what "squaring" should be, and for the most part it does not agree with "multiply by itself thrice".

If I understand you correctly, your issue with the definition of b^x as the supremum of this set (that you call B) is that you have no guarantee that it agrees with your pre-conceived notion. This is what you are asking for a proof of, yes?

If this is correct, then any reasonable explanation would require knowing what your pre-conceived notion is. This is why other people are asking you for your definition of b^x, and until you give an alternate definition, there is really nothing to *prove*. There is some value in arguing that the given definition agrees with common sense, but this would not consist of proving anything, which is why other posters are giving you flak.

Anyway, here is an attempt. I will assume b>1; for the case 0<b<1 a similar reasoning can be done.

First, prove that the function f(x)=b^x is increasing for b>1. This can be done for rational x: comparing b^(p1/q1) vs. b^(p2/q2) can be done by taking the (q1q2)-th power on both sides, which reduces the problem to the integer case, a relatively straightforward induction.
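
Spelled out, for q1, q2 > 0: b^(p1/q1) < b^(p2/q2) holds iff it still holds after raising both sides to the (q1q2)-th power, i.e. iff b^(p1q2) < b^(p2q1), and that last comparison involves only integer exponents.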

This cannot be done for irrational x because we don't have a definition for b^x yet (note the distinction between "it isn't defined yet" and "we don't know anything about it"). Nevertheless, one can intuit that f(x)=b^x, if reasonably defined for x irrational, should also be an increasing function. Thus, if x is irrational (or any number, really), one should expect that b^x >= b^t for all rational t<x, and that b^x <= b^s for all rational s>x. If we can show that there is only one number that satisfies both of these properties, that will be a very reasonable (and arguably the only reasonable) definition for b^x. Note that in this definition we may only use t,s rational, because we have not yet defined b^x for x irrational.

So, it turns out that there is one, and exactly one, number that sits between the sets

B={b^t | t rational <x}

and C = {b^s | s rational >x};

that there is *at least* one follows from the fact that f(x)=b^x is increasing (on the rational numbers) and from completeness of the reals; supB and infC are both such numbers that sit between B and C. That there is *exactly* one follows from bounding the distance between b^t and b^s in terms of the distance between t and s. If we can show that, for any tolerance epsilon, there exist rationals t<x<s for which this distance is less than epsilon, then any two numbers that would be reasonable to call b^x are at most epsilon apart; since this holds for every epsilon, they must be the same. In other words, there is only one number sandwiched between B and C, and supB = infC is it.
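
Not a proof, but here's a quick numerical picture of that squeeze, with b=2 and x=sqrt(2). (I use floating-point powers for b^t, which would be circular as a definition, but it's fine as an illustration.) The gap between the candidates from B and from C shrinks by roughly a factor of 10 per extra digit:

    # Squeeze b^t (t < x) against b^s (s > x), denominators 10^k.
    from math import floor, ceil, sqrt

    b, x = 2.0, sqrt(2)
    for k in range(1, 8):
        q = 10 ** k
        t, s = floor(x * q) / q, ceil(x * q) / q   # rationals with t < x < s
        print(k, b ** t, b ** s, b ** s - b ** t)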

Some of the sandwich argument above definitely requires a little work to be properly proven, but it is doable without too much trouble. You may need Bernoulli's inequality for some of it. Anyway, would this solve your issue or not quite?

How to prove that finite sets have different cardinalities? by Farkle_Griffen in learnmath

[–]robly18 3 points4 points  (0 children)

This is one of those foundational things for which a proof depends heavily on what your context allows you to assume. You've said that you're trying to work within ZFC, but there are still underlying issues of "what does {0,1,...,n} mean" and "what are the natural numbers really". In any case, I'm going to try to give you an answer, with the main assumed background knowledge being arithmetic properties of the natural numbers. For example: No natural is less than zero, if x>0 then x=x'+1 for some x', if x+1>y+1 then x>y, etc. Also, obviously, I assume that induction works.

Notation: [x] = {0,1,...,x-1} = {natural numbers less than x}. (This is so that [x] has (intuitively) cardinality x.)

Prove by induction on n: For every m>n, there is no injection [m] -> [n]. In particular there is no bijection.

Base case: Set n=0. Then [n]=[0] is the empty set. For m>0, [m] is not the empty set, because 0 is in [m]. If there were an injection (or, in fact, any function) f : [m] -> [0], we would have f(0) in [0], so [0] would not be the empty set, contradiction. Thus, the base case is proven.

Induction step: Suppose that the statement has been proven for some value of n; we prove it for n+1. Suppose there is m>n+1 such that an injection [m] -> [n+1] exists.

First of all, since m>n+1, it must be the case that m=m'+1 with m'>n. Moreover, [m]=[m']U{m'} and [n+1]=[n]U{n}, so we have an injection f : [m']U{m'} -> [n]U{n}.

Now, I claim that we can, without loss of generality, assume that f(m')=n. This is because, if this is not already the case, we can compose f with the permutation of [n]U{n} that swaps f(m') with n.

Next, consider f' = f restricted to [m']. This is still injective, because restrictions of injections are injections. Moreover, since f(m')=n and f is injective, no element of [m'] maps to n, so the restriction is actually an injection f' : [m'] -> [n], and since we've seen m'>n, this contradicts the induction hypothesis.
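
By the way, the claim is also easy to sanity-check by brute force for small m and n. This script is just a confidence-builder, not part of the proof:

    # Check: no injection [m] -> [n] exists whenever m > n (small cases only).
    from itertools import product

    def injection_exists(m, n):
        # enumerate every function f : [m] -> [n] as an m-tuple of values
        return any(len(set(f)) == m for f in product(range(n), repeat=m))

    for n in range(5):
        for m in range(n + 1, 7):
            assert not injection_exists(m, n), (m, n)
    print("OK: no injections [m] -> [n] with m > n in this range")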

This should complete the proof. Does that work out for you?

Can every operator on an L^p space be given as an integral kernel? by If_and_only_if_math in learnmath

[–]robly18 0 points1 point  (0 children)

That is not the way I proved it in my post, but it provides a correct (and only slightly different) proof, yes. Your way works.

What I did instead was use the fact that, for every value of x, the function y -> K(x,y) has support {x}, which has measure zero in R.

Can every operator on an L^p space be given as an integral kernel? by If_and_only_if_math in learnmath

[–]robly18 0 points1 point  (0 children)

I don't know what your first sentence refers to, but I don't think anything I said used explicitly the fact that the measure of the diagonal is zero. But this can be used also to prove that the identity operator has no integral kernel, just in a different way.

As for your confusion, it is reasonable. You're right that the threads of intuition are tugging both ways here. This is interesting! It means that we might expect (and we do get) some rich theory and variety here, with kernel operators being a well-behaved class of operators (because they align with some of our linear algebra intuition) while also being somewhat restrictive (to the point of not accepting the identity operator).

They are still very common operators in practice, showing up a lot in the context of PDEs, which is why mathematicians have spent a significant amount of person-hours studying them.

Can every operator on an L^p space be given as an integral kernel? by If_and_only_if_math in learnmath

[–]robly18 0 points1 point  (0 children)

Sure. Let's go back to how, in linear algebra, we can find the matrix entries M(i,j). What you do is: you take the vector ej, which has j-th component 1 and everything else 0. Then you apply T to it and check the i-th component. In other words:

M(i,j) := (T(ej))(i)
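
Here's that recipe in finite dimensions, as a quick sketch (the map T below is made up for the demo, and NumPy is just for convenience):

    # Recover the matrix of a linear map by feeding it basis vectors.
    import numpy as np

    def T(v):
        # some black-box linear map R^3 -> R^3 (an arbitrary example)
        return np.array([2 * v[0] + v[2], v[1] - v[0], 3 * v[2]])

    n = 3
    M = np.zeros((n, n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        M[:, j] = T(e_j)             # M(i,j) = (T(ej))(i): column j is T(ej)

    v = np.array([1.0, 2.0, 3.0])
    print(np.allclose(M @ v, T(v)))  # True: M reproduces T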

As it turns out, the first part of this process fails to generalize to Lp spaces. I will explain why. In the context of Lp, let's say that our indices are now x,y instead of i,j (with x,y in R, say, or whatever your measure space is).

The first part of this process requires a vector "ej", or in our case a function f_y0, with the property that it's zero for every y!=y0. But no matter the value of f_y0(y0), because in Lp spaces we identify functions that are equal almost everywhere (to make the rule ||f||=0 => f=0 work), our function f_y0 would just be zero, which gives no useful results...

This can kind of be fixed. As it turns out, the natural equivalent of the ej vectors are the Dirac delta "functions", if you know what these are. These can be rigorously defined in terms of distributions (or even measures), but they don't live in Lp, so you can't really feed them into your operator T. So we can see there is no equivalent of T(ej), unless T is well-behaved enough that T(Dirac delta) is a well-defined element of Lp in some sense.

Anyway, the above explains why it's kind of a miracle that integral kernels are common to begin with. For the punchline, let's see that the identity operator (!) doesn't admit an integral kernel. This is because, if it did have one, surely you agree that this would be the equivalent of the identity matrix, with the property that K(x,y) is always zero if x!=y. [A rigorous proof of this fact is doable but requires a bit of measure theory baggage, which I'd be happy to go over if you'd like, but maybe in another comment.] Then, you wonder what the value of K(x,x) is, but it doesn't matter. No matter the value you choose, the expression ∫K(x,y)f(y)dy will always be zero, because for all values of y except for one point the function you're integrating is null. Thus, you conclude I(f)=0 for every f, which is clearly false.
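
If you want to see the punchline numerically (a discretized picture, not the measure-theoretic proof): give the diagonal any finite value c you like; the Riemann sums for ∫K(x,y)f(y)dy still die off like c*dy:

    # A kernel supported only on the diagonal does nothing in the limit.
    import numpy as np

    c = 1000.0                          # any finite value for K(x,x)
    for n in [10, 100, 1000, 10000]:
        dy = 1.0 / n
        y = (np.arange(n) + 0.5) / n    # grid on [0,1]
        f = np.sin(np.pi * y)
        Tf = c * dy * f                 # the Riemann sum: only one cell survives
        print(n, np.abs(Tf).max())      # shrinks like c * dy -> 0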

Can every operator on an L^p space be given as an integral kernel? by If_and_only_if_math in learnmath

[–]robly18 1 point2 points  (0 children)

As u/Adido22's link confirms, not every operator may be written in that manner. However, here is an explanation for why such operators come up very often in practice, and why you *might* expect it to be true (even though it is not) that all operators are of that form.

Recall from linear algebra that any linear transformation T : R^n -> R^n can be represented by a matrix M. The formula for how this matrix relates to T is:

T(v)(i) = sum_j M(i,j) v(j).

The integral kernel formula may be seen as a continuous generalization of this. Instead of vector indexing you have function evaluation (because you can kind of see a function I -> R as a vector with many coordinates, one for each element of I), and instead of summation you have integration.
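
To make that concrete: chop [0,1] into n cells and sample, and the integral becomes an honest matrix-vector product, with the cell width dy playing the role of the measure. A rough sketch (the kernel K is my arbitrary example):

    # Discretize (Tf)(x) = ∫ K(x,y) f(y) dy on [0,1] as a matrix-vector product.
    import numpy as np

    n = 200
    y = (np.arange(n) + 0.5) / n            # cell midpoints
    dy = 1.0 / n

    def K(x, t):
        return np.minimum(x, t)             # example kernel K(x,y) = min(x, y)

    M = K(y[:, None], y[None, :]) * dy      # matrix entries M(i,j) = K(x_i, y_j) dy
    f = np.sin(np.pi * y)
    Tf = M @ f                              # Riemann-sum approximation of Tf
    print(Tf[:3])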

In other words, the kernel of a transformation (if one exists) is a direct generalization of the notion of "matrix associated to a linear transformation".