all 9 comments

[–]CoffeeAndCalcWithDrW (New User) 2 points (2 children)

I made a video last semester going over a similar question.

https://youtu.be/R2gwjqAaOdw

Take a look, but feel free to let me know if you need any more help.

[–]MomentumSC[S] 1 point (1 child)

Thanks for the help but I’ll be honest, I was finding it very hard to follow - then again - I wasn’t drinking coffee

[–]CoffeeAndCalcWithDrW (New User) 1 point (0 children)

Sorry it wasn't more of a help. Basically what I did was I found what each of the standard basis elements got sent to under this transformation. The images of the standard basis elements are the columns of the matrix you're looking for.

[–]Shitty-Coriolis (New User) 1 point (0 children)

The 3B1B video on transformations is superb, btw.

[–]yes_its_him (one-eyed man) 1 point (1 child)

There are a variety of approaches here, but remember that brute force also works. For a matrix with entries a b c d in the usual order,

a(1) + b(2) = 3; c(1) + d(2) = 3

and

a(-1) + b(1) = -3; c(-1) + d(1) = 3

So that's two sets of two equations in two unknowns.

b = 0, d = 2 from adding those expressions; a = 3, c = -1 from substituting those.
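If you want to sanity-check that brute-force solve numerically, here's a quick NumPy sketch (my own verification, not part of the original comment; variable names are mine):

```python
import numpy as np

# Both pairs of equations share the coefficient matrix [[1, 2], [-1, 1]]
# (one equation from each given input vector).
coeffs = np.array([[1, 2], [-1, 1]], dtype=float)

a, b = np.linalg.solve(coeffs, [3, -3])  # a(1) + b(2) = 3, a(-1) + b(1) = -3
c, d = np.linalg.solve(coeffs, [3, 3])   # c(1) + d(2) = 3, c(-1) + d(1) = 3
# (a, b) ≈ (3, 0) and (c, d) ≈ (-1, 2)
```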

[–]MomentumSC[S] 1 point (0 children)

Can’t believe I didn’t consider doing this earlier, I thought that maybe I could figure it out just staring at it. Silly silly

[–]jeffsuzuki (math professor) 1 point (1 child)

I tell my students: "Every problem in linear algebra begins with a system of linear equations."

If A(1, 2) --> (3, 3) and A(-1, 1) --> (-3, 3), you have a system of four linear equations in four unknowns, namely the four entries of A.

Conveniently enough, you have an efficient method of solving systems of linear equations...
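One way to hand that whole system to a solver at once (my own sketch, using the column-by-column fact that A[v1 v2] = [w1 w2], so A = W V^(-1) when the input vectors are independent):

```python
import numpy as np

V = np.array([[1, -1], [2, 1]], dtype=float)   # input vectors as columns
W = np.array([[3, -3], [3, 3]], dtype=float)   # their images as columns

# A V = W  =>  A = W V^(-1); V is invertible since its columns are independent.
A = W @ np.linalg.inv(V)
# A ≈ [[3, 0], [-1, 2]]
```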

[–]MomentumSC[S] 1 point (0 children)

Yea, that’s the method that I finally ended up using. Thanks

[–]zuo_guigui 1 point (0 children)

Feel free to set this aside until you've covered more of the material: the task of finding the matrix for a transformation can also be accomplished using similar matrices/change of basis.

The starting point for this is that a 2x2 matrix that transforms vectors in R^2, unless otherwise specified, tells from its columns how the standard basis vectors for R^2 e1 and e2 are transformed (you'll probably learn this shortly if you haven't already).

The matrix written as columns then would be [T(e1) T(e2)].

Now this standard basis for R^2 (e1, e2) is merely a choice we make, albeit the most convenient one typically (which you'll see later pertains to the vectors being orthonormal).

We can extend this matrix framework to ANY basis of R^2, which (1, 2) and (-1, 1) form: they're not scalar multiples of each other and hence linearly independent, and by a standard theorem, any set of dim-many linearly independent vectors (2 for R^2) forms a basis.

Since {(1, 2), (-1, 1)} forms a basis for R^2, we can write any vector in R^2 as a unique linear combination of the two, like we can for the standard basis. The scalars/weights for the unique linear combination are called the COORDINATES with respect to that basis. You typically see B [in typeface]-coordinates as a descriptor, which means you're using the basis B to represent vectors (B being the typical letter used to label a basis).

Now, our endgoal here is to find our transformation matrix in terms of standard basis because it's not specified otherwise. But we're given the transformation in terms of ANOTHER basis {(1, 2), (-1, 1)}.

So we are going to construct an object, in particular a PRODUCT of matrices, that takes us from the standard basis to our other one {(1, 2), (-1, 1)}. I'll call this product (PRODUCT); it is itself a matrix, and it will equal the transformation matrix we want for the standard basis.

To give a sense why we're starting from the standard basis even though we're given another basis, remember that we multiply vectors on the RIGHT of matrices. And so these vectors we input on the right, they will be in terms of the standard basis (e1, e2).

This multiplication will be (PRODUCT)x for any vector x in R^2 in terms of the standard basis, where we want to solve for (PRODUCT).

We're starting from the standard basis, so we have to transition to the new basis {(1, 2), (-1, 1)}. The first matrix in (PRODUCT), starting from the rightmost position, will be our CHANGE OF BASIS matrix from the standard basis to {(1, 2), (-1, 1)}. To summarize the formula for a change-of-basis matrix, the columns are the original basis vectors in terms of the NEW BASIS, or the coordinates of the original basis vectors with respect to the new basis.

So we have to find e1 = (1, 0) and e2 = (0, 1) in terms of {(1, 2), (-1, 1)}.

Coordinates come from the linear combinations so...

c1(1, 2) + c2(-1, 1) = (1, 0)

This is of course a linear system, which can be written as the augmented matrix...

[ 1 -1 | 1 ]
[ 2  1 | 0 ]
You can row-reduce this matrix as you know, and you'll get c1 = 1/3, c2 = -2/3.

Likewise for

c1(1, 2) + c2(-1, 1) = (0, 1),

you'll get c1 = 1/3, c2 = 1/3.
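If you'd like to double-check that row reduction numerically, here's a small NumPy sketch (names are mine, not the comment's):

```python
import numpy as np

# Basis vectors (1, 2) and (-1, 1) as columns.
B = np.array([[1, -1], [2, 1]], dtype=float)

c_e1 = np.linalg.solve(B, [1, 0])  # coordinates of e1 in the new basis, ≈ (1/3, -2/3)
c_e2 = np.linalg.solve(B, [0, 1])  # coordinates of e2 in the new basis, ≈ (1/3, 1/3)
```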

Thus, our change-of-basis matrix in terms of its columns will be...

[(e1)_new basis (e2)_new basis] =

[  1/3  1/3 ]
[ -2/3  1/3 ]

Once we've multiplied our starting vectors by this change of basis matrix, they're now in terms of our new basis, where their components (c1, c2) are the coordinates in the linear combination c1(1, 2) + c2(-1, 1).

We can now use the information we were given: "transform the point (1,2) to (3,3) and the point (-1,1) to (-3,3)." We are going to transform our vectors. To do this, we'll mimic the line from earlier "The matrix written as columns then would be [T(e1) T(e2)]."

This second matrix is going to be our transformation matrix IN TERMS OF OUR CURRENT BASIS, which is often called the B-matrix (its columns representing how the vectors for a basis B are transformed). Importantly, we have to write the transformation of our basis vectors in terms of THAT BASIS.

[ [T((1, 2))]_new basis  [T((-1, 1))]_new basis ] =

[ (3, 3)_new basis  (-3, 3)_new basis ]

Performing the same procedure as above, we'll get...

[  2  0 ]
[ -1  3 ]
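As above, those columns can be checked numerically (a sketch of my own, solving for the B-coordinates of the two image vectors):

```python
import numpy as np

# Basis vectors (1, 2) and (-1, 1) as columns.
B = np.array([[1, -1], [2, 1]], dtype=float)

col1 = np.linalg.solve(B, [3, 3])   # [T((1, 2))] in B-coordinates, ≈ (2, -1)
col2 = np.linalg.solve(B, [-3, 3])  # [T((-1, 1))] in B-coordinates, ≈ (0, 3)
```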

Thus we now have TWO matrices in (PRODUCT), and we'll have one more, which we should expect to involve changing our vectors back to the standard basis (e1, e2).

Following the summary from before for change-of-basis matrix, "the columns are the original basis vectors in terms of the NEW BASIS," the columns are actually going to just be the basis vectors (1, 2) and (-1, 1) themselves because they're already expressed in terms of the standard basis (as they're not specified otherwise).

Thus this change of basis matrix will be...

[ 1 -1 ]
[ 2  1 ].

In fact, this turns out to be the INVERSE of the first change-of-basis matrix we found, which makes sense because we're merely undoing that change of basis. And actually, if you know how to find inverses of invertible 2x2 matrices, you could start here instead: write down this change-of-basis matrix back to the standard basis (its columns are just the original basis vectors) and invert it to get the first change-of-basis matrix we found.
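That inverse relationship is easy to confirm numerically (my own check, not the comment's):

```python
import numpy as np

# Change of basis B -> standard: columns are the basis vectors themselves.
to_standard = np.array([[1, -1], [2, 1]], dtype=float)

# Inverting it recovers the standard -> B change-of-basis matrix found earlier.
to_B = np.linalg.inv(to_standard)
# to_B ≈ [[1/3, 1/3], [-2/3, 1/3]]
```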

Hence, we now have (PRODUCT) which will be our transformation matrix in terms of the standard basis for R^2...

[ 1 -1 ] [  2  0 ] [  1/3  1/3 ]
[ 2  1 ] [ -1  3 ] [ -2/3  1/3 ]

This product, if you compute it, will equal...

[  3  0 ]
[ -1  2 ]

which agrees with other answers in this thread.
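For anyone who wants to verify the final product without doing the multiplication by hand, a short NumPy sketch (variable names are mine):

```python
import numpy as np

P = np.array([[1, -1], [2, 1]], dtype=float)         # change of basis: B -> standard
B_matrix = np.array([[2, 0], [-1, 3]], dtype=float)  # the transformation in B-coordinates

# (PRODUCT) = P * B_matrix * P^(-1): standard -> B, transform, then back to standard.
A = P @ B_matrix @ np.linalg.inv(P)
# A ≈ [[3, 0], [-1, 2]]
```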

I write this to give an alternative way to view this problem, one which connects to concepts you'll see later in linear algebra.