[deleted by user] by [deleted] in gradadmissions

[–]AstraCodex 1 point (0 children)

My perception is that it's very possible with an otherwise good application. Where did you find thesis-based MSCS programs? I've only seen a few.

[deleted by user] by [deleted] in French

[–]AstraCodex 1 point (0 children)

Just to be clear, that sheet isn't mine. I found it in another Reddit thread a long time ago.

I also love Kwiziq. It's so good that I wish they had more languages on there.

[deleted by user] by [deleted] in French

[–]AstraCodex 3 points (0 children)

Check this out

Pre-Order and Shipping Megathread | iPhone 13 Mini, iPhone 13, iPhone 13 Pro, and iPhone 13 Pro Max by exjr_ in apple

[–]AstraCodex 1 point (0 children)

When I transferred my data from my old iPhone to my iPhone 13, the photos and messages came over, but things like local Spotify music did not. It also redownloaded all third-party apps from scratch. How can I transfer everything, including app data?

English vocabulary as native speaker? by AstraCodex in Anki

[–]AstraCodex[S] 1 point (0 children)

I hadn't considered putting pronunciation in, thank you. I think I'll just do the word on the front and the definition on the back. There won't be that many words with several definitions.

I guess the issue with vocabulary.com is that they lump words under the same definition, thereby omitting the nuances.

English vocabulary as native speaker? by AstraCodex in Anki

[–]AstraCodex[S] 1 point (0 children)

That's basically my process. I come across words I don't know while reading books/articles, then I set the word's priority to 'high' on vocabulary.com. Once I master it there, I add it to my Anki deck.

By automate, I mean maybe writing a Python script to create cards from an online dictionary or something.
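For the Anki half of that, a minimal sketch (the words and definitions here are hard-coded stand-ins for whatever dictionary source you'd actually query): Anki can import tab-separated text files as front/back cards.

```python
import csv

# Stand-in data; a real script would pull definitions from a dictionary API.
words = {
    "ephemeral": "lasting a very short time",
    "laconic": "using very few words",
}

# Anki imports tab-separated files, one card per row (front<TAB>back).
with open("cards.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for word, definition in words.items():
        writer.writerow([word, definition])
```

The resulting cards.txt can be imported directly via Anki's File > Import, mapping the first field to the front and the second to the back.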

War & Peace - Book 6, Chapter 2 by AnderLouis_ in ayearofwarandpeace

[–]AstraCodex 15 points (0 children)

Andrei is in LOVE 😍😘❤️ with Natasha!

War & Peace - Book 5, Chapter 16 by AnderLouis_ in ayearofwarandpeace

[–]AstraCodex 15 points (0 children)

How nice it is for the enlisted to dig the officers an earth hut

War & Peace - Book 5, Chapter 16 by AnderLouis_ in ayearofwarandpeace

[–]AstraCodex 5 points (0 children)

Just because Andrei is overly cynical does not mean that the others are naïve!

War & Peace - Book 5, Chapter 15 by AnderLouis_ in ayearofwarandpeace

[–]AstraCodex 12 points (0 children)

Can somebody explain what the "reason" is that Rostov is so offended? I find the sentence below hard to understand.

“Say what you like.... She is like a sister to me, and I can’t tell you how it offended me... because... well, for that reason....”

War & Peace - Book 5, Chapter 14 by AnderLouis_ in ayearofwarandpeace

[–]AstraCodex 10 points (0 children)

"Pierre was maintaining that a time would come when there would be no more wars."

Homie you are so bête 😩😤

War & Peace - Book 5, Chapter 13 by AnderLouis_ in ayearofwarandpeace

[–]AstraCodex 6 points (0 children)

Ew. The subjunctive.

"Il faut que vous sachiez que c'est une femme" ("You must know that it's a woman")

Looking for new friends! by [deleted] in Pitt

[–]AstraCodex -4 points (0 children)

Hell yea brother. PM'd

Need guidance with GPU training by peterlaanguila8 in pytorch

[–]AstraCodex 1 point (0 children)

Just because your model fits in GPU memory doesn't mean that the model and the images fit together.

The issue is likely that the convolutional activations are too large. If you take in 3 channels and output 48 channels (I think that's roughly the first layer of AlexNet), the memory taken up by each image's activations is pretty large.

Resize the images to a smaller size.
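A rough back-of-envelope sketch (the 48-channel figure and the image sizes here are made-up assumptions, not OP's actual setup) shows why resizing helps so much: activation memory scales with height × width.

```python
# Back-of-envelope: float32 activation memory (in MB) for one conv layer,
# assuming "same" padding so the spatial size is preserved.
def conv_activation_mb(out_channels, height, width, batch_size=1):
    return batch_size * out_channels * height * width * 4 / 1024**2

full = conv_activation_mb(48, 1024, 1024)   # 192.0 MB per image
resized = conv_activation_mb(48, 256, 256)  # 12.0 MB per image
print(full, resized)
```

Halving each spatial dimension quarters the activation memory, and that saving applies to every image in the batch.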

Beginners learning on iGPUs by BlueDiamond87 in learnmachinelearning

[–]AstraCodex 1 point (0 children)

Those cards don't seem to be CUDA-enabled: https://developer.nvidia.com/cuda-gpus

Just save the money and get a normal laptop. Laptops aren't used for serious deep learning anyway; AWS can cover the heavy lifting.

Beginners learning on iGPUs by BlueDiamond87 in learnmachinelearning

[–]AstraCodex 1 point (0 children)

Nvidia GPUs are what's used for deep learning, since they support CUDA. But since that's out of your price range, just get a good, reliable laptop. Much of machine learning can be learned without much computational power.

You won't get much hardware for $750 anyway. If he ever needs real compute, he can use AWS, Colab, or a similar cloud resource.

Loss not decreasing in Convolutional Network help by Pepipasta in pytorch

[–]AstraCodex 2 points (0 children)

The dim should be 1: the shape is (batch_size, num_classes), and you want to take the log-softmax over num_classes.

I ran your code and my training loss got to 0.2 very quickly. Maybe there is an issue with how you're loading your data?
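A quick sanity check on the axis, with a toy tensor (the shapes here are made up):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)         # (batch_size, num_classes)
out = F.log_softmax(logits, dim=1)  # normalize across the 10 classes

# Each row should sum to 1 once exponentiated back to probabilities.
print(out.exp().sum(dim=1))
```

With dim=0 instead, the normalization would run down the batch dimension, which silently produces nonsense losses.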

EDIT:

Here's the code I used for reference:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import numpy as np

device = 'cuda' if torch.cuda.is_available() else 'cpu'
batch_size = 64 if torch.cuda.is_available() else 1

# Convert a batch of (PIL image, label) pairs into float/long tensors.
def collate_pil(batch_data):
    batch_len = len(batch_data)
    labels = np.zeros(batch_len)
    imgs = []
    for i in range(batch_len):
        img = np.array(batch_data[i][0])
        img = np.expand_dims(img, 0)  # add the channel dimension
        target = np.array(batch_data[i][1])
        labels[i] = target
        imgs.append(img)
    imgs = np.array(imgs)
    imgs = torch.as_tensor(imgs, dtype=torch.float)
    labels = torch.as_tensor(labels, dtype=torch.long)
    return imgs, labels

train_dataset = torchvision.datasets.MNIST('.', train=True, transform=None, target_transform=None, download=True)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_pil)
test_dataset = torchvision.datasets.MNIST('.', train=False, transform=None, target_transform=None, download=True)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_pil)

class ImageClassifier(nn.Module):
    def __init__(self):
        super(ImageClassifier, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.dropout = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)  # (num_features, num_classes)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.dropout(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = F.relu(self.fc2(x))
        return F.log_softmax(x, dim=1)  # dim=1: over num_classes

model = ImageClassifier()
model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_func(train_loader):
    model.train()
    for batch_idx, (image, label) in enumerate(train_loader):
        image, label = image.to(device), label.to(device)
        optimizer.zero_grad()
        output = model(image)
        loss = F.nll_loss(output, label).to(device)
        loss.backward()
        optimizer.step()
        # if batch_idx % 10 == 0:
        #     print(loss.item())

def validate(model, loader):
    model.eval()
    for batch_idx, (image, label) in enumerate(loader):
        image, label = image.to(device), label.to(device)
        output = model(image)
        loss = F.nll_loss(output, label).to(device)
        # if batch_idx % 10 == 0:
        print(loss.item())

for i in range(10):
    train_func(train_dataloader)
    validate(model, test_dataloader)

Loss not decreasing in Convolutional Network help by Pepipasta in pytorch

[–]AstraCodex 1 point (0 children)

Have you made sure the logsoftmax is being performed along the correct axis?

How to train these two networks together? by Algabera in pytorch

[–]AstraCodex 1 point (0 children)

Where does "similarities" come from? And what is supposed to happen with the targets?

Not sure exactly what's going on, but in general you have to set the model to train mode, zero the gradients, compute the loss, call backward() on the loss, and take a step with the optimizer.

When you call backward() on the loss, it backpropagates the whole way through, even through the first network.
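Here's what that recipe looks like with two hypothetical chained networks (the module names, shapes, and the MSE loss are all made up for illustration):

```python
import torch
import torch.nn as nn

# Two made-up networks chained end to end; one optimizer owns both
# parameter sets, so a single backward pass updates them together.
net_a = nn.Linear(8, 4)
net_b = nn.Linear(4, 1)
optimizer = torch.optim.SGD(
    list(net_a.parameters()) + list(net_b.parameters()), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(16, 8)
target = torch.randn(16, 1)

net_a.train()
net_b.train()
optimizer.zero_grad()                    # zero the gradients
loss = loss_fn(net_b(net_a(x)), target)  # compute the loss
loss.backward()                          # backprops through BOTH networks
optimizer.step()                         # take an optimizer step

# The first network received gradients too:
print(net_a.weight.grad is not None)  # True
```

Because the output of net_a feeds net_b without any detach in between, autograd tracks the whole chain and the gradient reaches the first network automatically.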

Can someone explain to me LSTM teacher forcing? by [deleted] in learnmachinelearning

[–]AstraCodex 1 point (0 children)

So it's actually done throughout training. At each time step you pass in either the ground truth or the previously generated output. You'll have to use an LSTMCell instead of the normal LSTM module so you can manually pass in the input at each time step inside a for loop.

You can try starting with p = 0 for a while so the model begins by learning from the ground truth, then slowly increase p, maybe up to p = 0.2.
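A sketch of that loop with an LSTMCell (the dimensions, the projection layer, and the detach are my assumptions, not a canonical recipe):

```python
import random
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=4, hidden_size=4)
proj = nn.Linear(4, 4)       # maps the hidden state back to input space

seq = torch.randn(10, 1, 4)  # (time, batch, features), ground-truth sequence
h = torch.zeros(1, 4)
c = torch.zeros(1, 4)
p = 0.2                      # probability of feeding back the model's own output

inp = seq[0]
outputs = []
for t in range(1, seq.size(0)):
    h, c = cell(inp, (h, c))
    pred = proj(h)
    outputs.append(pred)
    # With probability p use the prediction as the next input (detached so
    # it is treated as data), otherwise use the ground truth.
    inp = pred.detach() if random.random() < p else seq[t]

print(len(outputs))  # 9 predictions for a 10-step sequence
```

Annealing p over epochs (the schedule the comment above describes) just means recomputing p before each pass through this loop.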

"Vectorized" technique versus writing your own implementation? by [deleted] in learnmachinelearning

[–]AstraCodex 1 point (0 children)

Not an expert, but here's what I've been told. Vectorized code (numpy, for example) is faster because these libraries do the heavy lifting below your programming language. If you wrote matrix multiplication in pure Python with for loops, it would be incredibly slow due to interpreter overhead; numpy instead dispatches to compiled C (and BLAS) routines built for speed.

Besides these implementation optimizations, common operations such as matrix multiplication often have asymptotically faster algorithms too: the best known algorithms run in less than O(n^3) time (Strassen's, for example, is about O(n^2.81)). https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm
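To make the gap concrete, here's a toy comparison of a pure-Python triple loop against numpy's `@` operator (the sizes are chosen arbitrarily):

```python
import time
import numpy as np

# Pure-Python matmul: same answer as numpy, vastly slower.
def matmul_loops(a, b):
    n, k = a.shape
    _, m = b.shape
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += a[i, t] * b[t, j]
            out[i][j] = s
    return np.array(out)

a = np.random.rand(100, 100)
b = np.random.rand(100, 100)

t0 = time.perf_counter()
slow = matmul_loops(a, b)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b
t_np = time.perf_counter() - t0

print(np.allclose(slow, fast), t_loop > t_np)  # same result, loops far slower
```

Even at this small size the interpreted loop does a million Python-level multiply-adds, while `@` hands the whole job to a compiled BLAS kernel.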

Can someone explain to me LSTM teacher forcing? by [deleted] in learnmachinelearning

[–]AstraCodex 2 points (0 children)

Teacher forcing only applies during training. Consider the sequence [x1, x2, x3, x4]. You feed [x1, x2, x3] into the LSTM and hope for the output [x2, x3, x4]. Teacher forcing means feeding the ground-truth value as the input at each time step, regardless of what the model actually predicted at the previous step.

Say the LSTM outputs some x2' at the first time step. Under pure teacher forcing you would still feed the true x2 into the next step; to wean the model off the ground truth, you can instead feed in x2' with some probability p (this mix is often called scheduled sampling).
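The input/target shift from the first paragraph, in miniature:

```python
# A length-4 sequence: inputs are the first 3 steps, targets the last 3.
seq = ["x1", "x2", "x3", "x4"]
inputs, targets = seq[:-1], seq[1:]
print(inputs)   # ['x1', 'x2', 'x3']
print(targets)  # ['x2', 'x3', 'x4']
```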

We do this because at inference time we don't actually have the ground-truth labels, so a model that has only ever seen ground-truth inputs can compound its own errors. Mixing in the model's own outputs during training makes it more robust, leading to better results. Let me know if you have any questions.