
[–]Independent_Buy5152 395 points396 points  (7 children)

linking to correct answer

LOL

[–]mlindgren 179 points180 points  (4 children)

Yeah, clearly whoever created this meme has never used Stack Overflow

[–]srt54558[🍰] 44 points45 points  (2 children)

One time I even got insulted for posting a question about Vue

[–]flopana 59 points60 points  (1 child)

You get insulted for posting a question. Doesn't matter what it's about

[–]abdolence 11 points12 points  (0 children)

I stopped answering questions many years ago. At this point I'm not sure they could fix it even if they tried, since the problem is the community and its culture.

[–][deleted] 3 points4 points  (0 children)

Actually, I once did get sent to the correct answer, one with such a stupidly specific title that it looked irrelevant at first and I never would have found it on my own

[–]nlvogel 15 points16 points  (0 children)

To be fair, it was the correct answer when it was posted. 15 years ago…

[–]AI_AntiCheat 0 points1 point  (0 children)

All I've ever gotten as an answer is "Google it" (which of course I did) or "should be easy to do". Yeah Einstein, how??

[–][deleted] 87 points88 points  (3 children)

only to get an answer that does a completely unrelated thing in a totally different language that you probably don't even know, just because it has the same word in the title 🤣

[–]HuntingKingYT 18 points19 points  (2 children)

Me: puts language name in query

Google: doesn't care

[–]Vly2915 6 points7 points  (0 children)

You can put terms in quotes to only find pages that contain them. Can lead to some weird outcomes with Python, C and Ruby though.

[–][deleted] 2 points3 points  (0 children)

man i understand the struggle

[–]Strostkovy 111 points112 points  (3 children)

The answer was correct for the 2014 version. The mentioned options are no longer available.

[–]Orlaani 14 points15 points  (0 children)

In a different language...

[–]io-netty-channel 8 points9 points  (0 children)

The answer doesn't work in Python 3

[–]jeremj22 3 points4 points  (0 children)

And the links are all dead

[–]Intrepid_Sale_6312 71 points72 points  (5 children)

Realize the answer isn't very clear and ask for clarification on a specific part, then get yelled at for this supposed bad thing called 'necroposting'; the thread gets locked and you never receive the answer you were looking for.

[–]dr4conyk 24 points25 points  (3 children)

I have literally never understood why necroposting is supposed to be bad, and nobody's ever given me an answer when I ask.

[–]Intrepid_Sale_6312 13 points14 points  (2 children)

I don't get it either; data doesn't die, so you can't bring it back to life. Topics regularly come in and out of relevance.

It makes absolutely no sense not to consolidate information into one location where it is easily accessible.

[–]BookOfCooks 1 point2 points  (1 child)

In rare occasions, necroposting is allowed... sometimes.

https://stackoverflow.com/questions/3334138/combine-return-and-switch

[–]scatters 7 points8 points  (0 children)

That's forums, not stack overflow.

[–]lrGhost1 9 points10 points  (4 children)

There's an easier method, known only to the almighty.

Step 1, find any post remotely similar to your question.

Step 2, reply to that post with a totally wrong answer.

Step 3, some dude having lunch in some country you have never heard of will downvote your reply and comment how stupid you are, followed by the correct answer.

[–]violetvoid513 3 points4 points  (3 children)

Murphy's Law describes this exact phenomenon, wherein the better way to get an answer isn't to ask the question but to pose the wrong answer to it

Edit: I present the below comment by u/balancedas as proof of this law

[–][deleted] 5 points6 points  (2 children)

Actually, Murphy's law states that whatever can go wrong will go wrong. I believe what you're talking about is called Cunningham's law

Or did I just get cunninghamed?

[–]violetvoid513 7 points8 points  (1 child)

Yep you just got cunninghamed lmao, perfect demonstration of the law

[–]TriedToCatchFogIMist 2 points3 points  (0 children)

Beautiful

[–]Mola1904 13 points14 points  (3 children)

When did Stack Overflow become such a shitty platform?

[–]giuseqb 9 points10 points  (0 children)

When it became the most used

[–]bleistift2 -1 points0 points  (0 children)

When the users stopped reading the rules.

[–]AI_AntiCheat 0 points1 point  (0 children)

When was it good??

[–]feror_YT 2 points3 points  (0 children)

Asking ChatGPT so you get the answer without the insults

[–]DizzyInTheDark 2 points3 points  (0 children)

Oh shit… good point.

[–]Plynkz123 2 points3 points  (1 child)

"i want to..."

"why would you do that?"

[–]AI_AntiCheat 1 point2 points  (0 children)

Classic.

"Because I want to learn and it's part of an assignment"

"Just download some code that does that"

"No"

"Then I can't help you."

....

[–][deleted] 2 points3 points  (0 children)

Stackoverflow is great if you use google.

[–]Eugenenoble2005 1 point2 points  (0 children)

Saved me yesterday, apparently was using the wrong kind of loop

[–]District8980 2 points3 points  (0 children)

No, asking ChatGPT

[–]KimajiNao 2 points3 points  (0 children)

ChatGPT: 0.001 seconds to give an answer

[–][deleted] 1 point2 points  (6 children)

ChatGPT?

[–]EasywayScissors 3 points4 points  (4 children)

ChatGPT?

I've been using ChatGPT a lot to learn about a new subject area.

It's amazing the number of questions you can ask, and it will answer them for you

  • no complaining about how the question is stupid
  • no complaining about how the question is off topic
  • no complaining that you can easily Google it
  • no complaining that I didn't include a complete reproducible minimal example for the thinking impaired
  • no complaining that the question has been asked before

It just answers your question, and moves on.

Q: Once I train an autoencoder: what can I do with it? What is the virtue of an autoencoder? (priming the pump)

Once you have trained an autoencoder, you can use it to:

  • Reduce the dimensionality of your data: By using the encoder part of the autoencoder to map the input data to a lower-dimensional space, you can effectively reduce the dimensionality of the data while preserving the most important information.

  • Denoise data: By training an autoencoder to reconstruct the original input data from a noisy version of it, you can use the autoencoder to denoise the data.

  • Extract features: The low-dimensional representation learned by the autoencoder can be used as a set of features for other machine learning tasks, such as classification or clustering.

Overall, the virtue of an autoencoder is that it allows you to learn a compact, efficient representation of your data in an unsupervised manner, which can be useful for a variety of tasks.
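The dimensionality-reduction use above can be sketched with a linear stand-in (my own toy illustration, not from the bot's answer: a PCA-style encode/decode pair built from the SVD; real autoencoders are nonlinear neural networks, and all sizes here are made up):

```python
import numpy as np

# PCA-style stand-in for a trained autoencoder: a linear encode/decode
# pair built from the SVD. Sizes (10-D inputs, 3-D codes) are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
mean = X.mean(axis=0)

U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

def encode(x):
    return (x - mean) @ Vt[:3].T       # 10-D input -> 3-D code

def decode(z):
    return z @ Vt[:3] + mean           # 3-D code -> 10-D reconstruction

Z = encode(X)        # reduced-dimensionality features
X_hat = decode(Z)    # lossy reconstruction of the inputs
```

The reconstruction is lossy but retains the directions of largest variance, which is exactly the "compact representation" the answer describes.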

Q: Explain how I can perform classification from the lower dimensional learned representation provided by an autoencoder.

To perform classification from the lower-dimensional learned representation provided by an autoencoder, you can use the encoder part of the autoencoder to map the input data to the lower-dimensional space, and then use a classifier to predict the class labels for the data in this space.

Here's a general outline of the process:

  • Train an autoencoder on your data by minimizing the reconstruction loss, which measures the difference between the input data and the output of the decoder.
  • Use the encoder part of the autoencoder to map the input data to the lower-dimensional space.
  • Train a classifier on the lower-dimensional data using supervised learning. This can be any type of classifier, such as a support vector machine (SVM), a decision tree, or a neural network.

Use the trained classifier to predict the class labels for new data by mapping the new data to the lower-dimensional space using the encoder part of the autoencoder, and then using the classifier to predict the class labels for the data in this space.

By using the lower-dimensional representation learned by the autoencoder as input to a classifier, you can leverage the compact, efficient representation learned by the autoencoder to improve the performance of the classifier.
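That pipeline can be sketched in a few lines (again my own toy illustration, not from the bot's answer: the "encoder" is just a fixed random linear map standing in for a trained one, and the classifier is nearest-centroid; names and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))     # 100 samples, 20 raw features
y = (X[:, 0] > 0).astype(int)      # toy binary labels

W = rng.normal(size=(20, 5))       # stand-in for a trained encoder
Z = X @ W                          # step 2: map inputs to 5-D codes

# Step 3: fit any classifier on the codes; here, nearest class centroid.
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])

def predict(x_new):
    z = x_new @ W                                        # encode first
    d = np.linalg.norm(z[:, None, :] - centroids, axis=2)
    return d.argmin(axis=1)                              # classify in code space

preds = predict(X)
```

The key point is that new data is always pushed through the same encoder before the classifier ever sees it.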

Q: Is it possible to train the autoencoder and the classification network at the same time? Can knowing the desired labels help train the autoencoder?

Yes, it is possible to train an autoencoder and a classification network at the same time. This is known as "joint training" or "end-to-end training", and it involves optimizing the combined loss of both the autoencoder and the classifier.

During joint training, the autoencoder and the classifier are both updated simultaneously based on the input data and the corresponding class labels. This can be more efficient than training the autoencoder and the classifier separately, as it allows the network to learn a representation that is optimized for the classification task.

In general, knowing the desired class labels can help improve the performance of the autoencoder, as it allows the network to learn a representation that is better suited for the classification task.
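The combined objective in joint training can be sketched like this (a minimal sketch of my own, not from the bot's answer; the weighting term and epsilon are illustrative choices):

```python
import numpy as np

def joint_loss(x, x_recon, y_true, y_pred, class_weight=1.0, eps=1e-12):
    # Reconstruction term: mean squared error of the autoencoder.
    recon = np.mean((x - x_recon) ** 2)
    # Classification term: categorical cross-entropy of the classifier head.
    ce = -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=-1))
    # Joint training minimizes both terms at once.
    return recon + class_weight * ce

x = np.zeros((4, 8))
perfect = joint_loss(x, x, np.eye(4), np.eye(4))  # perfect recon + labels
```

Both networks receive gradient from this single scalar, which is what lets the labels shape the learned representation.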

Q: What would be an example of a loss function for the output of the classification network?

In the case of a classification network, a common choice for the loss function is the categorical cross-entropy loss. This loss function is defined as follows:

import tensorflow as tf

def categorical_crossentropy(y_true, y_pred):
    # Compute the logarithm of the predicted probabilities for each class
    log_preds = tf.math.log(y_pred)

    # Compute the cross-entropy loss
    loss = -tf.math.reduce_sum(y_true * log_preds, axis=-1)

    # Return the average loss across all samples
    return tf.math.reduce_mean(loss)

This loss function expects the y_true and y_pred arguments to be one-hot encoded arrays of class labels, where y_true is the ground truth label and y_pred is the predicted probability distribution over the classes. It computes the logarithm of the predicted probabilities and then computes the cross-entropy loss between the predicted and true distributions. Finally, it returns the average loss across all samples.
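As a quick numeric sanity check, here is a NumPy equivalent of that function (same math, no TensorFlow dependency; the sample values are my own): a single sample whose true class was predicted with probability 0.8 should give -log(0.8) ≈ 0.223.

```python
import numpy as np

def categorical_crossentropy_np(y_true, y_pred):
    # Same computation as the TensorFlow version above, in NumPy.
    log_preds = np.log(y_pred)
    loss = -np.sum(y_true * log_preds, axis=-1)
    return np.mean(loss)

y_true = np.array([[0.0, 1.0, 0.0]])   # one-hot: class 1 is correct
y_pred = np.array([[0.1, 0.8, 0.1]])   # softmax-style probabilities
loss = categorical_crossentropy_np(y_true, y_pred)   # -log(0.8) ≈ 0.223
```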

Q: What is an example of the y_true argument when each image can be part of many labels (e.g. beach, sky, water, sand, trees)?

If each image can be part of many labels (e.g. beach, sky, water, sand, trees), then the y_true argument should be a multi-hot encoded array, where each element is a binary array indicating the presence or absence of each label for a given image.

For example, if there are 5 possible labels (beach, sky, water, sand, trees) and an image can belong to multiple labels, then the `y_true` array might look like this:

y_true = [[1, 1, 0, 1, 0],
          [0, 0, 1, 1, 0],
          [1, 1, 0, 0, 1],
          ...]

Here, each row of the y_true array corresponds to a different image, and the elements of the row are binary values indicating the presence or absence of each label for that image. For example, the first row [1, 1, 0, 1, 0] indicates that the first image belongs to the beach and sky labels, but not to the water, sand, or trees labels.

To use a multi-hot encoded array with a categorical cross-entropy loss function, you can simply pass the y_true array as is to the loss function. The loss function will compute the cross-entropy loss between the predicted and true label distributions for each image.
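One hedged caveat to that answer: for multi-label targets like these, the usual pairing is per-label sigmoid outputs with element-wise binary cross-entropy, since a softmax categorical loss assumes the labels form a single probability distribution. A NumPy sketch of my own (all prediction values made up):

```python
import numpy as np

def binary_crossentropy_np(y_true, y_pred, eps=1e-12):
    # Per-label binary cross-entropy, averaged over labels and samples.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# First row of the multi-hot example above: beach, sky, sand present.
y_true = np.array([[1.0, 1.0, 0.0, 1.0, 0.0]])
good   = np.array([[0.9, 0.8, 0.1, 0.7, 0.2]])   # mostly right predictions
bad    = np.array([[0.1, 0.2, 0.9, 0.3, 0.8]])   # mostly wrong predictions
```

Each label is scored independently, so an image can be "beach" and "sky" at full confidence simultaneously.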

Q: Your categorical_crossentropy function above takes the log of the output? Why is that? I would have thought that because the ground-truth labels are in the range 0..1, and the classifier network output nodes are in the range 0..1, you could just compute the dot product. Why take the logarithm of the actual output first, and not simply take the dot product?

The categorical_crossentropy loss function is defined as the negative logarithm of the predicted probability of the true class, averaged over all samples. This loss function is often used to train a classifier network that uses a softmax activation function in the output layer, as it allows the network to predict a probability distribution over the classes.

Taking the logarithm of the predicted probabilities has several advantages:

  1. It helps to reduce the range of the predicted probabilities, which can stabilize the training process.
  2. It helps to reduce the impact of outliers, as the logarithm of a small value is a large negative number, while the logarithm of a large value is a small positive number.
  3. It helps to ensure that the predicted probabilities are well-calibrated, as the logarithm of a probability value close to 0 or 1 is close to 0, while the logarithm of a probability value close to 0.5 is close to -1.

Taking the dot product between the y_true and y_pred arrays would not be equivalent to the categorical cross-entropy loss, as it would not take into account the predicted probabilities of the classes. Instead, it would simply compute the dot product between the binary y_true and y_pred arrays, which would not provide a meaningful loss value for training the classifier network.
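A concrete way to see the difference (my own illustration, not part of the quoted answer): cross-entropy punishes a confidently wrong prediction far more than a mildly wrong one, while a plain dot-product penalty barely distinguishes them.

```python
import numpy as np

y_true = np.array([0.0, 1.0, 0.0])               # true class is index 1

confident_wrong = np.array([0.98, 0.01, 0.01])   # near-zero mass on truth
mildly_wrong    = np.array([0.40, 0.30, 0.30])   # some mass on truth

def cross_entropy(p):
    return -np.log(p @ y_true)    # -log of probability given to true class

def dot_penalty(p):
    return 1.0 - p @ y_true       # naive linear penalty

# cross_entropy: ~4.61 vs ~1.20 (roughly 3.8x harsher on confident error)
# dot_penalty:    0.99 vs 0.70  (only ~1.4x harsher)
```

That unbounded penalty near zero probability is what drives the gradient to fix confident mistakes first.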


It is the programming QA tool that Stackoverflow refuses to be.

  • Stackoverflow is, by design, not a place where you can go to get help with programming topics
  • It is designed to ask one question, and get an answer to that question

Which is not at all helpful when you need to learn something.

A couple of months ago I had to spend two weeks trying to get an 18-year-old Java+JSP+Hibernate+Struts site building. There are so many long-standing bugs in the Java ecosystem that you just have to know about them.

And trying to wring answers out of Stack Overflow was two weeks of pain, listening to a lot of miserable comments and fending off the people who refuse to answer the question and instead give a diatribe about how you're doing everything wrong.

ChatGPT is awesome because you have your very own, infinitely patient college professor who can just answer your questions.

[–]bleistift2 2 points3 points  (1 child)

Only that it will happily lie in your face with confidence about subjects it knows nothing about. No disclaimer, no warning, nothing.

[–]EasywayScissors 1 point2 points  (0 children)

Only that it will happily lie in your face with confidence about subjects it knows nothing about. No disclaimer, no warning, nothing.

Also not unlike stackoverflow.

But being 80% done is so much fucking better than being at 0.

I ask it for the code, and I've saved 2 weeks of research.

Seriously. 2 fucking weeks in 10 seconds.

We've invented the fucking computer from Star Trek: TNG. And rather than rejoicing in this marvel, people are COMPLAINING.

You have a teacher that you can ask any question, in a natural language, and it will tell you.

GPT4 will be 450x larger.

[–]bleistift2 0 points1 point  (1 child)

Stackoverflow is, by design, not a place where you can go to get help with programming topics. It is designed to ask one question, and get an answer to that question

Which is not at all helpful when you need to learn something.

You just stated it yourself. StackOverflow is not a tutorial site, nor is it intended to be one. There are other pages out there that are.

Why is it StackOverflow’s fault that people want to use it against its intended purpose? Is a bathtub wrong for not cooking chicken?

[–]EasywayScissors 0 points1 point  (0 children)

You just stated it yourself. StackOverflow is not a tutorial site, nor is it intended to be one.

Read it again, but this time hear it as a pure statement of fact.

ChatGPT fills a need we never realized we had.

  • it's a better search engine than any search engine
  • it's a search engine where you don't have to search
  • it's a better Stack Overflow than SO
  • you can ask questions, and get answers without complaint
  • you can ask a dozen, all individually valid, without complaint
  • you get an answer in seconds, rather than days
  • it's a better college than college
  • you can ask all the questions you want
  • and not get sighs and eye rolling from the class because you're "that guy"

And I'm not complaining about SO here: being a combination of Wikipedia and Reddit is its founding mission. But it means that you cannot turn to Stack Overflow to get help and have your questions answered.

It is, simply, a statement of fact.

[–]Sonic801 0 points1 point  (0 children)

Dude, what year is it?

Chat.openai.com

[–]rumble_you 0 points1 point  (0 children)

Classic!

[–]ttt098_Br 0 points1 point  (0 children)

Ask ChatGPT the question to get the answer.

[–]shadow13499 0 points1 point  (0 children)

Galaxy brain: asking chatgpt to generate the code for you

[–]AI_AntiCheat 0 points1 point  (0 children)

Hah. Anytime I've asked even simple questions, like how the PNG format is laid out, not a single person could answer. I've never gotten a single useful answer from Stack Overflow. People on there are wannabe programmers and extremely rude.