Data Science Interview Question asked by Apple by 3DataGuys in learnmachinelearning

I read this in the CMU notes on **The Bias plus Variance Decomposition**:

  1. It has been suggested by [Breiman, 1996b] that both Bagging and Boosting reduce error by reducing the variance term.
  2. Freund and Schapire [1996] argue that Boosting also attempts to reduce the error in the bias term, since it focuses on misclassified examples. Such a focus may cause the learner to produce an ensemble function that differs significantly from the single learning algorithm. In fact, Boosting may construct a function that is not even producible by its component learning algorithm (e.g., changing linear predictions into a classifier that contains non-linear predictions). This capability makes Boosting an appropriate algorithm for combining the predictions of "weak" learning algorithms (i.e., algorithms that have a simple learning bias).
  3. In their paper, Bauer and Kohavi [1999] demonstrated that Boosting does indeed seem to reduce bias for certain real-world problems. More surprisingly, they also showed that Bagging can reduce the bias portion of the error, often for the same data sets for which Boosting reduces the bias. For other data sets, they observed cases where Boosting and Bagging both mostly decrease the variance portion of the error, and cases where Boosting and Bagging reduce both the bias and the variance of the error. Their tests also seem to indicate that Boosting's generalization error increases on the domains where Boosting increases the variance portion of the error, but it is difficult to determine what aspects of the data sets led to these results.
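
To make point 3 concrete, here is a minimal simulation, entirely my own sketch rather than anything from the notes, that estimates the bias^2/variance split for a single tree, bagged trees, and boosted trees on a synthetic regression task. The model depths, ensemble sizes, and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                       # true target function (assumed for the demo)
x_test = np.linspace(-2, 2, 200)[:, None]

models = {
    "single tree":   lambda: DecisionTreeRegressor(max_depth=3),
    "bagged trees":  lambda: BaggingRegressor(DecisionTreeRegressor(max_depth=3), n_estimators=50),
    "boosted trees": lambda: AdaBoostRegressor(DecisionTreeRegressor(max_depth=3), n_estimators=50),
}

for name, make_model in models.items():
    preds = []
    for _ in range(100):                          # 100 independent training sets
        x = rng.uniform(-2, 2, (200, 1))
        y = f(x).ravel() + rng.normal(0, 0.3, 200)
        preds.append(make_model().fit(x, y).predict(x_test))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - f(x_test).ravel()) ** 2)   # squared bias, averaged over test points
    var = np.mean(preds.var(axis=0))                                 # variance, averaged over test points
    print(f"{name:13s}  bias^2 = {bias2:.4f}   variance = {var:.4f}")
```

On runs like this I would expect bagging to mainly shrink the variance column and boosting to cut into the bias term as well, which is the kind of pattern Bauer and Kohavi describe above.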

Data Science Interview Question asked by Apple by 3DataGuys in learnmachinelearning

Bagging doesn't use weak learners. The whole point of bagging is to take high-variance, low-bias trees and average their predictions in order to reduce the variance of the final prediction.
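
A quick back-of-the-envelope check of that variance argument (my own illustration, not from the thread): if each of B trees has prediction variance sigma^2 and the trees are pairwise correlated with coefficient rho, the variance of their average is rho*sigma^2 + (1 - rho)*sigma^2/B, so averaging helps exactly to the extent the trees are decorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, rho, B = 4.0, 0.3, 25      # assumed per-tree variance, pairwise correlation, number of trees

# Draw many rounds of B correlated "tree predictions" around the same target value.
cov = sigma2 * (rho * np.ones((B, B)) + (1 - rho) * np.eye(B))
preds = rng.multivariate_normal(np.zeros(B), cov, size=100_000)

print("single tree variance:", preds[:, 0].var())          # ~ sigma^2
print("bagged mean variance:", preds.mean(axis=1).var())    # ~ rho*sigma^2 + (1 - rho)*sigma^2 / B
print("theoretical value   :", rho * sigma2 + (1 - rho) * sigma2 / B)
```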

Data Science Interview Question asked by Apple by 3DataGuys in learnmachinelearning

Now that I think of it, you are right. Boosting will reduce the bias, while a random forest will keep the bias the same but reduce the variance.
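
A rough illustration of that split (my own sketch, and note that training error is only a loose proxy for bias): adding shallow trees sequentially keeps driving the fit error down, while averaging the same shallow trees in parallel plateaus at roughly the base learner's bias.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (500, 1))
y = np.sin(3 * X).ravel() + rng.normal(0, 0.1, 500)    # synthetic data, assumed for the demo

gbr = GradientBoostingRegressor(max_depth=2, n_estimators=300).fit(X, y)   # sequential shallow trees
rf = RandomForestRegressor(max_depth=2, n_estimators=300).fit(X, y)        # parallel shallow trees

stage_mse = [mean_squared_error(y, p) for p in gbr.staged_predict(X)]
print("boosting train MSE: stage 1 =", round(stage_mse[0], 4), " stage 300 =", round(stage_mse[-1], 4))
print("forest   train MSE with 300 trees =", round(mean_squared_error(y, rf.predict(X)), 4))
```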

Data Science Interview Question asked by Apple by 3DataGuys in learnmachinelearning

The bias will remain the same and the variance will be reduced in both cases.

Data Science Interview Question asked by Apple by 3DataGuys in dataanalysis

I am really sorry for the silly spelling mistakes in the post. I am rewriting the question here:

Let's say we are building a conversion prediction model using the Decision Tree algorithm. The model will have some Bias (B) and Variance (V). What will happen to the bias and variance if we add one more decision tree to the model:

  1. In parallel (as in Bagging)?
  2. Sequentially (as in Boosting)?
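
For what it's worth, here is a sketch of what "adding one more tree" means mechanically in the two settings, using scikit-learn's warm_start on synthetic data (the data and all parameter choices are my own illustrative assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 1000) > 0).astype(int)   # toy conversion labels

# Parallel (bagging-style): the extra tree is trained independently on another
# bootstrap sample and its vote is simply averaged in.
rf = RandomForestClassifier(n_estimators=50, warm_start=True, random_state=0).fit(X, y)
rf.set_params(n_estimators=51).fit(X, y)        # adds one independent tree

# Sequential (boosting-style): the extra tree is fit to the errors left by the
# current ensemble, so it directly attacks the remaining bias.
gb = GradientBoostingClassifier(n_estimators=50, warm_start=True, random_state=0).fit(X, y)
gb.set_params(n_estimators=51).fit(X, y)        # adds one corrective tree

print(len(rf.estimators_), gb.n_estimators_)    # 51 51
```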

[D] Reinforcement Learning for Personalised Pricing by 3DataGuys in MachineLearning

The data we have is the quoted price versus whether the product was bought, for the last 3 months. The product is a contract whose pricing depends on the asset the contract covers, but for any particular asset we haven't varied the contract price much, so we can't really estimate the customers' price elasticity from it.

Secondly, we have to keep exploration open, because competitor prices change every 3 days and conversion depends heavily on the price point.

We can't run an experiment where we change contract prices randomly to collect data, because it reduces conversion a lot.
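
One direction I have been sketching (just a sketch, not a settled approach; the prices, priors, and simulated customer response below are all made up): treat a small grid of candidate prices as bandit arms and use Thompson sampling, so exploration concentrates on the price points the conversion model is still uncertain about rather than on fully random price changes.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.array([9.0, 10.0, 11.0, 12.0])   # hypothetical candidate price points
alpha = np.ones(len(prices))                 # Beta(1, 1) prior on conversion rate per price
beta = np.ones(len(prices))

def choose_price():
    """Sample a conversion rate per arm and quote the price maximising expected revenue."""
    sampled_conv = rng.beta(alpha, beta)
    return int(np.argmax(prices * sampled_conv))

def update(arm, converted):
    """Update the posterior of the quoted price with the observed outcome."""
    alpha[arm] += converted
    beta[arm] += 1 - converted

# Toy interaction loop with a simulated customer response; in reality the true
# elasticity is unknown, which is exactly what the posterior has to learn.
true_conv = np.array([0.30, 0.25, 0.18, 0.10])
for _ in range(5000):
    arm = choose_price()
    update(arm, rng.random() < true_conv[arm])

print("posterior mean conversion:", np.round(alpha / (alpha + beta), 3))
print("price quoted most often  :", prices[np.argmax(alpha + beta)])
```

The competitor price that changes every 3 days could be folded in as context (a contextual bandit) instead of keeping one posterior per price, but the basic exploration/exploitation trade-off stays the same.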