Scenario C: Decision Trees and Ensembles

You train a decision tree classifier for churn with different maximum depths and observe the following test performance:

  Depth 2:  Accuracy 0.78, Recall(churn) 0.30
  Depth 6:  Accuracy 0.82, Recall(churn) 0.40
  Depth 20: Accuracy 0.80, Recall(churn) 0.28

In business terms, the biggest tradeoff when moving from a single tree to a random forest is:
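The depth sweep described in the scenario can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data (make_classification stands in for the real churn dataset, so the printed numbers will not match those quoted above):

```python
# Depth sweep for a decision tree classifier on imbalanced synthetic data.
# NOTE: the dataset here is synthetic; metrics will differ from the scenario.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score

# Roughly 20% positives to mimic a churn-like class imbalance.
X, y = make_classification(
    n_samples=2000, n_features=10, weights=[0.8, 0.2], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

results = {}
for depth in (2, 6, 20):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    pred = tree.predict(X_te)
    results[depth] = (accuracy_score(y_te, pred), recall_score(y_te, pred))

for depth, (acc, rec) in results.items():
    print(f"depth={depth:2d}  accuracy={acc:.2f}  recall(churn)={rec:.2f}")
```

The shape of the scenario's numbers (recall peaking at a moderate depth, then dropping again at depth 20) is the classic underfitting-to-overfitting pattern that such a sweep is meant to expose.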

For the same Scenario C setup (decision trees of depth 2, 6, and 20 with the test performance listed above), which model is usually easiest to explain to a non-technical manager?
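What "easiest to explain" means concretely: a depth-2 tree compresses to a handful of if/else rules that can be read aloud. A minimal sketch, again on synthetic data (the feature names are made up for illustration):

```python
# A depth-2 tree has at most 4 leaves, i.e. at most 4 plain-language rules.
# Feature names here are hypothetical stand-ins for churn predictors.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
shallow = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

rules = export_text(
    shallow, feature_names=["tenure", "monthly_fee", "support_calls", "usage"]
)
print(rules)  # a short rule list a manager can follow line by line
```

A depth-20 tree or a random forest of hundreds of trees admits no such compact printout, which is the usual basis for answering this question.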

Scenario B: Customer Churn Classification

A subscription business wants to predict whether a customer will churn (cancel) next month. Target: churn (1 = churned, 0 = stayed). The business cares more about catching likely churners than about occasionally flagging a loyal customer.

A confusion matrix on the test set (positive class = churn):

              Predicted 0   Predicted 1
  Actual 0        720            80
  Actual 1        150            50

Given the business goal, the most appropriate next step is:
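The metrics implied by that confusion matrix can be computed directly, and they make the business problem concrete:

```python
# Cells from the confusion matrix above (positive class = churn).
tn, fp = 720, 80   # Actual 0 row
fn, tp = 150, 50   # Actual 1 row

recall = tp / (tp + fn)                    # churners actually caught
precision = tp / (tp + fp)                 # churn flags that were right
accuracy = (tp + tn) / (tn + fp + fn + tp)

print(f"recall(churn)={recall:.3f}")       # 0.250
print(f"precision(churn)={precision:.3f}") # ~0.385
print(f"accuracy={accuracy:.3f}")          # 0.770
```

Recall on the churn class is only 0.25: the model misses 3 out of every 4 churners despite a respectable 0.77 accuracy. Since the business explicitly prefers catching churners over occasionally flagging loyal customers, this points toward steps that raise recall, such as lowering the decision threshold at the cost of some precision.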