In class, an analogy was used to explain why relying on a single decision tree can produce unstable predictions, and why aggregating many trees improves predictive reliability. Using the same analogy presented in class:

(a) Explain why predictions from a single decision tree can vary substantially with small changes in the training data. (2 points)

(b) Explain how Random Forest reduces this instability. (2 points)

(c) Suppose 100 identical trees were built using the exact same training data and the same features. Would averaging their predictions improve stability? Explain why or why not. (1 point)