Ref: https://github.com/WegraLee/deep-learning-from-scratch-2 The Word2Vec (W2V) model has two variants: Continuous Bag of Words (CBOW) and Skip-gram. CBOW predicts the center word conditioned on the neighboring words within a given window during training. Skip-gram predicts the neighboring words conditioned on the center word within a given window during training. Which one is not true for the W2V model? ______________
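The CBOW/Skip-gram distinction above can be sketched by contrasting the training pairs each variant produces. This is a minimal illustration; the toy corpus, window size of 1, and variable names are assumptions, not part of the question.

```python
# Toy corpus and window size (illustrative assumptions).
tokens = ["the", "quick", "brown", "fox", "jumps"]
window = 1

cbow_pairs = []      # CBOW: (context words -> center word)
skipgram_pairs = []  # Skip-gram: (center word -> one context word)

for i, center in enumerate(tokens):
    # Neighboring words within the window, excluding the center word itself.
    context = [tokens[j]
               for j in range(max(0, i - window), min(len(tokens), i + window + 1))
               if j != i]
    # CBOW predicts the center word from all its context words at once.
    cbow_pairs.append((context, center))
    # Skip-gram predicts each context word from the center word.
    for c in context:
        skipgram_pairs.append((center, c))

print(cbow_pairs[1])       # (['the', 'brown'], 'quick')
print(skipgram_pairs[:2])  # [('the', 'quick'), ('quick', 'the')]
```

Note that both variants are built from the same windows; they differ only in which side of the pair is the prediction target.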

Bonus question (5 points): An economist wants to study the effect of sentiment in online wine product reviews on the stock price of the wine firm, using generative AI, natural language processing (NLP), deep learning, and econometrics. Before starting the research project, the economist asks a generative AI about prior literature related to these topics in a table format. Ref: OpenAI. ChatGPT 4 [Large language model]. https://chat.openai.com; this is a real example based on a 2024 result. However, no such research exists yet; the generative AI therefore provides non-existent literature reviews. This type of problem is called (1)__________ (a. overfitting, b. copyright issue, c. privacy issue, d. hallucination, e. computational resource constraints; 3 points). In addition, the pre-training datasets of LLMs often rely on web data. Biases in web data can influence Gen AI. (2) ___________ (a. True, b. False; 2 points). Ref: Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., … & Wen, J. R. (2023). A survey of large language models. arXiv preprint.