Data Mining for Transportation (交通数据挖掘技术), Southeast University - China University MOOC: Answer Key

Test 1

1. Which one is not a description of data mining?
   A. Extraction of interesting patterns or knowledge
   B. Exploration and analysis by automatic or semi-automatic means
   C. Discovery of meaningful patterns from large quantities of data
   D. Appropriate statistical analysis methods to analyze the data collected
   Answer: D

2. Which one describes the right process of knowledge discovery?
   A. Selection - Preprocessing - Transformation - Data mining - Interpretation/Evaluation
   B. Preprocessing - Transformation - Data mining - Selection - Interpretation/Evaluation
   C. Data mining - Selection - Interpretation/Evaluation - Preprocessing - Transformation
   D. Transformation - Data mining - Selection - Preprocessing - Interpretation/Evaluation
   Answer: A

3. Which one does not belong to the process of KDD?
   A. Data mining
   B. Data description
   C. Data cleaning
   D. Data selection
   Answer: B

4. Which one is not a right alternative name for data mining?
   A. Knowledge extraction
   B. Data archeology
   C. Data dredging
   D. Data harvesting
   Answer: D

5. Which one is not a nominal variable?
   A. Occupation
   B. Education
   C. Age
   D. Color
   Answer: C

6. Which one is wrong about classification and regression?
   A. Regression analysis is a statistical methodology that is most often used for numeric prediction.
   B. We can construct classification models (functions) without any training examples.
   C. Classification predicts categorical (discrete, unordered) labels.
   D. Regression models predict continuous-valued functions.
   Answer: B

7. Which one is wrong about clustering and outliers?
   A. Clustering belongs to supervised learning.
   B. Principles of clustering include maximizing intra-class similarity and minimizing inter-class similarity.
   C. Outlier analysis can be useful in fraud detection and rare-events analysis.
   D. An outlier is a data object that does not comply with the general behavior of the data.
   Answer: A

8. About data processing, which one is wrong?
   A. When making data discrimination, we compare the target class with one or a set of comparative classes (the contrasting classes).
   B. When making data classification, we predict categorical labels excluding unordered ones.
   C. When making data characterization, we summarize the data of the class under study (the target class) in general terms.
   D. When making data clustering, we group data to form new categories.
   Answer: B

9. Outlier mining, such as density-based methods, belongs to supervised learning. (True/False)
   Answer: False

10. Support vector machines can be used for both classification and regression. (True/False)
    Answer: True

Test 2

1. Which is not a reason we need to preprocess the data?
   A. To save time
   B. To make the result meet our hypothesis
   C. To avoid unreliable output
   D. To eliminate noise
   Answer: B

2. Which is not one of the major tasks in data preprocessing?
   A. Clean
   B. Integration
   C. Transition
   D. Reduction
   Answer: C

3. How is a new feature space constructed by PCA?
   A. By choosing the features you think are most important.
   B. By normalizing the input data.
   C. By selecting features randomly.
   D. By eliminating the weak components to reduce the size of the data.
   Answer: D
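Question 3's answer can be sketched directly. Assuming scikit-learn is available (the data below is synthetic, and this is my illustration rather than the course's code), PCA projects the samples onto a new feature space and the weak components are dropped to shrink the data:

```python
# Minimal PCA sketch: keep the strongest principal components, eliminate
# the weak ones, and the data shrinks from 5 columns to 2.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                     # 200 samples, 5 features
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=200)  # one nearly redundant feature

pca = PCA(n_components=2)        # retain 2 components, eliminate the rest
Z = pca.fit_transform(X)         # the new feature space: shape (200, 2)
print(Z.shape, pca.explained_variance_ratio_)
```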

4. Which one is wrong about methods for discretization?
   A. Histogram analysis and binning are both unsupervised methods.
   B. Clustering analysis only belongs to top-down splitting.
   C. Interval merging by χ² (chi-square) analysis can be applied recursively.
   D. Decision-tree analysis is entropy-based discretization.
   Answer: B

5. Which one is wrong about equal-width (distance) partitioning and equal-depth (frequency) partitioning?
   A. Equal-width partitioning is the most straightforward, but outliers may dominate the presentation.
   B. Equal-depth partitioning divides the range into N intervals, each containing approximately the same number of samples.
   C. The intervals of the former are not equal.
   D. The number of tuples is the same when using the latter.
   Answer: C

6. Which one is a wrong way to normalize data?
   A. Min-max normalization
   B. Simple scaling
   C. Z-score normalization
   D. Normalization by decimal scaling
   Answer: B
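The normalization methods named in question 6 and the two partitioning schemes from question 5 are short enough to write out. A minimal NumPy sketch on made-up numbers (my illustration, not the lecture code):

```python
import numpy as np

x = np.array([12.0, 15.0, 18.0, 20.0, 30.0, 35.0, 70.0, 95.0])

# Min-max normalization: rescale to [0, 1].
minmax = (x - x.min()) / (x.max() - x.min())

# Z-score normalization: zero mean, unit standard deviation.
zscore = (x - x.mean()) / x.std()

# Normalization by decimal scaling: divide by 10^j so all |values| < 1.
j = int(np.ceil(np.log10(np.abs(x).max())))
decimal = x / 10 ** j

# Equal-width (distance) partitioning: 4 bins of identical width.
# A single outlier would stretch these edges (why option A of Q5 holds).
width_edges = np.linspace(x.min(), x.max(), 4 + 1)

# Equal-depth (frequency) partitioning: 4 bins with ~equal tuple counts,
# so the interval widths are generally unequal.
depth_edges = np.quantile(x, np.linspace(0, 1, 4 + 1))

print(minmax, zscore, decimal, sep="\n")
print(width_edges, depth_edges, sep="\n")
```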

7. Which are the right ways to fill in missing values?
   A. Smart mean
   B. Probable value
   C. Ignore
   D. Falsify
   Answer: A, B, C

8. Which are the right ways to handle noisy data?
   A. Regression
   B. Clustering
   C. WT (wavelet transform)
   D. Manual inspection
   Answer: A, B, C, D

9. Which are right about wavelet transforms?
   A. Wavelet transforms store large fractions of the strongest of the wavelet coefficients.
   B. The DWT decomposes each segment of a time series via the successive use of low-pass and high-pass filtering at appropriate levels.
   C. Wavelet transforms can be used for reducing data and smoothing data.
   D. Wavelet transforms apply to pairs of data, resulting in two sets of data of the same length.
   Answer: B, C
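To see why options B and C of question 9 are correct (and why D is not), here is an unnormalized one-level Haar step in NumPy. This is a simplified stand-in for the DWT used in the course, on made-up numbers: a low-pass (pairwise average) and a high-pass (pairwise difference) filter each produce a half-length coefficient set, and dropping the near-zero detail coefficients both reduces and smooths the data.

```python
import numpy as np

x = np.array([2.0, 2.0, 4.0, 4.0, 6.0, 6.0, 8.0, 8.0])

pairs = x.reshape(-1, 2)                   # process the data in pairs
approx = pairs.mean(axis=1)                # low-pass filter: smooth trend
detail = (pairs[:, 0] - pairs[:, 1]) / 2   # high-pass filter: fine detail

print(approx)   # [2. 4. 6. 8.]  half length, smoothed
print(detail)   # [0. 0. 0. 0.]  half length, can be dropped entirely
```

Each output is half the input's length, so a transform yielding "two sets of data of the same length" (option D) does not describe the DWT. Applying the same step to approx again gives the successive levels mentioned in option B.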

10. Which are the commonly used ways of sampling?
    A. Simple random sample without replacement
    B. Simple random sample with replacement
    C. Stratified sample
    D. Cluster sample
    Answer: A, B, C, D

11. Discretization means dividing the range of a continuous attribute into intervals. (True/False)
    Answer: True

Test 3

1. What is the difference between an eager learner and a lazy learner?
   A. Eager learners generate a model for classification while lazy learners do not.
   B. Eager learners classify the tuple based on its similarity to the stored training tuples while lazy learners do not.
   C. Eager learners simply store data (or do only a little minor processing) while lazy learners do not.
   D. Lazy learners generate a model for classification while eager learners do not.
   Answer: A

2. How to choose the optimal value for K?
   A. Cross-validation can be used to determine a good value by using an independent dataset to validate the K values.
   B. Low values for K (like k=1 or k=2) can be noisy and subject to the effect of outliers.
   C. A large k value can reduce the overall noise, so the value for k can be as big as possible.
   D. Historically, the optimal K for most datasets has been between 3 and 10.
   Answer: A, B, D
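Answer A of question 2 takes only a few lines of scikit-learn. A minimal sketch, assuming scikit-learn and its bundled iris sample data (an illustration, not the course's code), searching the historically useful range of roughly 3 to 10:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation over candidate K values from 3 to 10.
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": list(range(3, 11))},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)   # the K with the best cross-validated accuracy
```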

3. What are the major components of KNN?
   A. How to measure similarity?
   B. How to choose k?
   C. How are class labels assigned?
   D. How to decide the distance?
   Answer: A, B, C

4. Which of the following ways can be used to obtain attribute weights for attribute-weighted KNN?
   A. Prior knowledge / experience.
   B. PCA, FA (factor analysis method).
   C. Information gain.
   D. Gradient descent, simplex methods and genetic algorithms.
   Answer: A, B, C, D

5. At the learning stage, KNN finds the K closest neighbors and then decides the class from the K identified nearest labels. (True/False)
   Answer: False

6. At the classification stage, KNN stores all instances, or some typical ones among them. (True/False)
   Answer: False

7. Normalizing the data can solve the problem that different attributes have different value ranges. (True/False)
   Answer: True

8. By Euclidean distance or Manhattan distance, we can calculate the distance between two instances. (True/False)
   Answer: True

9. Data normalization before measuring distance can avoid errors caused by different dimensions, self-variations, or large numerical differences. (True/False)
   Answer: True

10. The way to obtain the regression value for a new instance from the k nearest neighbors is to calculate the average value of the k neighbors. (True/False)
    Answer: True

11. The way to obtain the classification for a new instance from the k nearest neighbors is to take the majority class of the k neighbors. (True/False)
    Answer: True

12. The way to obtain instance weights for distance-weighted KNN is to calculate the reciprocal of the squared distance between the object and its neighbors. (True/False)
    Answer: True
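Several of the statements above (normalization before measuring distance, Euclidean distance, majority vote for classification, averaging for regression, and 1/d² instance weights) fit into one small from-scratch function. A minimal sketch with made-up data; the function and variable names are my own, not the course's:

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3, classify=True):
    # Min-max normalize so attributes with large value ranges
    # do not dominate the distance (questions 7 and 9).
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    Xn = (X_train - lo) / (hi - lo)
    xn = (x_new - lo) / (hi - lo)

    d = np.sqrt(((Xn - xn) ** 2).sum(axis=1))   # Euclidean distance (Q8)
    idx = np.argsort(d)[:k]                     # the k nearest neighbors
    w = 1.0 / (d[idx] ** 2 + 1e-12)             # instance weights 1/d^2 (Q12)

    if classify:
        votes = {}                              # weighted majority class (Q11)
        for label, weight in zip(y_train[idx], w):
            votes[label] = votes.get(label, 0.0) + weight
        return max(votes, key=votes.get)
    return np.average(y_train[idx], weights=w)  # weighted average (Q10)

X = np.array([[1.0, 100.0], [2.0, 110.0], [8.0, 400.0], [9.0, 390.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([1.5, 105.0])))   # nearest mass is class 0
```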

Test 4

1. Which descriptions are right about nodes in a decision tree?
   A. Internal nodes test the value of particular features.
   B. Leaf nodes specify the class.
   C. Branch nodes decide the result.
   D. Root nodes decide the start point.
   Answer: A, B

2. Computing the information gain for a continuous-valued attribute in ID3 consists of the following procedure (illustrated in the sketch below):
   A. Sort the values of A in increasing order.
   B. Consider the midpoint between each pair of adjacent values as a possible split point.
   C. Select the minimum expected information requirement as the split point.
   D. Split.
   Answer: A, B, C, D
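Question 2's four steps translate line for line into code. A minimal sketch on made-up numbers (my own helper names, not the course's):

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_split(values, labels):
    order = np.argsort(values)            # 1) sort the values of A
    v, y = values[order], labels[order]
    best_point, best_info = None, np.inf
    for i in range(len(v) - 1):
        if v[i] == v[i + 1]:
            continue
        mid = (v[i] + v[i + 1]) / 2       # 2) midpoint of adjacent values
        left, right = y[: i + 1], y[i + 1:]
        info = (len(left) * entropy(left)
                + len(right) * entropy(right)) / len(y)
        if info < best_info:              # 3) minimum expected information
            best_point, best_info = mid, info
    return best_point                     # 4) split here

age = np.array([23.0, 25.0, 31.0, 44.0, 46.0, 52.0])
label = np.array([0, 0, 1, 1, 1, 0])
print(best_split(age, label))             # 28.0 for this toy data
```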

3. Which are the typical algorithms to generate decision trees?
   A. ID3
   B. C4.5
   C. CART
   D. PCA
   Answer: A, B, C

4. Which are right about underfitting and overfitting?
   A. Underfitting means poor accuracy both for training data and unseen samples.
   B. Overfitting means high accuracy for training data but poor accuracy for unseen samples.
   C. Underfitting implies the model is too simple, so we need to increase the model complexity.
   D. Overfitting occurs with too many branches, so we need to decrease the model complexity.
   Answer: A, B, C, D

5. Which are right about pre-pruning and post-pruning?
   A. Both of them are methods to deal with the overfitting problem.
   B. Pre-pruning does not split a node if this would result in the goodness measure falling below a threshold.
   C. Post-pruning removes branches from a "fully grown" tree.
   D. There is no need to choose an appropriate threshold when doing pre-pruning.
   Answer: A, B, C

6. Post-pruning in CART consists of the following procedure:
   A. First, consider the cost complexity of a tree.
   B. Then, for each internal node N, compute the cost complexity of the subtree at N.
   C. Also compute the cost complexity of the subtree at N if it were to be pruned.
   D. At last, compare the two values: if pruning the subtree at node N would result in a smaller cost complexity, the subtree is pruned; otherwise, the subtree is kept.
   Answer: A, B, C, D

7. The cost-complexity pruning algorithm used in CART evaluates cost complexity by the number of leaves in the tree and the error rate. (True/False)
   Answer: True

8. Gain ratio is used as the attribute selection measure in C4.5, and the formula is GainRatio(A) = Gain(A) / SplitInfo(A). (True/False)
   Answer: True

9. A rule is created for each path from the root to a leaf node. (True/False)
   Answer: True

10. ID3 uses information gain as its attribute selection measure, and the attribute with the lowest information gain is chosen as the splitting attribute for node N. (True/False)
    Answer: False
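The formula in question 8, GainRatio(A) = Gain(A) / SplitInfo(A), and the information gain from question 10 can be checked numerically. A minimal sketch on a made-up categorical attribute (the attribute and labels are invented for illustration):

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(attr, labels):
    n = len(labels)
    info_a, split_info = 0.0, 0.0
    for v in np.unique(attr):
        mask = attr == v
        frac = mask.sum() / n
        info_a += frac * entropy(labels[mask])   # Info_A(D)
        split_info -= frac * np.log2(frac)       # SplitInfo_A(D)
    gain = entropy(labels) - info_a              # Gain(A): ID3 picks the
    return gain / split_info                     # HIGHEST gain (so Q10 is false)

weather = np.array(["sun", "sun", "rain", "rain", "fog", "fog"])
crash = np.array([0, 0, 1, 1, 1, 0])
print(gain_ratio(weather, crash))                # ~0.42 for this toy data
```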

Test 5

1. What are the features of SVM?
   A. Extremely slow, but highly accurate.
   B. Much less prone to overfitting than other methods.
   C. Black-box model.
   D. Provides a compact description of the learned model.
   Answer: A, B, D

2. Which are the typical common kernels?
   A. Linear
   B. Polynomial
   C. Radial basis function (Gaussian kernel)
   D. Sigmoid kernel
   Answer: A, B, C, D

3. What adaptations can be made to allow SVM to deal with the multiclass classification problem?
   A. One versus rest (OVR).
   B. One versus one (OVO).
   C. Error-correcting input codes (ECIC).
   D. Error-correcting output codes (ECOC).
   Answer: A, B, D

4. What are the problems of OVR?
   A. Sensitive to the accuracy of the confidence figures produced by the classifiers.
   B. The scale of the confidence values may differ between the binary classifiers.
   C. The binary classification learners see unbalanced distributions.
   D. Only when the class distribution is balanced can balanced distributions be attained.
   Answer: A, B, C

5. Which one is right about the advantages of SVM?
   [Options and answer truncated in the source.]
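Questions 3 and 4 can be demonstrated with scikit-learn's multiclass wrappers. A minimal sketch assuming scikit-learn and its bundled iris data (three classes); this illustrates OVR and OVO generally, not the course's specific experiments:

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)   # 3 classes

# OVR: one binary SVM per class; each learner sees an unbalanced
# "this class vs. everything else" problem (question 4, option C).
ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)

# OVO: one binary SVM per pair of classes, 3*(3-1)/2 = 3 here.
ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)

print(ovr.predict(X[:5]))
print(ovo.predict(X[:5]))
```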
