Can someone explain the difference between classification and clustering in data mining?
If possible, please give an example of each to illustrate the main idea.
Current answer
First, as many answers here say: classification is supervised learning, clustering is unsupervised. That means:
Classification needs labeled data so that a classifier can be trained on it; afterwards it can classify new, unseen data based on what it has learned. Unsupervised learning such as clustering does not use labeled data; what it actually does is discover intrinsic structure in the data, such as groups.

Another difference between the two techniques (related to the previous one) is that classification can be seen as a discrete analogue of regression, where the output is a categorical dependent variable, whereas clustering's output is a set of subsets called groups (clusters).

The way to evaluate the two kinds of models also differs for the same reason: in classification you typically check precision and recall and watch for things like overfitting and underfitting, which tell you how good the model is. In clustering you usually need an expert's eye to interpret what you find, because you do not know in advance what kind of structure (what kind of group or cluster) you have. That is why clustering belongs to exploratory data analysis.

Finally, I would say that the applications are the main difference between the two. Classification, as the word says, is used to discriminate between instances that belong to one class or another, for example man or woman, cat or dog, and so on. Clustering is often used in medical diagnosis, pattern discovery, and the like.
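To make that contrast concrete, here is a minimal sketch, assuming scikit-learn and its bundled Iris dataset purely for illustration: the classifier is trained on labeled examples and then scored with precision and recall, while the clustering step never sees any labels at all.

```python
# A minimal sketch of the supervised/unsupervised contrast described above.
# Assumes scikit-learn; the dataset and model choices are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Classification: train on labeled data, then evaluate with precision/recall.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred, average="macro"))
print("recall:   ", recall_score(y_test, pred, average="macro"))

# Clustering: the labels are never shown to the algorithm; it only proposes
# groups, which a human then has to interpret.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```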
Other answers
In general, in classification you have a set of predefined classes and want to know which class a new object belongs to.
Clustering tries to group a set of objects and to discover whether there is some relationship between them.
In the context of machine learning, classification is supervised learning and clustering is unsupervised learning.
See also Classification and Clustering on Wikipedia.
If you are filing a large number of documents onto your shelf (by date or by some other attribute of the documents), you are classifying.
If instead you form clusters from that set of sheets, it means the sheets within a cluster have something in common.
Classification
is the assignment of predefined classes to new observations, based on learning from examples.
It is one of the key tasks of machine learning.
Clustering (or cluster analysis)
Although it is commonly described as "unsupervised classification", it is something quite different.
Contrary to what many machine learners will tell you, it is not about assigning "classes" to objects, because there are no predefined classes. That is the narrow view of people who have done too much classification; the classic example of someone who has a hammer (a classifier) and to whom everything looks like a nail (a classification problem). It is also why people who work on classification often never get the hang of clustering.
Instead, think of it as structure discovery. The task of clustering is to find structure in your data (for example, groups) that you did not know about before. Clustering has been successful if you learned something new; it has failed if it only shows you structure you already knew. (A small sketch of this view follows below.)
Cluster analysis is a key task of data mining (and the ugly duckling of machine learning, so do not take machine learners' dismissal of clustering at face value).
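A small illustration of the "structure discovery" view, again only a sketch with made-up data (the two hidden groups and the choice of AgglomerativeClustering are assumptions for the example): the algorithm returns group memberships, and it is still up to the analyst to work out what, if anything, those groups mean.

```python
# Sketch: cluster unlabeled measurements, then inspect what the groups share.
# The data generation and cluster count are invented for illustration.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Unlabeled data containing two groups the analyst does not know about in advance.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[4, 4], scale=0.5, size=(50, 2)),
])

labels = AgglomerativeClustering(n_clusters=2).fit_predict(data)

# The algorithm only returns group memberships; interpreting what each group
# means (the "new structure" in the text) is the analyst's job.
for k in range(2):
    members = data[labels == k]
    print(f"cluster {k}: {len(members)} points, centre ~ {members.mean(axis=0).round(2)}")
```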
"Unsupervised learning" is a bit of an oxymoron
The term appears all over the literature, but unsupervised learning is nonsense. It does not really exist; it is a contradiction in terms, like "military intelligence".
Either an algorithm learns from examples (then it is "supervised learning") or it does not. If every clustering method counts as "learning", then computing the minimum, maximum, and mean of a data set is also "unsupervised learning", and any computation "learns" its output. So the term "unsupervised learning" is completely meaningless: it means everything and nothing.
Some "unsupervised learning" algorithms do, however, fall into the optimization category. For example k-means is a least-squares optimization. Such methods are all over statistics, so I don't think we need to label them "unsupervised learning", but instead should continue to call them "optimization problems". It's more precise, and more meaningful. There are plenty of clustering algorithms who do not involve optimization, and who do not fit into machine-learning paradigms well. So stop squeezing them in there under the umbrella "unsupervised learning".
There is some "learning" associated with clustering, but it is not the program that learns. The user is supposed to learn something new about his data set.
Classification: predicting class labels
— classifies data (constructs a model) based on a training set and the values (class labels) of a class-label attribute
— uses that model to classify new data
Clustering: a collection of data objects that are
— similar to one another within the same cluster
— dissimilar to the objects in other clusters