nexusstc/Pattern Recognition Algorithms for Data Mining (Chapman & Hall/CRC Computer Science & Data Analysis)/d366189bf37868a1ea9763395bea9582.pdf
Pattern Recognition Algorithms for Data Mining: Scalability, Knowledge Discovery and Soft Granular Computing (Chapman & Hall/CRC Computer Science & Data Analysis)
Sankar K Pal; Pabitra Mitra, PhD
Chapman and Hall/CRC, Chapman & Hall/CRC Computer Science & Data Analysis, 1, 2004
English [en] · PDF · 2.7MB · 2004 · 📘 Non-fiction · 🚀/lgli/lgrs/nexusstc/zlib
Description
Pattern Recognition Algorithms for Data Mining addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. This volume presents various theories, methodologies, and algorithms, using both classical approaches and hybrid paradigms. The authors emphasize large datasets with overlapping, intractable, or nonlinear boundary classes, and datasets that demonstrate granular computing in soft frameworks.
Organized into eight chapters, the book begins with an introduction to PR, data mining, and knowledge discovery concepts. The authors analyze the tasks of multi-scale data condensation and dimensionality reduction, then explore the problem of learning with support vector machine (SVM). They conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.
Alternate filename
zlib/Mathematics/Sankar K. Pal, Pabitra Mitra/Pattern Recognition Algorithms for Data Mining_730564.pdf
Alternate author
Sankar K. Pal and Pabitra Mitra
Alternate author
Pal, Sankar K., Mitra, Pabitra
Alternate publisher
CRC Press LLC
Alternate edition
Chapman & Hall/CRC Computer Science & Data Analysis, Boca Raton, Fla, ©2004
Alternate edition
CRC Press (Unlimited), Boca Raton, Fla, 2004
Alternate edition
United States, United States of America
Alternate edition
Boca Raton [etc.], United States, 2004
Alternate edition
Boca Raton, Fla. ; London, ©2004
Alternate edition
Boca Raton, Fla, Florida, 2004
Alternate edition
May 27, 2004
Alternate edition
1, US, 2004
Metadata comments
0
Metadata comments
lg304719
Metadata comments
{"edition":"1","isbns":["1584884576","2004043539","9781584884576","9782004043535"],"last_page":218,"publisher":"Chapman and Hall/CRC","series":"Chapman & Hall/CRC Computer Science & Data Analysis"}
Metadata comments
Includes bibliographical references (p. 215-236) and index
Metadata comments
Указ. [Russian catalog abbreviation for "Index"]
Includes bibliographical references (p. 215-236)
Metadata comments
РГБ [Russian State Library]
Metadata comments
Russian State Library [rgb] MARC:
=001 002708949
=003 OCoLC
=005 20051111142050.0
=008 040202s2004\\\\xxu\\\\\\b\\\\001\0\eng\\
=016 7\ $a 012875223 $2 Uk
=017 \\ $a И9770-05
=020 \\ $a 1584884576 (alk. paper)
=040 \\ $a RuMoRKP $b rus $d RuMoRGB $e rcr
=041 0\ $a eng
=044 \\ $a xxu
=084 \\ $a З973.235-018,0 $2 rubbk
=084 \\ $a З813.4,0 $2 rubbk
=084 \\ $a В173.2,0 $2 rubbk
=100 1\ $a Pal, Sankar K.
=245 00 $a Pattern recognition algorithms for data mining : $b scalability, knowledge discovery and soft granular computing $c Sankar K. Pal a. Pabitra Mitra
=260 \\ $a Boca Raton [etc.] $b Chapman & Hall/CRC $c cop. 2004
=300 \\ $a xxix, 244 p. : $b ил., портр. $c 25 см
=500 \\ $a Указ.
=504 \\ $a Includes bibliographical references (p. 215-236)
=650 \7 $a Вычислительная техника -- Вычислительные машины электронные цифровые -- Распознавание образов -- Программирование. Алгоритмы $2 rubbk
=650 \7 $a Радиоэлектроника -- Кибернетика -- Искусственный интеллект -- "Интеллектуализация" компьютеров $2 rubbk
=650 \7 $a Физико-математические науки -- Математика -- Теория игр. Исследование операций -- Линейное программирование $2 rubbk
=700 1\ $a Mitra, Pabitra
=852 4\ $a РГБ $b FB $j 5 05-8/166 $x 90
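The record above is in the line-oriented MARC mnemonic ("MARCMaker"-style) notation: each field starts with =TAG, control fields 001-008 carry a bare value, and data fields carry two indicator characters (\ standing for blank) followed by $x-prefixed subfields. As a rough illustration only (the splitting rules below are simplified assumptions, not a full MARC parser), a few of the fields above can be pulled apart like this in Python:

import re

# A few lines copied verbatim from the record above (raw string keeps the backslashes).
MARC_LINES = r"""=001 002708949
=020 \\ $a 1584884576 (alk. paper)
=100 1\ $a Pal, Sankar K.
=700 1\ $a Mitra, Pabitra""".splitlines()

def parse_line(line):
    """Split one '=TAG ii $a ...' mnemonic line into (tag, indicators, subfields)."""
    tag = line[1:4]
    if tag < "010":                              # 00X control fields: bare value, no subfields
        return tag, None, line[5:]
    indicators = line[5:7].replace("\\", " ")    # '\' stands for a blank indicator
    subfields = [(code, value.strip())
                 for code, value in re.findall(r"\$(\w) ([^$]*)", line[8:])]
    return tag, indicators, subfields

for marc_line in MARC_LINES:
    print(parse_line(marc_line))
# e.g. ('100', '1 ', [('a', 'Pal, Sankar K.')])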
Alternate description
Pattern Recognition Algorithms for Data Mining: Scalability, Knowledge Discovery, and Soft Granular Computing......Page 1
Contents......Page 4
Foreword......Page 9
Preface......Page 14
List of Tables......Page 17
List of Figures......Page 19
References......Page 0
1.1 Introduction......Page 22
1.2 Pattern Recognition in Brief......Page 24
1.2.2 Feature selection/extraction......Page 25
1.2.3 Classification......Page 26
1.3 Knowledge Discovery in Databases (KDD)......Page 28
1.4.1 Data mining tasks......Page 31
1.4.3 Applications of data mining......Page 33
1.5.1 Database perspective......Page 35
1.5.3 Pattern recognition perspective......Page 36
1.5.4 Research issues and challenges......Page 37
1.6.1 Data reduction......Page 38
1.6.2 Dimensionality reduction......Page 39
1.6.4 Data partitioning......Page 40
1.6.6 Efficient search algorithms......Page 41
1.7 Significance of Soft Computing in KDD......Page 42
1.8 Scope of the Book......Page 43
2.1 Introduction......Page 49
2.2.1 Condensed nearest neighbor rule......Page 52
2.2.2 Learning vector quantization......Page 53
2.3 Multiscale Representation of Data......Page 54
2.4 Nearest Neighbor Density Estimate......Page 57
2.5 Multiscale Data Condensation Algorithm......Page 58
2.6 Experimental Results and Comparisons......Page 60
2.6.2 Test of statistical significance......Page 61
2.6.3 Classification: Forest cover data......Page 67
2.6.4 Clustering: Satellite image data......Page 68
2.6.5 Rule generation: Census data......Page 69
2.7 Summary......Page 72
3.1 Introduction......Page 79
3.2 Feature Extraction......Page 80
3.3 Feature Selection......Page 82
3.3.1 Filter approach......Page 83
3.4 Feature Selection Using Feature Similarity (FSFS)......Page 84
3.4.1 Feature similarity measures......Page 85
3.4.1.2 Least square regression error (e)......Page 86
3.4.1.3 Maximal information compression index (λ2)......Page 87
3.4.2 Feature selection through clustering......Page 88
3.5.1 Supervised indices......Page 91
3.5.2 Unsupervised indices......Page 92
3.5.3 Representation entropy......Page 93
3.6.1 Comparison: Classification and clustering performance......Page 94
3.6.2 Redundancy reduction: Quantitative study......Page 99
3.6.3 Effect of cluster size......Page 100
3.7 Summary......Page 102
4.1 Introduction......Page 103
4.2 Support Vector Machine......Page 106
4.3 Incremental Support Vector Learning with Multiple Points......Page 108
4.4 Statistical Query Model of Learning......Page 109
4.4.2 Confidence factor of support vector set......Page 110
4.5 Learning Support Vectors with Statistical Queries......Page 111
4.6.1 Classification accuracy and training time......Page 114
4.6.3 Margin distribution......Page 117
4.7 Summary......Page 121
5.1 Introduction......Page 123
5.2 Soft Granular Computing......Page 125
5.3 Rough Sets......Page 126
5.3.2 Indiscernibility and set approximation......Page 127
5.3.3 Reducts......Page 128
5.3.4 Dependency rule generation......Page 130
5.4 Linguistic Representation of Patterns and Fuzzy Granulation......Page 131
5.5 Rough-fuzzy Case Generation Methodology......Page 134
5.5.1 Thresholding and rule generation......Page 135
5.5.2 Mapping dependency rules to cases......Page 137
5.5.3 Case retrieval......Page 138
5.6 Experimental Results and Comparison......Page 140
5.7 Summary......Page 141
6.1 Introduction......Page 143
6.2 Clustering Methodologies......Page 144
6.3.2 BIRCH: Balanced iterative reducing and clustering using hierarchies......Page 146
6.3.3 DBSCAN: Density-based spatial clustering of applications with noise......Page 147
6.3.4 STING: Statistical information grid......Page 148
6.4 CEMMiSTRI: Clustering using EM, Minimal Spanning Tree and Rough-fuzzy Initialization......Page 149
6.4.1 Mixture model estimation via EM algorithm......Page 150
6.4.2 Rough set initialization of mixture parameters......Page 151
6.4.3 Mapping reducts to mixture parameters......Page 152
6.4.4 Graph-theoretic clustering of Gaussian components......Page 153
6.5 Experimental Results and Comparison......Page 155
6.6 Multispectral Image Segmentation......Page 159
6.6.4 Experimental results and comparison......Page 161
6.7 Summary......Page 167
7.1 Introduction......Page 168
7.2 Self-Organizing Maps (SOM)......Page 169
7.2.1 Learning......Page 170
7.3 Incorporation of Rough Sets in SOM (RSOM)......Page 171
7.3.2 Mapping rough set rules to network weights......Page 172
7.4.1 Extraction methodology......Page 173
7.4.2 Evaluation indices......Page 174
7.5 Experimental Results and Comparison......Page 175
7.5.1 Clustering and quantization error......Page 176
7.5.2 Performance of rules......Page 181
7.6 Summary......Page 182
8.1 Introduction......Page 183
8.2 Ensemble Classifiers......Page 185
8.3.1.1 Apriori......Page 188
8.3.1.4 Dynamic itemset counting......Page 190
8.4 Classification Rules......Page 191
8.5.1.2 Output representation......Page 193
8.5.2 Rough set knowledge encoding......Page 194
8.6.1 Algorithm......Page 196
8.6.1.1 Steps......Page 197
8.6.2.1 Chromosomal representation......Page 200
8.6.2.4 Choice of fitness function......Page 201
8.7.1 Rule extraction methodology......Page 202
8.7.2 Quantitative measures......Page 206
8.8 Experimental Results and Comparison......Page 207
8.8.1 Classification......Page 208
8.8.2 Rule extraction......Page 210
8.8.2.1 Rules for staging of cervical cancer with binary feature inputs......Page 215
8.9 Summary......Page 217
Alternate description
Pattern Recognition Algorithms for Data Mining addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. This volume presents various theories, methodologies, and algorithms, using both classical approaches and hybrid paradigms. The authors emphasize large datasets with overlapping, intractable, or nonlinear boundary classes, and datasets that demonstrate granular computing in soft frameworks. Organized into eight chapters, the book begins with an introduction to PR, data mining, and knowledge discovery concepts. The authors analyze the tasks of multi-scale data condensation and dimensionality reduction, then explore the problem of learning with support vector machine (SVM). They conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.
Alternate description
This valuable text addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. Organized into eight chapters, the book begins by introducing PR, data mining, and knowledge discovery concepts. The authors proceed to analyze the tasks of multi-scale data condensation and dimensionality reduction. Then they explore the problem of learning with support vector machine (SVM), and conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.
Alternate description
Pattern Recognition Algorithms for Data Mining covers the topic of data mining from a pattern recognition perspective. This unique book presents real life data sets from various domains, such as geographic information systems, remote sensing imagery, and population census, to demonstrate the use of innovative new methodologies. Classical approaches are covered along with granular computation by integrating fuzzy sets, artificial neural networks, and genetic algorithms for efficient knowledge discovery. The authors then compare the granular computing and rough fuzzy approaches with the more classical methods and clearly demonstrate why they are more efficient.
Date open sourced
2010-08-30
A file's "MD5" is a hash computed from the file's contents, and it is reasonably unique to that content. All shadow libraries indexed here primarily use MD5s to identify files. A file may appear in multiple shadow libraries; for information about the various datasets we have compiled, see the Datasets page. Further details about this file are available in its JSON metadata record.
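As a minimal sketch of that identification scheme (the local file name below is hypothetical; the expected digest is the one embedded in the path at the top of this page), the MD5 of a downloaded file can be verified with Python's standard hashlib:

import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Return the MD5 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# For the PDF described on this page, the digest should match the
# identifier in its path: d366189bf37868a1ea9763395bea9582
print(file_md5("pattern_recognition_algorithms_for_data_mining.pdf"))  # hypothetical file name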