Hierarchical cluster analysis on a set of dissimilarities and
methods for analyzing it.
Usage
hclustglad(d, method = "complete", members = NULL)
Arguments
d
a dissimilarity structure as produced by dist.
method
the agglomeration method to be used. This should
be (an unambiguous abbreviation of) one of
"ward", "single", "complete",
"average", "mcquitty", "median" or
"centroid".
members
NULL or a vector with length equal to the size of d; see the Details section.
Details
This function performs a hierarchical cluster analysis
using a set of dissimilarities for the n objects being
clustered. Initially, each object is assigned to its own
cluster and then the algorithm proceeds iteratively,
at each stage joining the two most similar clusters,
continuing until there is just a single cluster.
At each stage distances between clusters are recomputed
by the Lance–Williams dissimilarity update formula
according to the particular clustering method being used.
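For illustration, the single and complete linkage updates can be written out
directly. The sketch below is illustrative R only (hclustglad performs the
update in compiled code); it shows how the Lance-Williams coefficients for
these two methods reduce to the minimum and maximum of the two parent
distances.

lw_update <- function(d_ik, d_jk, d_ij, method = c("single", "complete")) {
  ## Lance-Williams form: a_i*d(i,k) + a_j*d(j,k) + b*d(i,j) + g*|d(i,k) - d(j,k)|
  ## (d_ij enters with coefficient b, which is 0 for both methods shown here)
  method <- match.arg(method)
  if (method == "single")        # a_i = a_j = 1/2, g = -1/2  =>  min(d_ik, d_jk)
    0.5 * d_ik + 0.5 * d_jk - 0.5 * abs(d_ik - d_jk)
  else                           # a_i = a_j = 1/2, g = +1/2  =>  max(d_ik, d_jk)
    0.5 * d_ik + 0.5 * d_jk + 0.5 * abs(d_ik - d_jk)
}
lw_update(2, 5, 1, "single")     # 2, i.e. min(2, 5)
lw_update(2, 5, 1, "complete")   # 5, i.e. max(2, 5)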
A number of different clustering methods are provided.
Ward's minimum variance method aims at finding compact,
spherical clusters. The complete linkage method finds
similar clusters. The single linkage method
(which is closely related to the minimal spanning tree)
adopts a ‘friends of friends’ clustering strategy.
The other methods can be regarded as aiming
for clusters with characteristics somewhere between
the single and complete link methods.
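A small comparison (assuming the GLAD package is attached so hclustglad is
available) illustrates how the choice of method affects the resulting
partitions:

data(USArrests)
d <- dist(USArrests)
hc.single   <- hclustglad(d, method = "single")
hc.complete <- hclustglad(d, method = "complete")
hc.ward     <- hclustglad(d, method = "ward")
## Cluster sizes at k = 4: single linkage tends to leave one large 'chained'
## cluster, while complete linkage and Ward's method give more balanced,
## compact groups.
table(cutree(hc.single, k = 4))
table(cutree(hc.complete, k = 4))
table(cutree(hc.ward, k = 4))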
If members != NULL, then d is taken to be a
dissimilarity matrix between clusters instead of dissimilarities
between singletons, and members gives the number of observations
per cluster. This way the hierarchical cluster algorithm can be “started
in the middle of the dendrogram”, e.g., in order to reconstruct the
part of the tree above a cut (see examples). Dissimilarities between
clusters can be efficiently computed (i.e., without hclustglad
itself) only for a limited number of distance/linkage combinations,
the simplest one being squared Euclidean distance and centroid
linkage. In this case the dissimilarities between the clusters are
the squared Euclidean distances between cluster means.
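A minimal numerical check of this equivalence (again assuming GLAD is
attached): cut a centroid tree built on squared Euclidean distances into two
clusters; the height of the final merge should match the squared Euclidean
distance between the two cluster means.

data(USArrests)
hc   <- hclustglad(dist(USArrests)^2, method = "centroid")
memb <- cutree(hc, k = 2)
cent <- rbind(colMeans(USArrests[memb == 1, , drop = FALSE]),
              colMeans(USArrests[memb == 2, , drop = FALSE]))
## squared Euclidean distance between the two cluster means
all.equal(as.numeric(dist(cent))^2, tail(hc$height, 1))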
In hierarchical cluster displays, a decision is needed at each merge to
specify which subtree should go on the left and which on the right.
Since, for n observations, there are n-1 merges,
there are 2^(n-1) possible orderings for the leaves
in a cluster tree, or dendrogram.
The algorithm used in hclustglad is to order the subtree so that
the tighter cluster is on the left (the last, i.e. most recent,
merge of the left subtree is at a lower value than the last
merge of the right subtree).
Single observations are the tightest clusters possible,
and merges involving two observations place them in order by their
observation sequence number.
Value
An object of class hclust which describes the
tree produced by the clustering process.
The object is a list with components:
merge
an n-1 by 2 matrix.
Row i of merge describes the merging of clusters
at step i of the clustering.
If an element j in the row is negative,
then observation -j was merged at this stage.
If j is positive then the merge
was with the cluster formed at the (earlier) stage j
of the algorithm.
Thus negative entries in merge indicate agglomerations
of singletons, and positive entries indicate agglomerations
of non-singletons.
height
a set of n-1 non-decreasing real values.
The clustering height: that is, the value of
the criterion associated with the clustering
method for the particular agglomeration.
order
a vector giving the permutation of the original
observations suitable for plotting, in the sense that a cluster
plot using this ordering and matrix merge will not have
crossings of the branches.
labels
labels for each of the objects being clustered.
call
the call which produced the result.
method
the cluster method that has been used.
dist.method
the distance that has been used to create d
(only returned if the distance object has a "method"
attribute).
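For illustration (assuming the GLAD package is attached), these components can
be inspected directly:

data(USArrests)
hc <- hclustglad(dist(USArrests), method = "complete")
head(hc$merge)    # negative entries are single observations, positive entries
                  # refer to clusters formed at earlier steps
head(hc$height)   # non-decreasing values of the "complete" linkage criterion
hc$order[1:10]    # part of the leaf permutation used when plotting
hc$method         # "complete"
hc$dist.method    # "euclidean", recorded from the dist object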
Author(s)
The hclustglad function is based on an algorithm
contributed to STATLIB by F. Murtagh.
References
Everitt, B. (1974).
Cluster Analysis.
London: Heinemann Educ. Books.
Hartigan, J. A. (1975).
Clustering Algorithms.
New York: Wiley.
Sneath, P. H. A. and R. R. Sokal (1973).
Numerical Taxonomy.
San Francisco: Freeman.
Anderberg, M. R. (1973).
Cluster Analysis for Applications.
Academic Press: New York.
Gordon, A. D. (1999).
Classification. Second Edition.
London: Chapman and Hall / CRC.
Murtagh, F. (1985).
“Multidimensional Clustering Algorithms”, in
COMPSTAT Lectures 4.
Wuerzburg: Physica-Verlag
(for algorithmic details of algorithms used).
Examples
data(USArrests)
hc <- hclustglad(dist(USArrests), "ave")
plot(hc)
plot(hc, hang = -1)
## Do the same with centroid clustering and squared Euclidean distance,
## cut the tree into ten clusters and reconstruct the upper part of the
## tree from the cluster centers.
hc <- hclustglad(dist(USArrests)^2, "cen")
memb <- cutree(hc, k = 10)
cent <- NULL
for(k in 1:10){
cent <- rbind(cent, colMeans(USArrests[memb == k, , drop = FALSE]))
}
hc1 <- hclustglad(dist(cent)^2, method = "cen", members = table(memb))
opar <- par(mfrow = c(1, 2))
plot(hc, labels = FALSE, hang = -1, main = "Original Tree")
plot(hc1, labels = FALSE, hang = -1, main = "Re-start from 10 clusters")
par(opar)
Results
Running the examples in an R session with the GLAD package attached reproduces the plots described above: the average-linkage dendrogram of USArrests (plotted with and without hang = -1) and, for the centroid example, the original tree alongside the tree re-started from the 10 cluster centers.