


data distribution.

In our experiments section, we investigate the runtime as well as the clustering quality of our approach and show that, despite its improved performance compared to the static counterpart algorithms, the loss of accuracy is insignificant, since the subspaces are mostly detected correctly. Furthermore, considering the throughput of our approach, we can state that the algorithm copes with high-velocity data streams that push new data objects to the application within milliseconds.

4.2 A Generic Aggregation Structure for Correlation Clustering

Algorithm 1 Incremental PCA

Input: Data stream S, weight parameter τ
Output: Current eigensystem eig_i, composed of eigenvector matrix V_i and eigenvalue matrix E_i

1: V_0, E_0 := initial PCA from the first init observations
2: µ_0 := mean of the first init observations
3: while S does not end do
4:     x'_i := next incoming observation from S
5:     x_i = x'_i − µ_{i−1}
6:     µ_i = µ_{i−1} + (1 − τ) · x_i
7:     for j ∈ range(0, col(V_{i−1})) do
8:         y_j = √(τ · E_{i−1}(j, j)) · V_{i−1}(:, j)
9:     end for
10:    y_{col(V_{i−1})} = √(1 − τ) · x_i
11:    A = [y_0, y_1, ..., y_{col(V_{i−1})}]
12:    B = A^T A
13:    U, E_i := eigen-decompose B
14:    for j ∈ range(0, col(U)) do
15:        v_j = A · U(:, j)
16:    end for
17:    V_i = [v_1, v_2, ..., v_{col(U)}]
18: end while

In each iteration, the incoming observation x'_i is first mean-normalized, i.e., x_i = x'_i − µ_{i−1}, and the current mean µ_i is determined. The parameter τ ∈ [0, 1] is used as a weight that denotes the importance of a new observation compared to the previously seen ones. The larger τ, the less important a new observation x_i is. Next, a d × (col(V_{i−1}) + 1) matrix A is defined, with col(V_{i−1}) denoting the number of columns of the eigenvector matrix V_{i−1}. The first col(V_{i−1}) columns of A are constructed from the weighted previous principal components, and the weighted current observation forms the last column. Using matrix A, we can reconstruct the new d × d covariance matrix C expressed by C = A·A^T. Since a high dimension d leads to high computational costs, a smaller (col(V_{i−1}) + 1) × (col(V_{i−1}) + 1) matrix B = A^T·A is constructed and then decomposed on the rank of col(V_{i−1}) + 1. The eigen-decomposition retrieves the eigenvalue matrix E_i and the eigenvector matrix U. Multiplying each eigenvector of U with matrix A finally retrieves the eigenvectors of the covariance matrix C, with the eigenvalues contained in E_i. For the mathematical derivations of the single steps, we refer to [163].
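To make the update step more concrete, the following is a minimal sketch of one iteration of Algorithm 1 in Python with numpy; the function name incremental_pca_update, the representation of the eigenvalue matrix as a plain vector, and the final re-normalization of the lifted eigenvectors are illustrative choices of this sketch, not part of the original algorithm.

import numpy as np

def incremental_pca_update(V, E, mu, x_new, tau):
    # V: (d, k) eigenvector matrix, E: (k,) eigenvalues, mu: (d,) previous mean,
    # x_new: (d,) incoming observation, tau in [0, 1]: weight of the old eigensystem.
    x = x_new - mu                      # step 5: mean-normalize the observation
    mu_new = mu + (1.0 - tau) * x       # step 6: update the running mean

    # steps 7-11: weighted previous principal axes plus the weighted new observation
    cols = [np.sqrt(tau * E[j]) * V[:, j] for j in range(V.shape[1])]
    cols.append(np.sqrt(1.0 - tau) * x)
    A = np.column_stack(cols)

    # steps 12-13: decompose the small matrix B = A^T A instead of the d x d covariance
    B = A.T @ A
    E_new, U = np.linalg.eigh(B)        # eigh yields eigenvalues in ascending order
    order = np.argsort(E_new)[::-1]     # reorder descending
    E_new, U = E_new[order], U[:, order]

    # steps 14-17: lift the eigenvectors back to d dimensions
    # (normalization added here so the columns stay unit length)
    V_new = A @ U
    V_new /= np.linalg.norm(V_new, axis=0)
    return V_new, E_new, mu_new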

The Correlation Clustering Microcluster Structure CCMicro

As this part of the thesis concentrates on streaming data, we use a common definition of data streams.

Definition 1. A data stream S is an ordered and possibly infinite sequence of data objects x1, x2, ..., xi, ... that must be accessed in the order they arrive and can be read only in one linear scan.

Another concept used for our purpose is the damped window model. Since recent data is typically more important than old data objects, especially if an up-to-date clustering model is desired, it is useful to “forget” stale data.

Therefore, a widely used approach in applications dealing with temporal data is the use of an exponential fading function for data aging. This technique assigns a weight to each data object which decreases exponentially with time t via the fading function f(t) = 2^(−λ·t). The decay rate λ > 0 determines the impact of stale data on the application: a high value of λ means low importance of old data and vice versa.
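As a small illustration, the fading function can be computed directly; the helper name fading_weight is hypothetical.

def fading_weight(delta_t, lam):
    # exponential fading f(t) = 2^(-lambda * t) for the time elapsed since the last update
    return 2.0 ** (-lam * delta_t)

# e.g. with lam = 0.01, an object last seen 100 time units ago keeps weight 2**-1 = 0.5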

As we rely on the concept of microclusters, we need to define a data structure that encapsulates the necessary information and simultaneously allows update procedures. To fulfill these properties, we define our microcluster structure, which is called CCMicro, as follows:

Definition 2. A microcluster CCMicro at time t for a set of d-dimensional points C = {p1, p2, ..., pn} arriving at different points in time is defined as CCMicro(C, t) = (V(t), E(t), µ(t), ts) with

• V(t)being the eigenvector matrix of the covariance matrix of C at time t,

• E(t) being the corresponding eigenvalues of the eigenvectors in V(t),

• µ(t) being the mean of the data points contained in C at time t, and

• ts being the timestamp of the last incoming object assigned to this microcluster.
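For illustration, a minimal container mirroring Definition 2 could look as follows; the field names and the use of numpy arrays are assumptions of this sketch, and the additional bookkeeping needed by the online phase (e.g., the initialization buffer described later) is omitted here.

from dataclasses import dataclass
import numpy as np

@dataclass
class CCMicro:
    V: np.ndarray    # eigenvector matrix of the covariance matrix of C at time t
    E: np.ndarray    # eigenvalues corresponding to the columns of V
    mu: np.ndarray   # mean of the data points currently summarized by the microcluster
    ts: float        # timestamp of the last incoming object assigned to this microcluster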

Let us note that we generally distinguish between strong eigenvectors and weak eigenvectors in the eigenvector matrix V. The strength of an eigenvector is given by the variance along the corresponding axis in the eigenspace. We define strong and weak eigenvectors as follows [5]:

Figure 4.1: Micro- and macrocluster models on a toy dataset as retrieved by CorrStream: (a) microcluster model, (b) macrocluster model. Clusters are depicted by plotting their point sets; differently colored and shaped point sets indicate different micro- resp. macroclusters.

Definition 3. Given α ∈ [0, 1] and some microcluster mc, let E_mc be the microcluster's d × d eigenvalue matrix having the eigenvalues in descending order on the diagonal. We call the first

min_{r ∈ {1,...,d}} { r | ( Σ_{i=1}^{r} E_mc(i, i) ) / ( Σ_{i=1}^{d} E_mc(i, i) ) ≥ α }

eigenvectors the strong eigenvectors, resp. preference vectors, and the remaining eigenvectors are called weak eigenvectors. The space spanned by the preference vectors is called the correlation subspace.
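A direct way to compute this index is a cumulative-sum scan over the (descending) eigenvalues; the helper name num_strong_eigenvectors is hypothetical.

import numpy as np

def num_strong_eigenvectors(eigenvalues, alpha):
    # smallest r such that the first r eigenvalues explain at least an alpha
    # share of the total variance (Definition 3); eigenvalues sorted descending
    ratios = np.cumsum(eigenvalues) / np.sum(eigenvalues)
    r = int(np.searchsorted(ratios, alpha) + 1)
    return min(r, len(eigenvalues))

# e.g. eigenvalues [6.0, 3.0, 0.7, 0.3] and alpha = 0.85 yield r = 2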

Online Maintenance of CCMicro Structures

The generic CorrStream framework generally consists of two phases, i.e., an online phase in which microclusters are generated, maintained and/or discarded due to temporal expiration, and an offline phase to extract on-demand clustering models of the current state of the stream. During the continuous online phase, which is outlined in Algorithm 2, the data stream is consumed and for each data object o a rangeNN query is performed to detect the closest microcluster. The rangeNN query retrieves the closest microcluster within a maximum distance of ε. If such a microcluster exists, it absorbs the current data object o; otherwise, a new microcluster is created. Besides the components that fulfill the maintenance properties, each microcluster has an initial buffer. This buffer is a small collection of data objects that serves as a basis for an internal initialization step. The intuition behind it is to collect a set of spatially close data objects for which an initial PCA is performed. The PCA retrieves the eigenspace of those data objects. Applying Definition 3, we can determine the strong eigenvectors of the microcluster, which span the correlation subspace.

Algorithm 2 Online

Input: Data stream S, range parameter ε, buffer size buff_size, decay parameter λ

1: for incoming data object o from S at time t do
2:     Microcluster mc_NN = rangeNN(o, ε)
3:     if mc_NN ≠ null then
4:         add ⟨o, t⟩ to mc_NN
5:     else
6:         create new microcluster with parameters buff_size, λ and add ⟨o, t⟩
7:     end if
8: end for

Note that the rangeNN query uses two distance measures, i.e., the Euclidean distance and the correlation distance. The reason for this is the period of grace that we grant each newly constructed microcluster for its initialization. If the initial PCA has not been performed for a microcluster yet, the correlation measure cannot be applied due to the lack of the microcluster's eigenspace. Therefore, in such cases we determine the Euclidean distance between the microcluster's mean point and the incoming data object instead of the correlation distance. However, to define the correlation distance, which is used in all other cases, we first need to define the notion of a similarity matrix.
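A sketch of this case distinction inside the rangeNN query might look as follows; the attribute names initialized and mu as well as the function correlation_distance (an implementation of Definition 5, given below) are assumptions made for illustration.

import numpy as np

def range_nn(o, microclusters, eps, alpha):
    # return the closest microcluster within distance eps, or None
    best, best_dist = None, eps
    for mc in microclusters:
        if mc.initialized:
            # initialized microclusters are compared with the correlation distance
            dist = correlation_distance(mc, o, alpha)
        else:
            # during the period of grace, fall back to the Euclidean distance to the mean
            dist = np.linalg.norm(mc.mu - o)
        if dist <= best_dist:
            best, best_dist = mc, dist
    return best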

Definition 4. Let V_mc be an eigenvector matrix with E_mc being the corresponding eigenvalue matrix of a microcluster mc, having eigenvalues normalized onto [0, 1] on the diagonal. Given a threshold value α ∈ [0, 1] and a constant value κ ∈ R with κ ≫ 1, the eigenvalue matrix E_mc is adapted by setting those eigenvalues to κ whose value is below the threshold value α. The entries of the resulting matrix Ê_mc are computed according to the following rule:

Ê_mc(i, i) = { 1 if E_mc(i, i) ≥ α; κ else }.

Having the adapted eigenvalue matrix Ê_mc, the similarity matrix of mc is defined as

M̂_mc = V_mc · Ê_mc · V_mc^T.

The constant value κ specifies the allowed degree of deviation from the correlation subspace. Following [42], we set this value to κ = 50. The correlation distance can finally be computed as follows.

Definition 5. Given a microcluster mc with mean point µmc and a data object o, the correlation distance between both is defined as

distance_corr(mc, o) = √( (µ_mc − o) · M̂_mc · (µ_mc − o)^T )

with M̂_mc being the similarity matrix of mc.
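Putting Definitions 4 and 5 together, a compact numpy sketch could look as follows; normalizing the eigenvalues by the largest one and the default κ = 50 (following the text above) are assumptions of this sketch.

import numpy as np

def similarity_matrix(V, E, alpha, kappa=50.0):
    # Definition 4: eigenvalues normalized onto [0, 1], entries >= alpha mapped to 1,
    # all others mapped to kappa, then recombined with the eigenvectors
    e_norm = E / E.max()
    e_hat = np.where(e_norm >= alpha, 1.0, kappa)
    return V @ np.diag(e_hat) @ V.T

def correlation_distance(mc, o, alpha, kappa=50.0):
    # Definition 5: Mahalanobis-like distance between the microcluster mean and o
    M_hat = similarity_matrix(mc.V, mc.E, alpha, kappa)
    diff = mc.mu - o
    return float(np.sqrt(diff @ M_hat @ diff))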

After determining the closest microcluster mc of the incoming data object o, the latter must be incorporated into mc properly. Our proposed algorithm basically differentiates three cases of how to insert a new data object into an existing microcluster. The first two cases apply if the microcluster mc has not been initialized so far. In these cases, the object is inserted into the buffer and the mean as well as the current timestamp of the microcluster are updated. If the microcluster's buffer still has capacity, the insertion terminates by returning the updated microcluster. Otherwise, if the buffer is filled, the initial PCA is performed on the data objects contained in the buffer and the eigensystem is retrieved. After setting the corresponding components of the microcluster structure, mc is marked as initialized. The third case applies if the microcluster has already been initialized. In this case, the existing components of the microcluster are reused and the incremental PCA procedure is invoked to generate the new eigenvectors and eigenvalues as well as an updated mean vector. As mentioned above, the degree of influence of the new object on the existing eigensystem can be regularized by the weight parameter τ.
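The three insertion cases could be sketched as follows; the field names buffer and initialized as well as the helper incremental_pca_update (see the sketch after Algorithm 1) are assumptions made for illustration.

import numpy as np

def insert(mc, o, t, tau, buff_size):
    mc.ts = t                                   # every case updates the timestamp
    if not mc.initialized:
        mc.buffer.append(o)
        mc.mu = np.mean(mc.buffer, axis=0)      # cases 1 and 2: update mean from the buffer
        if len(mc.buffer) >= buff_size:
            # case 2: buffer filled -> initial PCA on the buffered objects
            cov = np.cov(np.asarray(mc.buffer).T)
            E, V = np.linalg.eigh(cov)
            order = np.argsort(E)[::-1]         # store eigenvalues in descending order
            mc.E, mc.V = E[order], V[:, order]
            mc.initialized = True
        # case 1: buffer still has capacity -> nothing more to do
    else:
        # case 3: already initialized -> incremental PCA update with weight tau
        mc.V, mc.E, mc.mu = incremental_pca_update(mc.V, mc.E, mc.mu, o, tau)
    return mc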

Since microclusters may expire, i.e., they may not absorb any data object for a while, such a microcluster should eventually be deleted, as stale data should not distort an up-to-date clustering model. Deleting old microclusters also has the advantage of saving storage space. As a straightforward solution, we propose to scan the set of microclusters sequentially from time to time and delete those microclusters whose timestamp ts_mc is older than a user-specified threshold value ∆t, i.e., if ts_mc < ts_curr − ∆t, with ts_curr denoting the current point in time. Note that the choice of an appropriate threshold value ∆t depends on the application at hand.
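A periodic clean-up step based on this rule reduces to a simple filter; prune_expired is a hypothetical helper name.

def prune_expired(microclusters, ts_curr, delta_t):
    # keep only microclusters that were updated within the last delta_t time units
    return [mc for mc in microclusters if mc.ts >= ts_curr - delta_t]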

4.3 Application of CCMicro Structures for
