A LINEAR-TIME ALGORITHM FOR EVALUATING SERIES OF SCHUR FUNCTIONS

CY CHAN, VESSELIN DRENSKY, ALAN EDELMAN, AND PLAMEN KOEV

Abstract. We present a new algorithm for computing all Schur functions s_λ(x_1, x_2, ..., x_n) for all partitions λ of integers not exceeding N in time O(n^2 K_N), where K_N ≡ #{λ : |λ| ≤ N} is the number of those partitions. In particular, this algorithm has optimal complexity for evaluating truncated series of Schur functions such as the hypergeometric function of a matrix argument.

2000 Mathematics Subject Classification. Primary 05E05; Secondary 65F50, 65T50.
Key words and phrases. Schur functions, Fast Fourier Transforms.
The research of the second author was partially supported by Grant MI-1503/2005 of the Bulgarian National Science Fund. The research of the third and fourth authors was supported by NSF Grant DMS-0608306.

1. Introduction

We present a new, highly efficient algorithm for computing the finite truncation (for k ≤ N) of the hypergeometric function of a matrix argument X in the complex case:

(1)  pFq(a_1, ..., a_p; b_1, ..., b_q; X) ≡ Σ_{k=0}^{N} Σ_{κ⊢k} (1/H_κ) · [(a_1)_κ ··· (a_p)_κ / ((b_1)_κ ··· (b_q)_κ)] · s_κ(x_1, x_2, ..., x_n),

where

(2)  (a)_κ ≡ Π_{(i,j)∈κ} (a − i + j)

is the generalized Pochhammer symbol, s_κ is the Schur function [12], H_κ ≡ Π_{(i,j)∈κ} h_κ(i,j) is the product of all hook lengths h_κ(i,j) ≡ κ_i + κ'_j − i − j + 1 of κ, and x_1, x_2, ..., x_n are the eigenvalues of X.

The efficient evaluation of (1) is a central research problem in multivariate statistical analysis [15], with a wealth of applications in wireless communications [1, 6, 7, 10, 13, 14, 16, 17, 21] as well as signal processing [20]. The main challenge in evaluating (1) is the evaluation of the Schur function s_κ, which, as a multivariate symmetric polynomial, has an exponential number of terms, O(n^{|κ|}) [4, 8, 11].
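The two scalar ingredients of (1) are easy to compute directly from the definitions. The following is a minimal Python sketch (ours, not from the paper; the function names are our own) of the generalized Pochhammer symbol (2) and the hook-length product H_κ. A partition is stored as a weakly decreasing tuple of positive integers.

def conjugate(kappa):
    """Conjugate partition: kappa'_j = #{i : kappa_i >= j}."""
    if not kappa:
        return ()
    return tuple(sum(1 for part in kappa if part >= j)
                 for j in range(1, kappa[0] + 1))

def pochhammer(a, kappa):
    """Generalized Pochhammer symbol (2): product of (a - i + j) over cells (i, j)."""
    result = 1.0
    for i, row in enumerate(kappa, start=1):
        for j in range(1, row + 1):
            result *= a - i + j
    return result

def hook_product(kappa):
    """H_kappa: product of the hook lengths kappa_i + kappa'_j - i - j + 1."""
    conj = conjugate(kappa)
    result = 1
    for i, row in enumerate(kappa, start=1):
        for j in range(1, row + 1):
            result *= row + conj[j - 1] - i - j + 1
    return result

For example, hook_product((2, 1)) returns 3, matching the hook lengths 3, 1, 1 of the partition (2, 1).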

Currently, the best algorithm for evaluating (1) is mhg from [11], whose complexity is O(nK_N^2), where K_N ≡ #{κ : |κ| ≤ N} is the number of terms in (1).

An estimate of K_N is given by Ramanujan's asymptotic formula [9, p. 116], K_N ∼ O(e^{π√(2N/3)}), which is subexponential in N. In this paper, we present a new algorithm for computing (1) whose complexity is only O(n^2 K_N), i.e., it takes only O(n^2) per term instead of the O(nK_N) cost per term of mhg. To achieve that complexity, we follow the idea in [11]: the recursive formula [12]

(3)  s_λ(x_1, x_2, ..., x_n) = Σ_µ s_µ(x_1, x_2, ..., x_{n−1}) · x_n^{|λ|−|µ|}

allows each Schur function in (1) to be computed at a cost not exceeding O(nK_N). The summation in (3) is over all partitions µ = (µ_1, ..., µ_{n−1}) such that λ/µ is a horizontal strip, i.e.,

(4)  λ_1 ≥ µ_1 ≥ λ_2 ≥ µ_2 ≥ ··· ≥ λ_{n−1} ≥ µ_{n−1} ≥ λ_n.

In this paper we improve on the result of [11] by observing that (3) represents a vector-matrix multiplication

(5)  s^{(n)} = s^{(n−1)} · Z_n(x_n),

where s^{(i)} is an (appropriately indexed) row vector of all Schur functions s_κ(x_1, x_2, ..., x_i), |κ| ≤ N, and Z_n(x_n) ≡ (ε_{µ,λ} x_n^{|λ|−|µ|}) is a matrix with entries indexed by the pairs of partitions (µ, λ), with ε_{µ,λ} = 1 if λ/µ is a horizontal strip and 0 otherwise. Since the matrix Z_n is dense, (5) costs O(K_N^2), explaining the O(nK_N^2) complexity of mhg [11]. The key contribution of this paper is to recognize and exploit the structure of Z_n to perform the multiplication (5) in linear O(n^2 K_N) time instead of quadratic O(nK_N^2) time. This work was inspired by the idea of Cooley and Tukey [3] (later generalized [2, 5, 18, 19]) that a matrix-vector multiplication by the character table of the cyclic group of size n (i.e., by the Vandermonde matrix V ≡ (e^{2π√(−1)·(i−1)(j−1)/n})) can be performed in O(n log n) instead of O(n^2) time, by exploiting the structure of the cyclic group to decompose V recursively into a product of simpler structured matrices.
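To make the recursion concrete before turning to the structured factorization, here is a naive Python sketch (ours, for reference only; far slower than the algorithm developed below) that evaluates a single Schur function directly from (3) and (4):

def schur(lam, xs):
    """Evaluate s_lam(xs) by the branching rule (3); lam is weakly decreasing."""
    lam = tuple(p for p in lam if p > 0)
    if len(lam) > len(xs):
        return 0.0                        # more parts than variables
    if not xs:
        return 1.0
    *head, xn = xs
    padded = lam + (0,) * (len(xs) - len(lam))
    total = 0.0
    for mu in _strips(padded):            # all mu with lam/mu a horizontal strip (4)
        total += schur(mu, tuple(head)) * xn ** (sum(lam) - sum(mu))
    return total

def _strips(lam):
    """All mu = (mu_1, ..., mu_{k-1}) with lam_i >= mu_i >= lam_{i+1}."""
    if len(lam) <= 1:
        yield ()
        return
    for m in range(lam[1], lam[0] + 1):
        for rest in _strips(lam[1:]):
            yield (m,) + rest

For instance, schur((1,), (2.0, 3.0)) returns 5.0, i.e., s_(1)(x_1, x_2) = x_1 + x_2. Each level of this recursion branches over all horizontal strips, which is exactly the blow-up that the matrix factorization below avoids.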

2. Theoretical grounds

In this section we also fix the positive integer N. For a fixed k = 1, ..., n, we extend the set of all partitions λ = (λ_1, ..., λ_k) with |λ| ≤ N, and consider the set P_k of partitions λ satisfying the conditions

(6)  λ_1 − λ_2 ≤ N,  λ_2 − λ_3 ≤ N,  ...,  λ_{k−1} − λ_k ≤ N,  λ_k ≤ N.

Clearly, the number of λs in the class P_k is (N + 1)^k. We order λ ∈ P_k in the reverse lexicographic order with respect to the k-tuple (λ_1 − λ_2, ..., λ_{k−1} − λ_k, λ_k) and assume that

λ < ν for λ, ν ∈ P_k if and only if λ_k < ν_k or, when λ_k = ν_k, λ_{l−1} − λ_l = ν_{l−1} − ν_l for l = p + 1, ..., k and λ_{p−1} − λ_p < ν_{p−1} − ν_p for some p. We build inductively the row vectors F_k(x_1, ..., x_k), k = 1, ..., n,

(7)  F_k(x_1, ..., x_k) = (f_λ(x_1, ..., x_k) | λ ∈ P_k),

where the λs are ordered with respect to the above order. We define

F_1(x_1) = (1, s_(1)(x_1), s_(2)(x_1), ..., s_(N)(x_1)) = (1, x_1, x_1^2, ..., x_1^N),

and, for k > 1,

(8)  F_k(x_1, ..., x_k) = F_{k−1}(x_1, ..., x_{k−1}) Y_k(x_k),

where

(9)  Y_k(x_k) = (ε_{µ,λ} x_k^{|λ/µ|}),  λ ∈ P_k,  µ ∈ P_{k−1},

and ε_{µ,λ} = 1 if λ/µ is a horizontal strip and ε_{µ,λ} = 0 otherwise. (We denote the matrix by Y_k, celebrating Young, because (5) expresses the Young rule, which is a special case of the Littlewood–Richardson rule for calculating the product of two Schur functions.)

Lemma 2.1. If λ_1 ≤ N for λ ∈ P_k, then

(10)  f_λ(x_1, ..., x_k) = s_λ(x_1, ..., x_k).

Proof. If we delete the columns and the rows of the matrix Y_k(x_k) from (9) which correspond, respectively, to partitions λ and µ with |λ| > N and |µ| > N, we obtain the matrix Z_k(x_k) from (5). Since F_1(x_1) = s^{(1)}(x_1), the proof is completed by an easy induction on k.

Remark 2.2. Once we have an algorithm for fast multiplication by Y_k, we will use it to multiply by only those elements of F_{k−1} corresponding to partitions of size not exceeding N, which by the preceding lemma are exactly the Schur functions we want. Therefore, even though the size of Y_k is exponential ((N + 1)^{k−1} × (N + 1)^k), it will only cost O(n^2 K_N) to multiply by Y_k.

In what follows we denote by I_m the m × m identity matrix and by E_ij the matrix unit with 1 in position (i, j) and zeros elsewhere. The size of E_ij will be clear from the context. If P, Q are two matrices, then we denote by P ⊗ Q their Kronecker product. By definition, if P = (p_ij) is an l × m matrix, then P ⊗ Q is an l × m block matrix with the block p_ij·Q in position (i, j). In particular, I_m ⊗ Q is a block-diagonal matrix with m copies of Q on the diagonal. We fix the (N + 1) × (N + 1) matrix

(11)  A = E_{12} + E_{23} + ··· + E_{N,N+1}

(ones on the superdiagonal, zeros elsewhere),

and define the matrices

(12)  B_2 = A^T = E_{21} + E_{32} + ··· + E_{N+1,N}  (ones on the subdiagonal),

(13)  C_2(x_2) = I_{N+1} + x_2 A + ··· + x_2^N A^N = (I_{N+1} − x_2 A)^{−1},

an upper triangular matrix with (p, q) entry x_2^{q−p} for q ≥ p, and

(14)  K_2(x_2) = I_{N+1} ⊗ C_2(x_2) = diag(C_2(x_2), ..., C_2(x_2)).
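These building blocks are straightforward to materialize for small N. A short NumPy sketch (ours; the paper never forms these matrices explicitly, and the inverse below is exact because A is nilpotent):

import numpy as np

def shift_matrix(N):
    """A of (11): ones on the superdiagonal of an (N+1) x (N+1) matrix."""
    return np.eye(N + 1, k=1)

def C2(x2, N):
    """C2(x2) of (13): (I - x2*A)^{-1} = I + x2*A + ... + x2^N A^N."""
    return np.linalg.inv(np.eye(N + 1) - x2 * shift_matrix(N))

N, x2 = 3, 0.5
A = shift_matrix(N)
B2 = A.T                                  # (12)
K2 = np.kron(np.eye(N + 1), C2(x2, N))    # (14): N+1 diagonal copies of C2

One can check that C2(x2, N)[p, q] equals x2**(q - p) for q >= p, as claimed after (13).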

Finally, we consider the (N + 1) × (N + 1)^2 matrix

(15)  Q_2(x_2) = ( I_{N+1} | x_2 B_2 | ··· | x_2^N B_2^N ),

with entries indexed by pairs (µ, λ), where µ = (µ_1) ∈ P_1 and λ = (λ_1, λ_2) ∈ P_2.

Lemma 2.3. The following equation holds

(16)  Y_2(x_2) = Q_2(x_2) K_2(x_2).

Proof. The matrix Y_2(x_2) consists of N + 1 blocks of size (N + 1) × (N + 1). The (p, q) entry of the r-th block, p, q, r = 0, 1, ..., N, is indexed by the pair of partitions (µ, λ), where µ = (p), λ = (q + r, r), and is equal to x_2^{q+2r−p} if r ≤ p ≤ q + r and to 0 otherwise. On the other hand, the corresponding entry of the matrix Q_2(x_2)K_2(x_2) is equal to the (p, q) entry of the matrix x_2^r B_2^r C_2(x_2). The equations (11), (12) and (13) give that

B_2^r = E_{1+r,1} + E_{2+r,2} + ··· + E_{N+1,N+1−r},

so that (B_2^r C_2(x_2))_{pq} = 0 if p < r, while otherwise (B_2^r C_2(x_2))_{pq} equals the (p − r, q) entry (C_2(x_2))_{p−r,q} of C_2(x_2). Hence (B_2^r C_2(x_2))_{pq} = x_2^{q−(p−r)} if q ≥ p − r and (B_2^r C_2(x_2))_{pq} = 0 if q < p − r. In this way, all entries of Y_2(x_2) and Q_2(x_2)K_2(x_2) coincide, and this completes the proof.
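Lemma 2.3 is easy to sanity-check numerically. The following sketch (ours, reusing shift_matrix and C2 from above) builds Y_2(x_2) entrywise from the horizontal-strip description in the proof and compares it against the factorization (16):

import numpy as np

def Y2_direct(x2, N):
    """Entry ((p), (q+r, r)) equals x2^(q+2r-p) when r <= p <= q+r, else 0."""
    Y = np.zeros((N + 1, (N + 1) ** 2))
    for r in range(N + 1):               # lambda_2
        for q in range(N + 1):           # lambda_1 - lambda_2
            for p in range(N + 1):       # mu_1
                if r <= p <= q + r:
                    Y[p, r * (N + 1) + q] = x2 ** (q + 2 * r - p)
    return Y

def Q2(x2, N):
    """Q2(x2) of (15): blocks x2^r * B2^r side by side."""
    B2 = shift_matrix(N).T
    return np.hstack([x2 ** r * np.linalg.matrix_power(B2, r)
                      for r in range(N + 1)])

N, x2 = 4, 0.3
lhs = Y2_direct(x2, N)
rhs = Q2(x2, N) @ np.kron(np.eye(N + 1), C2(x2, N))
assert np.allclose(lhs, rhs)             # (16) holds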

Now, for n ≥ 3 we define inductively the matrices

(17)  U_n(x_n) = I_{(N+1)^{n−1}} + x_n (A ⊗ B_{n−1}) + ··· + x_n^N (A ⊗ B_{n−1})^N = (I_{(N+1)^{n−1}} − x_n (A ⊗ B_{n−1}))^{−1},

(18)  V_n(x_n) = K_{n−1}(x_n) = I_{N+1} ⊗ C_{n−1}(x_n),

(19)  C_n(x_n) = U_n(x_n) V_n(x_n),

(20)  K_n(x_n) = I_{N+1} ⊗ C_n(x_n),

(21)  B_n = B_{n−1} ⊗ I_{N+1} = B_2 ⊗ I_{(N+1)^{n−2}},

(22)  Q_n(x_n) = ( I_{(N+1)^{n−1}} | x_n B_n | ··· | x_n^N B_n^N ).

The following theorem generalizes Lemma 2.3 to any n ≥ 2.

Theorem 2.4. The following equation holds for any n ≥ 2:

(23)  Y_n(x_n) = Q_n(x_n) K_n(x_n).
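As with Lemma 2.3, the recursions (17)-(22) can be checked numerically for small parameters. A sketch (ours, building on the helpers above) that assembles C_3, K_3 and Q_3 and compares Q_3 K_3 against Y_3 built entrywise from the horizontal-strip rule, using the index encoding that Section 3 formalizes via partitionToIndex (consecutive differences read as (N+1)-ary digits):

import itertools
import numpy as np

def part_from_digits(digits):
    """digits = (a_1, ..., a_k) with a_i = lambda_i - lambda_{i+1}."""
    return tuple(sum(digits[i:]) for i in range(len(digits)))

def index_from_digits(digits, N):
    """a_i is the i-th least significant (N+1)-ary digit of the index."""
    return sum(a * (N + 1) ** i for i, a in enumerate(digits))

def Yn_direct(x, n, N):
    Y = np.zeros(((N + 1) ** (n - 1), (N + 1) ** n))
    for bd in itertools.product(range(N + 1), repeat=n - 1):
        mu = part_from_digits(bd)
        for ad in itertools.product(range(N + 1), repeat=n):
            lam = part_from_digits(ad)
            # horizontal strip condition (4)
            if all(lam[i] >= mu[i] >= lam[i + 1] for i in range(n - 1)):
                Y[index_from_digits(bd, N),
                  index_from_digits(ad, N)] = x ** (sum(lam) - sum(mu))
    return Y

def build_B(n, N):
    return np.kron(shift_matrix(N).T, np.eye((N + 1) ** (n - 2)))          # (21)

def build_C(x, n, N):
    if n == 2:
        return C2(x, N)
    size = (N + 1) ** (n - 1)
    U = np.linalg.inv(np.eye(size)
                      - x * np.kron(shift_matrix(N), build_B(n - 1, N)))   # (17)
    V = np.kron(np.eye(N + 1), build_C(x, n - 1, N))                       # (18)
    return U @ V                                                           # (19)

def build_Q(x, n, N):
    Bn = build_B(n, N)
    return np.hstack([x ** r * np.linalg.matrix_power(Bn, r)
                      for r in range(N + 1)])                              # (22)

N, x = 2, 0.7
K3 = np.kron(np.eye(N + 1), build_C(x, 3, N))                              # (20)
assert np.allclose(Yn_direct(x, 3, N), build_Q(x, 3, N) @ K3)              # (23)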

Proof. We mimic the proof of Lemma 2.3. Consider the partitions

λ = (a_1 + ··· + a_n, a_2 + ··· + a_n, ..., a_n),  µ = (b_1 + ··· + b_{n−1}, b_2 + ··· + b_{n−1}, ..., b_{n−1}).

Then the entry of Y_n corresponding to (µ, λ) should equal

x_n^{a_1+2a_2+···+na_n−b_1−2b_2−···−(n−1)b_{n−1}}

if λ/µ is a horizontal strip, i.e., if

a_1 + ··· + a_n ≥ b_1 + ··· + b_{n−1} ≥ a_2 + ··· + a_n ≥ ··· ≥ a_{n−1} + a_n ≥ b_{n−1} ≥ a_n,

and 0 otherwise.

Since

Y_n(x_n) = Q_n(x_n)K_n(x_n) = ( I_{(N+1)^{n−1}} | x_n B_n C_n(x_n) | ··· | x_n^N B_n^N C_n(x_n) ),

the (µ, λ) entry of Y_n(x_n) is in the (1, a_n) block of Y_n, i.e., in

x_n^{a_n} B_n^{a_n} C_n(x_n).

Call this matrix T_n. Since B_n = B_2 ⊗ I_{(N+1)^{n−2}}, we have B_n^{a_n} = B_2^{a_n} ⊗ I_{(N+1)^{n−2}} and

T_n = x_n^{a_n} B_n^{a_n} U_n(x_n) V_n(x_n)
    = x_n^{a_n} B_n^{a_n} U_n(x_n) (I_{N+1} ⊗ C_{n−1}(x_n))
    = x_n^{a_n} B_n^{a_n} ( I_{N+1} ⊗ I_{(N+1)^{n−2}} + x_n A ⊗ (B_{n−1} C_{n−1}(x_n)) + x_n^2 A^2 ⊗ (B_{n−1}^2 C_{n−1}(x_n)) + ··· + x_n^N A^N ⊗ (B_{n−1}^N C_{n−1}(x_n)) )
    = x_n^{a_n} ( B_2^{a_n} ⊗ I_{(N+1)^{n−2}} + x_n B_2^{a_n} A ⊗ (B_{n−1} C_{n−1}(x_n)) + x_n^2 B_2^{a_n} A^2 ⊗ (B_{n−1}^2 C_{n−1}(x_n)) + ··· + x_n^N B_2^{a_n} A^N ⊗ (B_{n−1}^N C_{n−1}(x_n)) ).

The (µ, λ) entry of Y_n is thus in the (b_{n−1}, a_{n−1}) block of T_n, which equals the (b_{n−1}, a_{n−1}) block of

x_n^{a_n} · x_n^{a_{n−1}+a_n−b_{n−1}} ( B_2^{a_n} A^{a_{n−1}+a_n−b_{n−1}} ⊗ (B_{n−1}^{a_{n−1}+a_n−b_{n−1}} C_{n−1}(x_n)) ),

i.e.,

x_n^{a_{n−1}+2a_n−b_{n−1}} B_{n−1}^{a_{n−1}+a_n−b_{n−1}} C_{n−1}(x_n),

if a_{n−1} + a_n ≥ b_{n−1} ≥ a_n, and 0 otherwise. The inductive step is thus clear, and we conclude by induction that the (µ, λ) entry of Y_n equals x_n^{a_1+2a_2+···+na_n−b_1−2b_2−···−(n−1)b_{n−1}} if λ/µ is a horizontal strip and 0 otherwise.

3. The Algorithm

3.1. Computing only the desired Schur functions. As mentioned in Remark 2.2, a straightforward implementation of Y_n(x_n) would yield Schur functions corresponding to the entire set of partitions P_n. The number of such functions is (N + 1)^n. However, we only wish to compute the Schur functions over the partitions of size at most N, and P_n contains many more partitions than those.

In the algorithm presented in this section, we devise a method that leverages the structure explored thus far, but computes the Schur functions only on the desired partitions. We do this by keeping track of the indices of the desired partitions within the vectors F_k(x_1, ..., x_k) for k = 1, ..., n, and only maintaining the Schur functions over those partitions. Whenever we multiply by a matrix, we do only the work necessary to compute the next set of desired Schur functions.

The key reason we are able to do so efficiently is that the structured matrix Y_k(x_k) requires us only to reference partitions that are smaller than the ones being processed during computation. Thus, by maintaining all partitions up to a certain size, we guarantee the ability to proceed in computing the solution.

3.2. Pseudocode notation. We will use pseudocode based on MATLAB notation. Data arrays are indexed with parentheses "()" and are 1-indexed. Note that this is different from the indexing of partitions in the set P_n, which will be 0-indexed, as this is more elegant mathematically. Further, the use of the colon ":" in the notation array(index1:index2) indicates taking all values in array between index1 and index2, inclusive.

3.3. Algorithm and analysis.

3.3.1. Helper functions. Let Φ(N, n) be the set of partitions of size at most N that use at most n rows, and let φ(N, n) = |Φ(N, n)|. We define a function computeIndices(N,n) that finds the (0-indexed) indices of Φ(N, n) within the list of partitions in P_n (sorted, as before, in reverse lexicographic order). Note that all indices generated by computeIndices(N,n) must be less than (N + 1)^n, since that is the number of partitions in P_n.

computeIndices(N,n)
  partitions ← enumeratePartitions(N,n,0)
  for m ← 1 to length(partitions) do
    indices(m) ← partitionToIndex(partitions(m),N,n)
  end for
  Return indices

enumeratePartitions(N,n,min)
  for m ← min to ⌊N/n⌋ do
    if n = 1 then
      Add (m) to partitions
    else
      subPartitions ← enumeratePartitions(N − m,n − 1,m)
      Add m to the n-th position of all partitions in subPartitions
      Add subPartitions to partitions
    end if
  end for
  Return partitions

partitionToIndex(partition,N,n)
  index ← partition(n)
  for m ← (n − 1) down to 1 do
    index ← index · (N + 1) + partition(m) − partition(m + 1)
  end for
  Return index

The function enumeratePartitions enumerates all partitions in the set Φ(N, n) in reverse lexicographic order. It works by stepping through the possible values of the last (smallest) element and recursively enumerating the rest of the partition. To analyze its complexity, we simply observe that a constant amount of work is done per recursion level per partition enumerated. Thus, the running time of enumeratePartitions(N,n,0) is simply O(n · φ(N, n)).

The function partitionToIndex takes a partition and returns its index within the sorted set P_n. It simply takes the differences between consecutive elements and interprets the result as an (N + 1)-ary number, with the elements increasing in significance from first to last. This function clearly takes O(n) time per partition, so its running time on φ(N, n) partitions is O(n · φ(N, n)). Thus, computeIndices(N,n) also takes O(n · φ(N, n)) time. Note that the output of computeIndices is sorted in ascending order.
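For reference, a direct Python transcription of the three helpers (ours; partitions are stored as weakly decreasing n-tuples, possibly padded with zeros, and indices are 0-indexed as in P_n):

def enumerate_partitions(N, n, min_part=0):
    """Phi(N, n) in reverse lexicographic order, as n-tuples."""
    result = []
    for m in range(min_part, N // n + 1):
        if n == 1:
            result.append((m,))
        else:
            for sub in enumerate_partitions(N - m, n - 1, m):
                result.append(sub + (m,))   # m goes in the n-th position
    return result

def partition_to_index(partition, N):
    """Index within P_n: consecutive differences as an (N+1)-ary number."""
    n = len(partition)
    index = partition[n - 1]
    for m in range(n - 2, -1, -1):
        index = index * (N + 1) + partition[m] - partition[m + 1]
    return index

def compute_indices(N, n):
    """Ascending indices of Phi(N, n) inside P_n."""
    return [partition_to_index(p, N) for p in enumerate_partitions(N, n)]

For example, compute_indices(2, 2) returns [0, 1, 2, 3], the indices of the partitions (), (1), (2) and (1, 1) within P_2.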

3.3.2. Matrix functions. In this section we describe five matrix functions mulY, mulQ, mulK, mulC, and mulU, which simulate multiplication by the matrices Y_n(x_n), Q_n(x_n), K_n(x_n), C_n(x_n), and U_n(x_n), respectively.

We first present an algorithm mulY that simulates multiplication by Y_n(x_n) = Q_n(x_n)K_n(x_n), but that only computes the Schur functions corresponding to partitions in Φ(N, n). Suppose Y_n(x_n) contains a non-zero element at (i, j). Recall from (9) that this implies that the i-th partition in P_{n−1}, call it µ, is a subpartition of the j-th partition in P_n, call it λ. Thus |µ| ≤ |λ|, and if λ is in Φ(N, n), then µ must be in Φ(N, n − 1).

From the above argument, we see that the partitions in Φ(N, n) will only depend on partitions in Φ(N, n − 1), so we only need to simulate a very sparse part of the original Y_n(x_n) matrix. mulY takes as input the Schur functions over Φ(N, n − 1), the indices of Φ(N, n − 1) in P_{n−1} (as computed by computeIndices(N,n − 1)), and x_n. mulY outputs the Schur functions over Φ(N, n) (as well as their indices).

mulY(input,inputIndices,x,n,N)
  (output,outputIndices) ← mulQ(input,inputIndices,x,n,N)
  output ← mulK(output,outputIndices,x,n,N)
  Return (output,outputIndices)

mulQ(input,inputIndices,x,n,N)
  blockLength ← (N + 1)^{n−1}
  offset ← (N + 1)^{n−2}

  # compute the indices in the output
  outputIndices ← computeIndices(N,n)
  H ← constructHashTable(outputIndices)

  for m ← 1 to length(outputIndices) do
    curIndex ← outputIndices(m)
    if curIndex < blockLength then
      # works because input and output indices below (N + 1)^{n−1} match
      output(m) ← input(m)
    else if curIndex (mod blockLength) < blockLength − offset then
      output(m) ← x · output(H(curIndex − blockLength + offset))
    end if
  end for
  Return (output,outputIndices)

mulY simply runs mulQ and mulK. From (22), we designed mulQ to process its input in blocks of length (N + 1)^{n−1}. The first block is simply copied from the input. Note that since Φ(N, n) is a superset of Φ(N, n − 1), and the partitions are ordered in reverse lexicographic order, the first φ(N, n − 1) entries of outputIndices are exactly equal to inputIndices. For the remaining blocks, we note that each block is a shifted (by offset) version of the previous block multiplied by x_n. Since we need to look up values in output based on their indices, we place all of the indices into a hash table so we can do constant-time lookups within the output array. The function constructHashTable(outputIndices) just constructs a hash table that maps each index to its location within the array outputIndices.

For a partition λ = (λ_1, λ_2, ..., λ_n) at curIndex in Φ(N, n), we know we will never miss on a hash table lookup in the for loop, because the partition located at curIndex − blockLength + offset is just λ† = (λ_1, λ_2, ..., λ_{n−1}, λ_n − 1). This fact can be derived from the reverse lexicographic ordering. Since |λ†| < |λ|, we know that λ† is also in Φ(N, n).

As argued before, computeIndices(N,n) takes O(n · φ(N, n)) time. Constructing the hash table costs linear time in the number of entries, or O(φ(N, n)). The for loop of mulQ takes time O(φ(N, n)), since hash table lookups take constant time. Therefore the total time to multiply by Q_n(x_n) using mulQ is O(n · φ(N, n)).
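A Python transcription of mulQ (ours, using compute_indices from above; a plain dict plays the role of constructHashTable):

def mul_q(inp, input_indices, x, n, N):
    """Simulate multiplication by Q_n(x) on the values over Phi(N, n-1)."""
    block = (N + 1) ** (n - 1)
    offset = (N + 1) ** (n - 2)
    out_indices = compute_indices(N, n)
    pos = {idx: m for m, idx in enumerate(out_indices)}   # the hash table H
    out = [0.0] * len(out_indices)
    for m, cur in enumerate(out_indices):
        if cur < block:
            out[m] = inp[m]               # first block: indices coincide
        elif cur % block < block - offset:
            # value for lambda comes from lambda with its last part reduced by 1;
            # out is filled in ascending index order, so it is already computed
            out[m] = x * out[pos[cur - block + offset]]
    return out, out_indices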

The following function simulates multiplying its input by K_n(x_n) = I_{N+1} ⊗ C_n(x_n), which simply multiplies each of the input's (N + 1) blocks of size (N + 1)^{n−1} by C_n(x_n). The values in each block are found by scanning pointers across the indices array, which is sorted in ascending order.

mulK(input,indices,x,n,N)
  blockLength ← (N + 1)^{n−1}
  minPointer ← 1
  maxPointer ← 1
  for blockIndex ← 0 to ⌊max(indices) / blockLength⌋ do
    # figure out which indices are in the current block
    curIndicesMin ← blockIndex · blockLength
    curIndicesMax ← curIndicesMin + blockLength − 1

    # scan pointers forward
    Scan minPointer to point at the smallest index in indices that is at least curIndicesMin
    Scan maxPointer to point at the largest index in indices that is at most curIndicesMax

    # extract the data for the current block
    curData ← input(minPointer:maxPointer)
    # extract the indices for the current block and subtract the block offset
    curIndices ← indices(minPointer:maxPointer) − curIndicesMin

    # run mulC on the block of data
    output(minPointer:maxPointer) ← mulC(curData,curIndices,x,n,N)
  end for
  Return output

Since mulK, mulC, and mulU are called recursively on inputs of varying length, we will analyze their complexity in terms of the number of input elements. Let l be the length of the input to a call to mulK, and let l_i be the number of input indices in the i-th block of the for loop. Note that Σ_{i=0}^{N} l_i = l. Excluding calls to mulC, multiplying by K_n(x_n) using mulK takes O(l) time for pointer scanning and data manipulation. If we let T_C(n, l) denote the time taken to multiply a vector of length l by C_n(x_n) using mulC, then the time to multiply by K_n(x_n) is O(l) + Σ_{i=0}^{N} T_C(n, l_i).

We define mulC to simulate multiplying by C_n(x_n) = U_n(x_n)K_{n−1}(x_n) by calling mulU and mulK. Note that K_1 is the identity matrix, so we can skip the mulK step when n = 2.

mulC(input,indices,x,n,N)
  output ← mulU(input,indices,x,n,N)
  if n > 2 then
    output ← mulK(output,indices,x,n − 1,N)
  end if
  Return output
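In Python (ours; mul_u is sketched after the discussion of mulU below, and Python resolves the mutual references at call time):

def mul_k(inp, indices, x, n, N):
    """Simulate K_n(x) = I ⊗ C_n(x): apply C_n to each (N+1)^{n-1}-block."""
    block = (N + 1) ** (n - 1)
    out = [0.0] * len(inp)
    lo = 0
    while lo < len(indices):
        block_id = indices[lo] // block
        hi = lo
        while hi < len(indices) and indices[hi] // block == block_id:
            hi += 1                       # indices are sorted ascending
        rel = [idx - block_id * block for idx in indices[lo:hi]]
        out[lo:hi] = mul_c(inp[lo:hi], rel, x, n, N)
        lo = hi
    return out

def mul_c(inp, indices, x, n, N):
    """Simulate C_n(x) = U_n(x) K_{n-1}(x); K_1 is the identity."""
    out = mul_u(inp, indices, x, n, N)
    if n > 2:
        out = mul_k(out, indices, x, n - 1, N)
    return out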

Finally, we define mulU:

mulU(input,indices,x,n,N)
  blockLength ← (N + 1)^{n−2}
  if n = 2 then
    offset ← 0
  else
    offset ← (N + 1)^{n−3}
  end if
  H ← constructHashTable(indices)
  for m ← 1 to length(input) do
    curIndex ← indices(m)
    if curIndex ≥ blockLength AND curIndex (mod blockLength) < blockLength − offset then
      output(m) ← x · output(H(curIndex − blockLength + offset)) + input(m)
    else
      output(m) ← input(m)
    end if
  end for
  Return output

The function mulU simulates multiplication by U_n(x_n) by computing a linear-time backsolve using U_n(x_n)^{−1}. Suppose we are given a vector v and wish to compute w = v · U_n(x_n), or equivalently v = w · U_n(x_n)^{−1}. Let w = (w_1, w_2, ..., w_{(N+1)^{n−1}}) and v = (v_1, v_2, ..., v_{(N+1)^{n−1}}), where each vector is split into (N + 1) blocks of length (N + 1)^{n−2}. Recalling (17), we see that the first block of v is equal to the first block of w, and then within each remaining block we have the relation v_i = w_i − x · w_{i−(N+1)^{n−2}+(N+1)^{n−3}} if i is in the first (N + 1)^{n−2} − (N + 1)^{n−3} elements of the block, and v_i = w_i otherwise. Rearranging terms, we have

(24)  w_i = x · w_{i−(N+1)^{n−2}+(N+1)^{n−3}} + v_i  if property A is satisfied, and  w_i = v_i  otherwise,

where property A is satisfied if and only if i is not in the first block and i (mod (N + 1)^{n−2}) < (N + 1)^{n−2} − (N + 1)^{n−3}. The above pseudocode for mulU precisely implements (24), yielding a linear-time algorithm for multiplication by U_n(x_n). Again, we know that the hash table will always hit on lookup, because it will always be looking for a partition smaller than the current one being processed in the for loop; if a partition is in the set being processed, all partitions smaller than it must also be in the set.

The complexity analysis for mulU is similar to that for mulQ. Assuming the length of the input is l, constructHashTable takes O(l) time, and the for loop takes O(l) time since it does constant work per input element. Thus, the total running time of mulU is O(l).
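A Python transcription of the backsolve (ours, same conventions as the sketches above; the indices arrive block-relative from mul_k and are sorted ascending, so every referenced entry is already computed):

def mul_u(inp, indices, x, n, N):
    """Simulate U_n(x) via the backsolve (24)."""
    block = (N + 1) ** (n - 2)
    offset = 0 if n == 2 else (N + 1) ** (n - 3)
    pos = {idx: m for m, idx in enumerate(indices)}       # the hash table H
    out = [0.0] * len(inp)
    for m, idx in enumerate(indices):
        if idx >= block and idx % block < block - offset:
            # property A: w_i = x * w_{i - block + offset} + v_i
            out[m] = x * out[pos[idx - block + offset]] + inp[m]
        else:
            out[m] = inp[m]
    return out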

Combined with the running time for mulK, this implies that the running time of mulC is

(25)  T_C(n, l) = O(l) if n = 1, and T_C(n, l) = O(l) + Σ_{i=0}^{N} T_C(n − 1, l_i) otherwise.

Clearly, for each level of recursion (each value of n), a linear amount of work is done in the size of the original input. Thus, solving this recurrence yields

(26)  T_C(n, l) = O(nl).

Finally, this implies that multiplying by Y_n(x_n) takes O(n · φ(N, n)) + O(φ(N, n)) + O(n · φ(N, n)) = O(n · φ(N, n)) time.

To compute the Schur functions for (1) from scratch, note that Φ(N, 1) is just the set of partitions {(0), (1), ..., (N)}, and the vector of Schur functions over those partitions is just (1, x_1, x_1^2, ..., x_1^N), which takes O(N) time to compute. Then, computing the Schur functions over Φ(N, k) for k = 2, 3, ..., n requires multiplication by Y_2(x_2), Y_3(x_3), ..., Y_n(x_n), which takes time at most

(27)  Σ_{k=2}^{n} O(k · φ(N, k)) < O(n^2 · φ(N, n)).

As mentioned in the introduction, φ(N, n) ≤ φ(N, N) = K_N = O(e^{π√(2N/3)}) [9, p. 116], so the running time can also be bounded by O(n^2 K_N) = O(n^2 e^{π√(2N/3)}).
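Chaining the sketches above gives a small end-to-end Python driver (ours; it mirrors the mulY pipeline, and its output can be cross-checked against the naive schur() from the introduction) that computes all Schur functions over Φ(N, n) and assembles the truncated series (1):

def mul_y(inp, input_indices, x, n, N):
    """One step of (8): multiply the maintained values by Y_n(x)."""
    out, out_indices = mul_q(inp, input_indices, x, n, N)
    out = mul_k(out, out_indices, x, n, N)
    return out, out_indices

def all_schur(xs, N):
    """Schur functions over Phi(N, n), n = len(xs), via the mulY chain."""
    vals = [xs[0] ** k for k in range(N + 1)]     # F_1 restricted to Phi(N, 1)
    idxs = list(range(N + 1))
    for k in range(2, len(xs) + 1):
        vals, idxs = mul_y(vals, idxs, xs[k - 1], k, N)
    return vals

def hyp_pfq(a_params, b_params, xs, N):
    """Truncation (1) of pFq, using pochhammer/hook_product from Section 1."""
    total = 0.0
    for s_val, padded in zip(all_schur(xs, N), enumerate_partitions(N, len(xs))):
        kappa = tuple(p for p in padded if p > 0)
        coeff = 1.0 / hook_product(kappa)
        for a in a_params:
            coeff *= pochhammer(a, kappa)
        for b in b_params:
            coeff /= pochhammer(b, kappa)
        total += coeff * s_val
    return total

As a sanity check, with no parameters (1) reduces to e^{x_1+···+x_n}, so hyp_pfq([], [], [0.1, 0.2, 0.3], 30) should closely approximate exp(0.6).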

References

1. G. J. Byers and F. Takawira, Spatially and temporally correlated MIMO channels: modeling and capacity analysis, IEEE Transactions on Vehicular Technology 53 (2004), 634–643.
2. Michael Clausen and Ulrich Baum, Fast Fourier transforms for symmetric groups: theory and implementation, Math. Comp. 61 (1993), no. 204, 833–847. MR 1192969 (94a:20028)

3. James W. Cooley and John W. Tukey, An algorithm for the machine calculation of complex Fourier series, Math. Comp. 19 (1965), 297–301. MR 0178586 (31 #2843)
4. J. Demmel and P. Koev, Accurate and efficient evaluation of Schur and Jack functions, Math. Comp. 75 (2006), 223–239.
5. Persi Diaconis and Daniel Rockmore, Efficient computation of the Fourier transform on finite groups, J. Amer. Math. Soc. 3 (1990), no. 2, 297–332. MR 1030655 (92g:20024)
6. H. Gao, P. J. Smith, and M. V. Clark, Theoretical reliability of MMSE linear diversity combining in Rayleigh-fading additive interference channels, IEEE Transactions on Communications 46 (1998), 666–672.
7. A. J. Grant, Performance analysis of transmit beamforming, IEEE Transactions on Communications 53 (2005), 738–744.
8. R. Gutiérrez, J. Rodriguez, and A. J. Sáez, Approximation of hypergeometric functions with matricial argument through their development in series of zonal polynomials, Electron. Trans. Numer. Anal. 11 (2000), 121–130.
9. G. H. Hardy, Ramanujan: Twelve lectures on subjects suggested by his life and work, AMS Chelsea, New York, 1999.
10. M. Kang and M.-S. Alouini, Largest eigenvalue of complex Wishart matrices and performance analysis of MIMO MRC systems, IEEE Journal on Selected Areas in Communications 21 (2003), no. 3, 418–431.
11. Plamen Koev and Alan Edelman, The efficient evaluation of the hypergeometric function of a matrix argument, Math. Comp. 75 (2006), no. 254, 833–846 (electronic). MR 2196994 (2006k:33007)
12. I. G. Macdonald, Symmetric functions and Hall polynomials, Second ed., Oxford University Press, New York, 1995.
13. M. R. McKay and I. B. Collings, Capacity bounds for correlated Rician MIMO channels, 2005 IEEE International Conference on Communications, ICC 2005, vol. 2, 16–20 May 2005, pp. 772–776.
14. M. R. McKay and I. B. Collings, General capacity bounds for spatially correlated Rician MIMO channels, IEEE Transactions on Information Theory 51 (2005), 3121–3145.
15. R. J. Muirhead, Aspects of multivariate statistical theory, John Wiley & Sons Inc., New York, 1982.
16. A. Ozyildirim and Y. Tanik, Outage probability analysis of a CDMA system with antenna arrays in a correlated Rayleigh environment, IEEE Military Communications Conference Proceedings, MILCOM 1999, vol. 2, 31 Oct.–3 Nov. 1999, pp. 939–943.
17. A. Ozyildirim and Y. Tanik, SIR statistics in antenna arrays in the presence of correlated Rayleigh fading, IEEE VTS 50th Vehicular Technology Conference, VTC 1999 Fall, vol. 1, 19–22 September 1999, pp. 67–71.
18. Markus Püschel, Cooley–Tukey FFT like algorithms for the DCT, Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, 2003, pp. 501–504.
19. Markus Püschel and José M. F. Moura, The algebraic approach to the discrete cosine and sine transforms and their fast algorithms, SIAM J. Comput. 32 (2003), no. 5, 1280–1316. MR 2001274 (2004j:42023)
20. Hyundong Shin and Jae Hong Lee, Capacity of multiple-antenna fading channels: spatial fading correlation, double scattering, and keyhole, IEEE Transactions on Information Theory 49 (2003), 2636–2647.
21. V. Smidl and A. Quinn, Fast variational PCA for functional analysis of dynamic image sequences, Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis, ISPA 2003, vol. 1, 18–20 September 2003, pp. 555–560.

Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, U.S.A.
E-mail address: cychan@mit.edu

Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
E-mail address: drensky@math.bas.bg

Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, U.S.A.
E-mail address: edelman@math.mit.edu

Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, U.S.A.
E-mail address: plamen@math.mit.edu
