Homework for ECE 594BB/194BB: Tensor Methods for Machine Learning and Big Data
Due date: online submission due at 23:59 on Tuesday, 11/10/2020. No late submissions.
Submission requirement:
1. Please submit your zipped MATLAB or Python code.
2. Please submit a PDF report explaining the key ideas of your implementation and your results.
3. The report should be typed (in either Word or LaTeX).
Guideline:
In this homework, you will implement at least two tensor factorization algorithms and use them to compress the model parameters of deep neural networks:
You can choose one problem from Problems 1 and 2.
Problem 3 is mandatory.
You may earn bonus points if you implement and test all three tensor factorizations.
Problem 1: CP compression of a fully-connected (FC) layer
In this problem, you will need to implement CP factorization and use it to compress the tensorized data array of a 784×512 FC layer, which is part of a classification model for the MNIST dataset.
(a) Implement CP decomposition with alternating least squares (ALS). For a general d-way tensor of size I1-by-I2-by-…-by-Id and a specified rank R, your algorithm should return the d factor matrices of the CP format. (A minimal Python sketch is given after this problem.)
(b) Test your algorithm with the dataset “cp_fc_layer.mat”. We have already converted the
FC weight matrix to a 4-way tensor of size [28,28,16,32]. Please increase the rank R from
20 to 50, and plot the Frobenius-norm error of your CP factorization as a function of R.
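For reference, below is a minimal NumPy sketch of CP-ALS, not a prescribed solution: the C-order unfolding convention, the random initialization, the fixed iteration count, and the function names (cp_als, unfold, khatri_rao) are all assumptions of this sketch.

import numpy as np

def unfold(T, mode):
    # Mode-k unfolding (C order): move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    # Column-wise Khatri-Rao product; earlier matrices vary slowest,
    # which matches the C-order unfolding above.
    K = mats[0]
    for M in mats[1:]:
        K = np.einsum('ir,jr->ijr', K, M).reshape(-1, K.shape[1])
    return K

def cp_als(X, R, n_iter=200, seed=0):
    # Returns d factor matrices A[k], each of shape (I_k, R).
    rng = np.random.default_rng(seed)
    d = X.ndim
    A = [rng.standard_normal((X.shape[k], R)) for k in range(d)]
    for _ in range(n_iter):
        for k in range(d):
            others = [A[m] for m in range(d) if m != k]
            KR = khatri_rao(others)
            # Hadamard product of the Gram matrices equals KR.T @ KR.
            G = np.ones((R, R))
            for M in others:
                G *= M.T @ M
            # Least-squares update for mode k: solve A[k] @ G = X_(k) @ KR.
            A[k] = unfold(X, k) @ KR @ np.linalg.pinv(G)
    return A

For part (b), the Frobenius-norm error at each rank R can then be evaluated as np.linalg.norm(unfold(X, 0) - A[0] @ khatri_rao(A[1:]).T).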
Problem 2: Tucker compression of a convolution filter
In this problem, you will need to implement Tucker factorization and use it to compress the tensorized format of a 9×64×64 convolution filter, which is part of a CNN classification model for the CIFAR-10 dataset.
(a) Implement Tucker decomposition with higher-order orthogonal iteration (HOOI) for a general d-way tensor of size I1-by-I2-by-…-by-Id and a specified multilinear rank (R1, R2, …, Rd). (A minimal Python sketch is given after this problem.)
(b) Test your algorithm with the dataset “tucker_conv_tensor.mat”. Please choose several multilinear ranks (R1, R2, …, Rd), and use a table to report the Frobenius-norm error of your Tucker decomposition for each choice.
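For reference, below is a minimal NumPy sketch of HOOI, assuming an HOSVD (truncated-SVD) initialization and a fixed iteration count; the names hooi and mode_product are illustrative.

import numpy as np

def unfold(T, mode):
    # Mode-k unfolding (C order).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # Mode-`mode` product: contract M (J x I_mode) with axis `mode` of T.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hooi(X, ranks, n_iter=50):
    # Returns (core G, factor list U), with U[k] of shape (I_k, R_k).
    d = X.ndim
    # HOSVD initialization: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(X, k), full_matrices=False)[0][:, :ranks[k]]
         for k in range(d)]
    for _ in range(n_iter):
        for k in range(d):
            # Project X onto every factor subspace except mode k ...
            Y = X
            for m in range(d):
                if m != k:
                    Y = mode_product(Y, U[m].T, m)
            # ... then update U[k] as the dominant left subspace of Y_(k).
            U[k] = np.linalg.svd(unfold(Y, k), full_matrices=False)[0][:, :ranks[k]]
    # Core tensor: project X onto all d factor subspaces.
    G = X
    for m in range(d):
        G = mode_product(G, U[m].T, m)
    return G, U

Because the factor matrices are orthonormal, the error for the table in part (b) can be computed without reconstructing the tensor: ||X - Xhat||_F = sqrt(||X||_F^2 - ||G||_F^2).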
Problem 3: Tensor-train compression of an embedding table
In this problem, you will need to implement tensor-train factorization and use it to compress a 25,000×256 embedding table, which is the first layer of a natural language processing model. Embedding tables are also widely used in the recommendation systems of companies such as Facebook and Amazon.
(a) Implement tensor-train decomposition for a general d-way tensor. The function should take the original d-way tensor and a truncation error bound as input (see our lecture notes), and output the d resulting TT cores. (A minimal Python sketch is given after this problem.)
(b) Test your algorithm with the dataset “tt_embedding.mat”. We have already folded the 2D embedding table into a 7-way tensor of size [5,8,25,25,4,8,8]. As you vary the truncation error bound, you will obtain 7 TT cores of different sizes (and thus a different total number of parameters in the TT cores). You can then calculate both the actual Frobenius-norm error ε and the compression ratio S. Plot S as a function of ε.
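For reference, below is a minimal NumPy sketch of TT-SVD with the per-unfolding delta-truncation rule of Oseledets (2011) (reference 5). Treating the input bound as a relative Frobenius-norm bound, along with the names tt_svd and tt_full, is an assumption to be checked against the lecture notes.

import numpy as np

def tt_svd(X, eps):
    # TT-SVD with (assumed) relative Frobenius-norm error bound eps.
    # Returns d cores; core k has shape (r_{k-1}, I_k, r_k), with r_0 = r_d = 1.
    d = X.ndim
    delta = eps * np.linalg.norm(X) / np.sqrt(d - 1)  # per-SVD truncation budget
    cores, r = [], 1
    C = X.reshape(1, -1)
    for k in range(d - 1):
        C = C.reshape(r * X.shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        # tail[i] = Frobenius norm of the discarded singular values s[i:].
        tail = np.sqrt(np.cumsum(s[::-1] ** 2)[::-1])
        r_new = 1
        while r_new < s.size and tail[r_new] > delta:
            r_new += 1
        cores.append(U[:, :r_new].reshape(r, X.shape[k], r_new))
        C = s[:r_new, None] * Vt[:r_new]  # carry the remainder to the next mode
        r = r_new
    cores.append(C.reshape(r, X.shape[-1], 1))
    return cores

def tt_full(cores):
    # Contract the TT cores back into a full tensor (to measure the actual error).
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(-1, 0))
    return T.reshape(T.shape[1:-1])

One reasonable reading of the quantities in part (b), to be matched against the lecture notes, is ε = ||X - tt_full(cores)||_F and S = X.size / sum(G.size for G in cores); sweeping the truncation bound then traces out S as a function of ε.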
References:
1) Lecture notes 3-6
2) T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review 51.3 (2009): 455-500.
3) L. De Lathauwer, B. De Moor, and J. Vandewalle, “A multilinear singular value decomposition,” SIAM Journal on Matrix Analysis and Applications 21.4 (2000): 1253-1278.
4) L. De Lathauwer, B. De Moor, and J. Vandewalle, “On the best rank-1 and rank-(R1, R2, …, RN) approximation of higher-order tensors,” SIAM Journal on Matrix Analysis and Applications 21.4 (2000): 1324-1342.
5) I. V. Oseledets, “Tensor-train decomposition,” SIAM Journal on Scientific Computing 33.5 (2011): 2295-2317.