
Date: 2021-02-27

2DI70/2MMS80 - Statistical Learning Theory

Nearest neighbor classification and

handwritten digit classification

1 Introduction

Sometimes simple ideas can be surprisingly good. This is the case with one of the oldest,

yet still rather popular, learning rules, known as the k-nearest neighbor rule (abbreviated

k-NN in this document). Consider the setting of supervised learning. Suppose you have a

training data set {(X_i, Y_i)}_{i=1}^n, where X_i ∈ X and Y_i ∈ Y, and where X should be a metric space

(that is, a space endowed with a way to measure distances). As usual, our goal is to learn

a prediction rule f : X → Y that is able to make “good” predictions on unseen data.

The idea of k-NN is remarkably simple. Given a point x ∈ X for which we want a

prediction, we simply look for the k “closest” points in the training set and make a prediction

based on a majority vote (classification) or an average (regression) of the neighbor labels. It

is as simple as that. Computationally this might seem cumbersome, particularly for large

datasets. But one can use clever computational tricks to ensure this can be done quickly.

In this assignment, which is divided into two parts, you will: (i) get first-hand experience with this method by implementing it and choosing a good set of tunable parameters in a sound way; and (ii) analyze the performance of this method in some generality and get a better understanding of why it is sensible.

To make the explanation more concrete let us consider the problem of handwritten digit

classification (which is the topic of part I): given a low resolution image of a handwritten

digit we would like to classify it as one of the digits in {0, 1, . . . , 9}. More specifically our

images have 28×28 pixels, each pixel taking values in {0, 1, 2, . . . , 255}. Therefore X =

{0, 1, . . . , 255}28×28 and Y = {0, 1, . . . , 9}.

2 The k-NN rule

Let d : X × X → [0,+∞) be a metric1 in X . Let x ∈ X be an arbitrary point in X and

consider the re-ordering of the training data pairs as

(X_{(1)}(x), Y_{(1)}(x)), (X_{(2)}(x), Y_{(2)}(x)), . . . , (X_{(n)}(x), Y_{(n)}(x)),

1A metric or distance is a function that must satisfy the following properties: (i) ∀x ∈ X d(x, x) = 0; (ii)

∀x, y ∈ X d(x, y) = d(y, x) (symmetry); (iii) ∀x, y, z ∈ X d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality).


so that

d(x, X_{(1)}(x)) ≤ d(x, X_{(2)}(x)) ≤ · · · ≤ d(x, X_{(n)}(x)) .

Note that the ordering depends on the specific point x (hence the cumbersome notation)

and might not be unique. In that case we can break ties in some pre-defined way (e.g., if

two points are at equal distance from x, the point that appears first in the original dataset

will also appear first in the ordered set). The k-NN rule (for classification) is defined as

\hat{f}_n(x) = \arg\max_{y \in \mathcal{Y}} \sum_{i=1}^{k} \mathbf{1}\{ Y_{(i)}(x) = y \} .    (1)

In other words, just look among the k nearest neighbors and choose the class that is represented most often. Obviously, there might be situations where two (or more) classes

appear an equal number of times. In such situations one can break these ties according to

a pre-specified rule.

The performance of the method described above hinges crucially on the choice of two parameters: k, the number of neighbors used for prediction; and d : X × X → R, the distance metric used to define the proximity of two points in the feature space. There are many possible choices for d, and a naïve but sensible starting point is to consider the usual Euclidean distance: if x, y ∈ R^l then the Euclidean distance is simply given by \sqrt{\sum_{i=1}^{l} (x_i - y_i)^2}.
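To make the rule concrete, here is a minimal sketch of a k-NN prediction in Python (illustrative code, not part of the assignment; the function name knn_predict is my own, NumPy's stable argsort plays the role of the permitted sorting subroutine, and np.unique is just a convenient vote counter):

```python
import numpy as np

def knn_predict(x, X_train, y_train, k):
    """Predict the label of x by majority vote among its k nearest
    training points, using the Euclidean distance."""
    x = np.asarray(x, dtype=float)
    X_train = np.asarray(X_train, dtype=float)
    # Squared Euclidean distance from x to every training point
    # (squaring preserves the ordering, so the square root is unnecessary).
    dists = np.sum((X_train - x) ** 2, axis=1)
    # A stable sort keeps equidistant points in dataset order, matching the
    # tie-breaking convention described above.
    nearest = np.argsort(dists, kind="stable")[:k]
    labels, counts = np.unique(np.asarray(y_train)[nearest], return_counts=True)
    # argmax returns the first maximum, so vote ties go to the smallest label.
    return labels[np.argmax(counts)]
```

In a real implementation one would, of course, vectorize this over all test points rather than call it once per query.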

3 The MNIST dataset

The MNIST dataset2 is a classical dataset frequently used to demonstrate machine learning methods, and it is still often used as a benchmark for new methodologies. This

dataset is provided as comma-separated value (csv) files in CANVAS. The training set

MNIST train.csv consists of 60000 images of handwritten digits and the corresponding labels (provided by a human expert). The test set MNIST test.csv consists of 10000 images

of handwritten digits and the corresponding labels (this file will be provided only closer to

the assignment deadline). In addition, in CANVAS you will also find two smaller training

and test sets, MNIST train small.csv (3000 examples) and MNIST test small.csv (1000

examples). These will be used for a large part of the assignment, to avoid the struggles

associated with large datasets and to test out your implementations.

The format of the data is as follows: each row in the .csv file has 785 entries and

corresponds to a single example. The first entry in the row is the “true” label, in Y =

{0, 1, . . . , 9} and the 784 subsequent entries encode the image of the digit – each entry

corresponding to a pixel intensity, read in lexicographical order (left-to-right, then top-to-bottom). Pixel intensities take values in {0, 1, . . . , 255}. The Matlab function showdigit.m

in CANVAS will take as input a row of this data and display the corresponding digit image.

Figure 1 shows several examples from MNIST train.csv.
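As a rough sketch of reading this format (assuming the .csv files contain no header row; the function name is my own choice, and NumPy is used purely for illustration), the rows can be split into labels and pixel vectors like this:

```python
import numpy as np

def load_mnist_csv(path):
    """Load one of the MNIST .csv files described above: each row has
    785 entries, a label followed by 784 pixel intensities."""
    data = np.loadtxt(path, delimiter=",")
    labels = data[:, 0].astype(int)  # first entry of each row: label in {0,...,9}
    images = data[:, 1:]             # remaining 784 entries: one pixel each
    return images, labels
```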

2See http://yann.lecun.com/exdb/mnist/ for more details and information.


[Five digit images, with panel titles: “True label is 5”, “True label is 0”, “True label is 4”, “True label is 1”, “True label is 9”.]

Figure 1: First five examples from MNIST train.csv and the corresponding labels provided

by a human expert.

Ultimately the goal is to minimize the probability of making errors. For the purposes of this assignment we will simply use the 0/1 loss. This means that the empirical risk is simply the fraction of errors we make. If {(X'_i, Y'_i)}_{i=1}^m denotes the pairs of features/labels in a test set and {Ŷ'_i}_{i=1}^m denotes the corresponding labels inferred by the k-NN rule, then the empirical risk on the test set is given by

\frac{1}{m} \sum_{i=1}^{m} \mathbf{1}\{ \hat{Y}'_i \neq Y'_i \} .
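In code, this test-set risk is just the mean of indicator values; a one-line sketch (illustrative, with hypothetical argument names):

```python
import numpy as np

def empirical_risk(y_pred, y_true):
    """0/1 empirical risk: the fraction of examples whose inferred
    label differs from the true label."""
    return float(np.mean(np.asarray(y_pred) != np.asarray(y_true)))
```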

PART I - Computational Assignment

The goal of the first part of the assignment is to implement “from scratch” a nearest neighbor classifier. This means you are not allowed to use existing libraries and implementations of nearest neighbors, and should only make use of standard data structures and mathematical operations3. The only “exception” is that you are allowed to use a sorting subroutine (i.e., a function that, given a vector of numerical values, sorts them in ascending order and returns the corresponding reordered indices). The rationale for these restrictions is for you to experience the critical aspects of your implementation, and to understand whether it is scalable to big datasets. For this assignment you are allowed to use any language or command interpreter (preferably a high-level language, but not necessarily so). You will not be judged on your code, but rather on your choices and the corresponding justification.

You should prepare a report (in English) and upload it via CANVAS. The report should

be self-contained, and you should pay attention to the following points:

• The report should feature an introduction, explaining the problem and methodology.

• Use complete sentences: there should be a coherent story and narrative - not simply

numerical answers to the questions without any criticism or explanation.

• Pay attention to proper typesetting. Use a spelling checker. Make sure figures and

tables are properly typeset.

3This means that libraries providing useful data structures are allowed, as long as they are not specifically targeting nearest neighbors.


• It is very important for you to have a critical attitude, and to comment on your choices and results.

The report for part I should be submitted as a single .pdf file. In addition, submit a

separate .pdf file with the code/script you used (you will not be graded on this, but if

needed we might look at it to better understand the results in your report). In your report

you should do the following experiments and answer the questions below.

a) Write down your implementation of the k-NN rule (using as training data MNIST train small.csv) and report on its accuracy in predicting the labels of both the training and test sets (respectively MNIST train small.csv and MNIST test small.csv). For this question use the simple Euclidean distance. Make a table of results for k ∈ {1, . . . , 20}, plot the empirical training and test loss as a function of k, and comment on your results. Explain how ties are broken in Equation 1.

b) Obviously the choice of the number of neighbors k is crucial to obtain good per-

formance. This choice must be made WITHOUT LOOKING at the test dataset.

Although one can use rules-of-thumb, a possibility is to use cross-validation. Leave-

One-Out Cross-Validation (LOOCV) is extremely simple in our context. Implement

LOOCV to estimate the risk of the k-NN rule for k ∈ {1, . . . , 20}. Report these

LOOCV risk estimates4 in a table and plot them, as well as the empirical loss on the test

dataset (that you obtained in (a)). Given your results, what would be a good choice

for k? Comment on your results.
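For k-NN, LOOCV is cheap because one sorted distance list per training point serves every value of k at once: leave point i out, take its neighbors among the remaining points, and update running vote counts as k grows. A sketch under those assumptions (illustrative code of my own; labels are assumed to be integers in {0, . . . , 9}, and ties again go to the smallest class):

```python
import numpy as np

def loocv_risks(X, y, k_max):
    """Leave-one-out risk estimates of the k-NN rule for k = 1, ..., k_max,
    using a single distance sort per held-out point for all values of k."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = len(y)
    errors = np.zeros(k_max)
    for i in range(n):
        d = np.sum((X - X[i]) ** 2, axis=1)
        order = np.argsort(d, kind="stable")
        order = order[order != i][:k_max]  # drop the held-out point itself
        votes = np.zeros(10)               # running vote count per class
        for k in range(1, k_max + 1):
            votes[y[order[k - 1]]] += 1    # add the k-th neighbor's vote
            if np.argmax(votes) != y[i]:   # argmax breaks ties low
                errors[k - 1] += 1
    return errors / n
```

With 60000 training points this inner loop over i is the expensive part, which is exactly what questions (e) and (g) probe.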

c) Obviously, the choice of distance metric also plays an important role. Consider a

simple generalization of the Euclidean distance, namely ℓ_p distances (also known as

Minkowski distances). For x, y ∈ Rl define

d_p(x, y) = \left( \sum_{i=1}^{l} |x_i - y_i|^p \right)^{1/p} ,

where p ≥ 1. Use leave-one-out cross validation to simultaneously choose a good value

for k ∈ {1, . . . , 20} and p ∈ [1, 15].
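The distance d_p translates directly to code (an illustrative sketch; note also that p ∈ [1, 15] is a continuous range, so in practice one would cross-validate over a finite grid of p values):

```python
import numpy as np

def minkowski(x, y, p):
    """l_p (Minkowski) distance between two vectors, for p >= 1.
    p = 2 recovers the Euclidean distance, p = 1 the Manhattan distance."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))
```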

d) (this question is more open) Building on your work from the previous questions, suggest a different distance metric or some pre-processing of the data that you consider appropriate to improve the performance of the k-NN method. Note that any choices

you make should be done solely based on the training data (that is, do not clairvoyantly

optimize the performance of your method on the test data). Clearly justify ALL the

choices made and describe the exact steps you took. Someone reading your report

should be able to replicate your results.

4Recall that these estimates use only the information on the training dataset.


Now that you have implemented and tested your methodologies at a smaller scale, let us see

how these methods scale to the full datasets. For the remaining questions you will use the

full MNIST training and test sets.

e) Make use of either the Euclidean distance or d_p with your choice of p in part (c)

(use only one or the other). Determine a good value for k using leave-one-out cross

validation when considering the full training set (60000 examples). Was your imple-

mentation able to cope with this large amount of data? Did you have to modify it

in any way? If so, explain what you did. What is the risk estimate you obtain via

cross-validation?
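One common way to cope with 60000 training points (a suggestion of my own, not prescribed by the assignment) is to compute distance matrices in blocks of queries, using the expansion ‖q − x‖² = ‖q‖² − 2 q·x + ‖x‖² so that each block reduces to a matrix product rather than a large three-dimensional broadcast:

```python
import numpy as np

def chunked_sq_dists(Q, X, chunk=256):
    """Squared Euclidean distances from each row of Q to each row of X,
    computed one block of query rows at a time to bound peak memory."""
    Q = np.asarray(Q, dtype=float)
    X = np.asarray(X, dtype=float)
    x_sq = np.sum(X ** 2, axis=1)              # precompute ||x||^2 once
    out = np.empty((Q.shape[0], X.shape[0]))
    for start in range(0, Q.shape[0], chunk):
        q = Q[start:start + chunk]
        # ||q - x||^2 = ||q||^2 - 2 q.x + ||x||^2, as a matrix product
        out[start:start + chunk] = (
            np.sum(q ** 2, axis=1)[:, None] - 2.0 * (q @ X.T) + x_sq[None, :]
        )
    return out
```

One caveat worth documenting if you use this trick: the subtraction can introduce tiny negative values through floating-point rounding, so clipping at zero before any square root is prudent.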

f) (it is only possible to answer this question after I provide you with the file

MNIST test.csv) Using the choice of k in part (e) compute the loss of your method

on the test set provided. How does this compare with the cross-validation estimate

you computed in (e)? Would you choose a different value for k had you been allowed

to look at the test dataset earlier?

g) Bonus question: each training example is currently a high-dimensional vector. A

very successful idea in machine learning is that of dimensionality reduction. This is

typically done in an unsupervised way - feature vectors are transformed so that most

information is preserved, while significantly lowering their dimension. A possibility in

our setting is to use Principal Component Analysis (PCA) to map each digit image

to a lower dimensional vector. There is an enormous computational advantage (as

computing distances will be easier) but there might also be an advantage in terms

of statistical generalization. Use this idea in our setting, and choose a good number

of principal components to keep in order to have good accuracy (again, this choice

should be solely based on the training data). Document clearly all the steps of your

procedure. In this question you are allowed to use an existing implementation of PCA

or related methods.
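Since an existing PCA implementation is allowed here, the idea can equally be sketched directly with an SVD (illustrative code of my own; the essential point is that the mean and the principal directions are fit on the training data only, then applied unchanged to the test data):

```python
import numpy as np

def pca_project(X_train, X_test, n_components):
    """Fit PCA on the training images only, then project both sets onto
    the top n_components principal directions."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    mean = X_train.mean(axis=0)
    # Rows of Vt are the principal directions of the centered training data,
    # ordered by decreasing singular value (i.e., decreasing variance).
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    W = Vt[:n_components].T
    return (X_train - mean) @ W, (X_test - mean) @ W
```

k-NN then runs on the projected vectors; keeping all components is an isometry of the centered data, so truncation only discards the low-variance directions.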

IMPORTANT: if for some reason you are unable to make things work for the large datasets, use instead the first 30000 rows of MNIST train.csv as training data and the first 5000 rows of MNIST test.csv for testing.
