
Department of Mathematics

MATH96007 - MATH97019 - MATH97097

Methods for Data Science

Years 3/4/5

Coursework 1 – Supervised learning

Submission deadline: Wednesday, 24 February 2021, 5 pm

General instructions

The goal of this coursework is to analyse two datasets using several of the tools and algorithms introduced in the lectures, which you have also studied in detail through the computational tasks in the course.

You will solve the tasks in this coursework using Python. You are allowed to use Python code that you have developed in your coding tasks. You are also allowed to use any other basic mathematical functions contained in numpy. However, importantly, you are not allowed to use any model-level Python packages like sklearn, statsmodels, or similar, for your solutions unless we explicitly state it.

Submission

Your coursework will be presented as a Jupyter notebook (file format: ipynb) with all your tasks clearly labelled. The notebook should contain the cells with your code and their output, and some brief text explaining your calculations, choices, mathematical reasoning, and explanation of results. The notebook must be run and the outputs of cells printed. You may produce your notebook with Google Colab, but we recommend developing your local Jupyter notebook through the Anaconda environment (or any local Python environment) installed on your computer.

Once you have executed all cells in your notebook and their outputs are printed, also save the notebook as an html file, which you will also submit.

Submission instructions

The submission will be done online via Turnitin on Blackboard.

The deadline is Wednesday, 24 February 2021 at 5 pm.

You will upload two documents to Blackboard, wrapped into a single zip file:

1) Your Jupyter notebook as an ipynb file.

2) Your notebook exported as an html file.

You are also required to comply with these specific requirements:

● Name your zip file as ‘SurnameCID.zip’, e.g. Smith1234567.zip. Do not submit multiple files.

● Your ipynb file must produce all plots that appear in your html file, i.e., make sure you have run all cells in the notebook before exporting the html.

● Use clear headings in your notebook to indicate the answers to each question, e.g. ‘Task 1.1’.

Note about online submissions: There are known issues with particular browsers (or settings with cookies or popup blockers) when submitting to Turnitin. If the submission 'hangs', please try another browser. You should also check that your files are not empty or corrupted after submission.

To avoid last-minute problems with your online submission, we recommend that you upload versions of your coursework early, before the deadline. You will be able to update your coursework until the deadline, but having these early versions provides you with a safety backup.

Needless to say, projects must be your own work: you may discuss the analysis with your colleagues, but the code, writing, figures and analysis must be your own. The Department may use code profiling and tools such as Turnitin to check for plagiarism, as plagiarism cannot be tolerated.

Marks

The coursework is worth 40% of your total mark for the course.

This coursework contains a mastery component for MSc and 4th year MSci students.

Guidance about solutions and marking scheme:

Coursework tasks are different from exams: sometimes they can be more open-ended and may require going beyond what we have covered explicitly in lectures. In some parts of the tasks, initiative and creativity will be important, as is the ability to pull together the mathematical content of the course, drawing links between subjects and methods, and backing up your analysis with relevant computations that you will need to justify.

To gain the marks for each of the Tasks you are required to:

(1) complete the task as described;

(2) comment any code so that we can understand each step;

(3) provide a brief written introduction to the task explaining what you did and why you did it;

(4) provide appropriate, relevant, clearly labelled figures documenting and summarising your findings;

(5) provide an explanation of your findings in mathematical terms based on your own computations and analysis

and linking the outcomes to concepts presented in class or in the literature;

(6) consider summarising the results of different methods and options with judicious use of summary tables.

The quality of presentation and communication is very important, so use good combinations of tables and figures to present your results, as needed.

Explanation and understanding of the mathematical concepts are crucial.

Competent Python code is expected. As stated above, you are allowed to use your own code and the code developed in the coding tasks in the course, but you are not allowed to use Python packages like sklearn, statsmodels, etc. unless explicitly stated.

Hence marks will be reserved and allocated for: presentation; quality of code; clarity of arguments; explanation of choices made and alternatives considered; mathematical interpretation of the results obtained; as well as additional relevant work that shows initiative and understanding beyond the task stated in the coursework.

Note that the mere addition of extra calculations (or ready-made 'pipelines') that are unrelated to the task, without a clear explanation and justification of your rationale, will not be beneficial in itself and can, in fact, be detrimental if it reveals a lack of understanding.

Overview of the coursework

In this first coursework, you will work with two different datasets of high-dimensional samples:

● a housing market dataset

● a collection of credit applications

You will perform a regression task with the former, and a binary classification task with the latter.

Task 1: Regression (50 marks)

Dataset: Your first task deals with a modified dataset that we have prepared based on a collection of household information over various locations across Boston in the US. Each sample in the dataset corresponds to a household characterised by 18 features, from per capita crime rate to proportion of non-retail business acres in the town of the household. We will take one of these features (namely, the median value of owner-occupied homes in US$ 1000s) as the target variable, and we use the other 17 features as predictors.

This modified Boston housing dataset has been split into a training set and a test set, and is made available to you on Blackboard as regression_train.csv and regression_test.csv. The test set should not be used in any learning or tuning of the models. It should only be used a posteriori to support your conclusions and to evaluate the out-of-sample performance of your models.

Questions:

1.1 Linear regression (10 marks)

1.1.1 - For the modified Boston housing data set, obtain a linear regression model to predict the median value of owner-occupied homes in USD 1000's as your target variable using all the other features as predictors. Report the parameters of the model and the in-sample mean squared error (MSE) from the training set.

1.1.2 - Use the model on the test data to predict the target variable, and compute the out-of-sample MSE on the test set. Compare the out-of-sample and the in-sample MSE, and explain your answer.
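Since model-level packages are not allowed, the linear regression above can be fitted with numpy alone via the least-squares solution of the normal equations. The sketch below is purely illustrative (the function names and the bias-column convention are our own, not a required interface):

```python
import numpy as np

def fit_linear_regression(X, y):
    """Ordinary least squares with an intercept.
    X: (n, p) feature matrix; y: (n,) targets.
    Returns weights of shape (p+1,), bias term first."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
    # lstsq is numerically safer than inverting X^T X explicitly
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def mse(X, y, w):
    """Mean squared error of the fitted weights on (X, y)."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return np.mean((Xb @ w - y) ** 2)
```

Applying `mse` once to the training set and once to the test set gives the in-sample and out-of-sample errors asked for in 1.1.1 and 1.1.2.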

1.2 Ridge regression (20 marks)

1.2.1 - Repeat task 1.1.1 employing ridge regression, using a 5-fold cross validation algorithm to tune the ridge model on the set regression_train.csv. Use one of the folds to demonstrate with plots how you scan the penalty parameter of the ridge model to find the optimal value of the penalty by examining the MSE on the corresponding validation subset. Report the values of the penalty parameter obtained for the five folds.

1.2.2 - Obtain the average in-sample MSE and out-of-sample MSE (on the test set regression_test.csv) over the 5 folds, and compare these values to the results found in 1.1.2 using linear regression. Explain the differences observed between the two methods, justifying your answer in terms of particular predictors of interest.
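A possible numpy-only skeleton for the 5-fold scan of the ridge penalty is sketched below. The fold construction, penalty grid and names are illustrative assumptions, not a prescribed interface; you should adapt and justify your own version:

```python
import numpy as np

def fit_ridge(X, y, lam):
    """Ridge regression: solve (X^T X + lam*P) w = X^T y,
    leaving the intercept unpenalised."""
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])
    P = lam * np.eye(p + 1)
    P[0, 0] = 0.0  # do not penalise the bias term
    return np.linalg.solve(Xb.T @ Xb + P, Xb.T @ y)

def five_fold_ridge(X, y, lambdas, seed=0):
    """For each of 5 folds, return the penalty minimising validation MSE."""
    idx = np.random.default_rng(seed).permutation(X.shape[0])
    folds = np.array_split(idx, 5)
    best = []
    for k in range(5):
        val = folds[k]
        tr = np.concatenate([folds[j] for j in range(5) if j != k])
        errs = []
        for lam in lambdas:
            w = fit_ridge(X[tr], y[tr], lam)
            Xv = np.hstack([np.ones((len(val), 1)), X[val]])
            errs.append(np.mean((Xv @ w - y[val]) ** 2))
        best.append(lambdas[int(np.argmin(errs))])
    return best
```

Plotting `errs` against `lambdas` for one fold gives exactly the penalty-scan figure requested in 1.2.1.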

1.3 Regression with k nearest neighbours (kNN) (20 marks)

1.3.1 - Repeat task 1.2.1 employing the kNN algorithm as a regression model. Tune your kNN model using a 5-fold cross validation strategy on the same splits as in 1.2.1. Using one of the folds: (i) demonstrate the process by which you iterate over a range of k to find your optimal value within this range; (ii) examine both the MSE and the distribution of the errors obtained on the corresponding validation subset. Explain your results.

1.3.2 - Obtain the average in-sample MSE and out-of-sample MSE (on the test set regression_test.csv) over the 5 folds, and compare these values to the results obtained in 1.1 and 1.2. Compare the observed performance of the kNN algorithm to the models in 1.1 and 1.2, considering the homogeneity of the data and the possible nonlinearity of the descriptors.
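The kNN regressor itself needs only a few lines of numpy; a minimal sketch (names illustrative, Euclidean distance assumed) is:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k):
    """kNN regression: predict the mean target of the k nearest
    training points under Euclidean distance."""
    # pairwise squared distances, shape (n_query, n_train)
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    nn = np.argsort(d2, axis=1)[:, :k]  # indices of the k nearest neighbours
    return y_train[nn].mean(axis=1)
```

Wrapping this in the same 5-fold loop as in 1.2.1, but scanning over k instead of the ridge penalty, yields the tuning curve requested in 1.3.1. Note that kNN is sensitive to feature scales, so you may want to consider standardising the predictors and justifying that choice.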

Task 2: Classification (50 marks)

Dataset: Your second dataset is a modified dataset based on a collection of credit applications to an unknown bank. You have information on 11 different descriptors (or features) of each applicant, together with the banker's decision on whether or not the applicant is granted credit. We will take the 11 features as numerical predictors in our classification tasks.

The data set is accessible on Blackboard and has been split into a training set and a test set: classification_train.csv and classification_test.csv. Again, the test set should only be used a posteriori to support your conclusions and to evaluate the out-of-sample performance of your models.

Questions:

2.1 Logistic regression (10 marks)

2.1.1 - Train a logistic regression classifier on your training data with gradient descent for 5000 iterations. Demonstrate that you have used a grid-search with 5-fold cross validation to find your optimal set of hyperparameters (learning rate, decision threshold).

2.1.2 - Compare the performance of your optimal model on the training data and on the test data by their mean accuracies.
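The core of 2.1.1 is batch gradient descent on the cross-entropy loss; a minimal numpy sketch (names and defaults illustrative, 0/1 labels assumed) is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, n_iter=5000):
    """Logistic regression by batch gradient descent on the
    average cross-entropy loss. y must be 0/1 labels."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        # gradient of the mean cross-entropy: X^T (sigma(Xw) - y) / n
        grad = Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
        w -= lr * grad
    return w

def predict_logistic(X, w, threshold=0.5):
    """Classify by thresholding the predicted probability."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ w) >= threshold).astype(int)
```

The grid-search then loops `lr` and `threshold` over chosen grids, scoring each pair on the 5 validation folds.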

2.2 Random forest (20 marks)

2.2.1 - Train a random forest classifier on the training data. You should use the same 5-fold cross validation subsets to explore and optimise over suitable ranges of the following hyperparameters: (i) number of decision trees; (ii) depth of trees; (iii) maximum number of descriptors (features) randomly chosen at each split. Use cross-entropy as your information criterion for the splits.

2.2.2 - Compare the performance of your optimal model on the training data and on the test data using different measures computed from the confusion matrix.
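The cross-entropy split criterion needs no external packages. As a minimal sketch for binary labels (function names illustrative; entropy here is in nats), the impurity and the information gain of a candidate split are:

```python
import numpy as np

def cross_entropy_impurity(y):
    """Cross-entropy (Shannon entropy) of binary labels y, in nats."""
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0  # a pure node has zero impurity
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def split_gain(y, mask):
    """Information gain of splitting labels y by the boolean mask,
    i.e. parent impurity minus the size-weighted child impurities."""
    n = len(y)
    nl = mask.sum()
    if nl in (0, n):
        return 0.0  # degenerate split
    return (cross_entropy_impurity(y)
            - nl / n * cross_entropy_impurity(y[mask])
            - (n - nl) / n * cross_entropy_impurity(y[~mask]))
```

Each node of a tree then chooses, among the randomly sampled features, the threshold maximising `split_gain`.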

2.3 Support vector machines (SVMs) (20 marks)

2.3.1 - This task will deal with two hard margin SVM classifiers: (i) implement the standard linear SVM with hard margin on the training data; (ii) implement a hard margin kernel SVM with a radial basis function (RBF) kernel, and demonstrate that you have used a grid-search with 5-fold cross validation (same folds as above) to find the RBF kernel with the optimal hyperparameter with respect to the F1-score. Compare the results of the linear SVM and the optimal kernel SVM.

2.3.2 - Evaluate the performance of the RBF SVM on the test data over the range of the hyperparameter of the kernel, and represent the results using a receiver operating characteristic (ROC) curve. Use this ROC curve to evaluate the quality of the optimal kernel SVM obtained in 2.3.1 through cross-validation on the training set.
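Whatever solver you use for the hard margin dual, the kernel SVM only ever touches the data through the kernel (Gram) matrix. A minimal numpy sketch of the RBF kernel matrix, with the single hyperparameter gamma to be scanned in 2.3.1 (names illustrative), is:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma):
    """RBF (Gaussian) kernel matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    # expand ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>
    d2 = (np.sum(X1 ** 2, axis=1)[:, None]
          + np.sum(X2 ** 2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clip tiny negatives
```

The hard margin dual is then maximised over the multipliers using this matrix in place of the inner products, and the same function evaluated between support vectors and test points gives the decision values for the ROC curve in 2.3.2.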

Task 3 (mastery component): Sigmoid Kernelised SVM: hard versus soft margin (25 marks)

3.1 Hard margin sigmoid kernel (10 marks)

Train a hard margin kernelised SVM classifier with a sigmoid kernel on the training data. Demonstrate that you have used a grid-search with 5-fold cross validation to find the optimal hyperparameters for this kernel with respect to the F1-score.

3.2 Soft margin sigmoid kernel (15 marks)

Repeat task 3.1 but now using a soft margin objective (instead of hard margin) on the training data. You will need to implement this soft margin SVM yourself. Demonstrate that you have used a grid-search with 5-fold cross validation to find the optimal hyperparameters for this kernel with respect to the F1-score.

Compare the performance of the hard margin vs soft margin versions on the test data, and explain the differences considering your kernel and the data distribution.
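The sigmoid kernel itself is a one-liner in numpy; a minimal sketch with the two hyperparameters to grid-search (parameter names gamma and coef0 are our own labels) is:

```python
import numpy as np

def sigmoid_kernel(X1, X2, gamma, coef0):
    """Sigmoid kernel: K_ij = tanh(gamma * <x_i, x_j> + coef0).
    Note this kernel is not positive semi-definite for all
    (gamma, coef0), which is one reason hard margin training
    can behave poorly with it and the soft margin comparison
    in 3.2 is of interest."""
    return np.tanh(gamma * (X1 @ X2.T) + coef0)
```

Both the hard and soft margin variants in 3.1 and 3.2 reuse this Gram matrix; only the constraint on the dual multipliers (or the slack penalty in the primal) changes between the two objectives.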
