Mini-Project 2 (due W 3/24 by 11:59pm)

Instructions:

You may work with other students to complete the assignment, but each student needs to turn in

his or her own work (e.g., it is ok to discuss how to write the code to solve a question, but it is not

ok to copy and paste each other’s code). Show all work to receive full credit. For questions that ask

you to calculate something, I need to see the full calculation, not just the answer. Note that the

“calculation” will often (almost always) be done using code, so I would need to see the code that

leads up to and produces the final answer. Similarly, for questions that require software to perform

an analysis or generate a plot, I need to see the R code that produced the results. You may attach

relevant R code as an Appendix at the end of the assignment, or include the code as part of your

answer to the question that the code supports.

You should upload your assignment solution as a single pdf to the “Assignments” section of our

course in Canvas. Click on the name of the assignment, then click the “Submit Assignment” button,

then upload the file containing your solution, then click “Submit Assignment” a final time. The

filename should be in the format LastName_FirstName_ProjectNumber.pdf. For example, if I were

submitting the assignment, I would name it poythress_jc_proj2.pdf.

Questions:

1. We will analyze a dataset containing information about the number of FEMA buyouts of

flood-prone properties from 1989 to 2016, which can be downloaded from Canvas in the file

fema_buyouts.csv. The dataset contains information at both the county and state level, but

we will focus on the county-level data. The response variable of interest is NumBuyouts. Of

interest is how certain socioeconomic and demographic factors are associated with the number

of buyouts. The covariates of interest are:

ALAND: land area in m²

AWATER: water area in m²

FDD: number of federal disaster declarations

FDD_IA: number of federal disaster declarations with individual assistance

CountyIncome: average household income

CountyEducation: proportion with high school education

CountyRace: proportion white

CountyPopulation: total population

CountyPopDens: population density

CountyLanguage: proportion proficient in English

(a) Make a scatterplot matrix of the covariates FDD–CountyLanguage. Do any of the covariates

appear correlated with one another? If so, which ones, and what effect might correlation


among the covariates have on the analysis? Do any observations have relatively large values

of a particular covariate? Should we consider transforming those covariates? If yes, why?

And which transformations should we consider?

Also make scatterplots of ln(NumBuyouts+1) vs. ln(ALAND) and ln(NumBuyouts+1) vs.

ln(AWATER). We might argue that either ALAND or AWATER should be treated as an exposure

variable, since either could serve as a proxy for the number of properties at risk of flooding.

Does the relationship between NumBuyouts and ALAND or AWATER suggest that either should

be treated as an exposure variable?
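A minimal sketch of the plots described above, assuming fema_buyouts.csv has been downloaded to the working directory; how the county-level rows are separated from the state-level rows depends on the file's structure, so adjust the subsetting as needed:

```r
# Sketch only: assumes the rows returned by read.csv are (or have been
# subset down to) the county-level records.
dat <- read.csv("fema_buyouts.csv")

# Scatterplot matrix of the covariates FDD through CountyLanguage
pairs(dat[, c("FDD", "FDD_IA", "CountyIncome", "CountyEducation",
              "CountyRace", "CountyPopulation", "CountyPopDens",
              "CountyLanguage")])

# Log-log plots against the two candidate exposure variables
plot(log(dat$ALAND), log(dat$NumBuyouts + 1),
     xlab = "ln(ALAND)", ylab = "ln(NumBuyouts + 1)")
plot(log(dat$AWATER), log(dat$NumBuyouts + 1),
     xlab = "ln(AWATER)", ylab = "ln(NumBuyouts + 1)")
```

A roughly linear trend with slope near 1 in one of the log-log plots would support treating that variable as an offset, since an offset fixes the coefficient of the logged exposure at 1.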

(b) For now, don’t treat ALAND or AWATER as an exposure variable (i.e., don’t use an offset).

Assume the response NumBuyouts is a Poisson random variable and fit a Poisson regression

model. Use all of the covariates ALAND, AWATER, FDD–CountyLanguage, and any transformations of the covariates you wish; then perform model selection to find the best model possible

for the response. You may also construct new variables from combinations of two or more

variables [e.g., AWATER/(AWATER+ALAND) would represent the proportion of the county that is

water, which may be a relevant covariate]. You can use whichever model selection algorithm

and criterion for the “best model” you prefer, but you should justify your choices.

Does the final model you selected appear to fit the data well? If your final model happened

to include either ALAND or AWATER, or some function of one of them, does it suggest one or the

other should be treated as an exposure variable?
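One way to set up the full Poisson fit and a selection pass; the particular log transformations and the derived proportion-water covariate are illustrative choices, not requirements:

```r
dat <- read.csv("fema_buyouts.csv")

# Illustrative derived covariate: proportion of the county that is water
dat$PropWater <- dat$AWATER / (dat$AWATER + dat$ALAND)

fit_full <- glm(NumBuyouts ~ log(ALAND) + log(AWATER) + FDD + FDD_IA +
                  CountyIncome + CountyEducation + CountyRace +
                  log(CountyPopulation) + log(CountyPopDens) +
                  CountyLanguage + PropWater,
                family = poisson, data = dat)

# Backward selection by AIC (one choice among many; justify whichever you use)
fit_sel <- step(fit_full, direction = "backward", trace = 0)
summary(fit_sel)

# Rough lack-of-fit check: residual deviance relative to its degrees of freedom
fit_sel$deviance / fit_sel$df.residual
```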

(c) Make a histogram of NumBuyouts. Are there any unusual features apparent in the histogram

(where “unusual” is in the context of assuming the counts follow a Poisson distribution)? How

might the distribution of the counts be related to the lack-of-fit you may have encountered

for the models you fit in the previous part? [Hint: You may want to make a custom set

of breaks in the histogram, because the features may be difficult to see using the default

breakpoints.]
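A sketch of the histogram with unit-width bins centered on the integers, plus a quick numeric comparison of the observed zeros against what a Poisson distribution with the same mean would predict:

```r
dat <- read.csv("fema_buyouts.csv")

# Unit-width bins centered on each integer count, so low counts are visible
hist(dat$NumBuyouts,
     breaks = seq(-0.5, max(dat$NumBuyouts) + 0.5, by = 1),
     main = "NumBuyouts", xlab = "Number of buyouts")

# Observed proportion of zeros vs. the Poisson prediction at the sample mean
mean(dat$NumBuyouts == 0)
dpois(0, lambda = mean(dat$NumBuyouts))
```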

(d) Use the zeroinfl function from the pscl package to fit a zero-inflated Poisson (ZIP) model.

As in part (b), fit a model with many/all of the covariates and transformations of the covariates, then perform model selection to find a reduced model that includes some subset of

those covariates. The model you select for the count part of the model need not include the

same covariates as the model you select for the zero-inflated part of the model.

Some hints:

Refer to Faraway’s example of fitting a ZIP model in R.

If you fit a model, look at the summary, and see NAs for the SEs, Z values, and P-values,

try standardizing the covariates first.

The step function appears to work for the object returned by zeroinfl, but only for

the count part of the model. You might consider removing covariates “by hand” and


using LRTs to justify their removal for the zero-inflated part of the model (like Faraway

did).
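Following the hints, a ZIP fit with standardized covariates and a “by hand” LRT for the zero part might look like the sketch below. The covariate subsets in the two parts of the formula are illustrative stand-ins, not a recommended model:

```r
library(pscl)

dat <- read.csv("fema_buyouts.csv")
covs <- c("FDD", "FDD_IA", "CountyIncome", "CountyEducation", "CountyRace",
          "CountyPopulation", "CountyPopDens", "CountyLanguage")
dat[covs] <- scale(dat[covs])  # standardize, per the hint about NA SEs

# zeroinfl formula syntax: count model | zero-inflation model
zip_full <- zeroinfl(NumBuyouts ~ FDD + FDD_IA + CountyIncome + CountyPopDens |
                       FDD + CountyIncome + CountyPopulation,
                     data = dat)
summary(zip_full)

# "By hand" LRT for dropping one term from the zero-inflated part:
# refit without it and compare log-likelihoods
zip_red <- zeroinfl(NumBuyouts ~ FDD + FDD_IA + CountyIncome + CountyPopDens |
                      FDD + CountyIncome,
                    data = dat)
lrt_stat <- 2 * (logLik(zip_full) - logLik(zip_red))
pchisq(as.numeric(lrt_stat), df = 1, lower.tail = FALSE)
```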

(e) Even though the ZIP model accounts for lack-of-fit due to excess zeros, it’s still possible

that the counts are overdispersed (or that the mean structure of the model is misspecified,

even after model selection). We should somehow check for lack-of-fit before interpreting

the model or drawing any conclusions. The zeroinfl function can also fit a zero-inflated

negative binomial (ZINB) model by changing the dist argument.

Fit a ZINB model with the same set of covariates in the count and zero-inflated parts of

the model that were in the final model you selected in the previous part. Presumably most

or all of the covariates you selected in your final ZIP model were significant. Are they still

significant in the ZINB model?

We discussed comparing the Poisson vs. NB models through a LRT of the overdispersion

parameter. We could do that for the zero-inflated versions of the models as well if we could

be confident that the asymptotic distribution of the LRT statistic under the null is the

same in the regular and zero-inflated versions of the models. However, I am unsure whether

or not that is the case. Alternatively, we could use AIC to compare the ZIP and ZINB

models. Unfortunately, we would need to know 1) that no constants have been left off the

log-likelihood, 2) that the AIC function counts the number of parameters of zeroinfl objects

properly, and 3) that the zeroinfl function uses the MLE for the estimate of the dispersion

parameter. Again, I am unsure about each of those things. However, the summary of the ZINB

model includes a Wald test for log(theta) (where theta is the overdispersion parameter),

which may not be ideal, but is at least something we can use to determine whether or not

there is evidence of overdispersion. Based on the summary of the fitted ZINB model, is there

evidence for overdispersion in the counts? That is, should we prefer the ZIP model or the

ZINB model for the purpose of drawing conclusions from the data?
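Refitting the same two-part specification as a ZINB requires only changing the dist argument; the covariates below stand in for whichever ones your final ZIP model retained:

```r
library(pscl)
dat <- read.csv("fema_buyouts.csv")

# Same formula as the selected ZIP model, with dist switched to "negbin"
zinb <- zeroinfl(NumBuyouts ~ scale(FDD) + scale(CountyPopDens) |
                   scale(FDD) + scale(CountyIncome),
                 data = dat, dist = "negbin")
summary(zinb)  # the Wald z-test for Log(theta) appears in this output
```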

(f) Suppose we decide that we want to base our conclusions on the ZINB model. If there

are covariates included in the ZINB model fit in the previous part that are not significant,

we should perform model selection once again before interpreting the model. Use whichever

procedure and criterion you prefer to perform model selection on the ZINB fit in the previous

part. Just make sure to clearly state why the criterion justifies removing a covariate from

the model, should you choose to remove any.

(g) Interpret the effect of the covariates in the final model you selected in the previous part. In

particular, how are the covariates included in the zero-inflated part of the model associated

with the odds that a county had no FEMA property buyouts from 1989 to 2016? How are

the covariates included in the count part of the model associated with the mean number of

FEMA property buyouts, among counties that had at least one FEMA buyout? Does the

association between each covariate and the number of FEMA buyouts match your intuition?

[Hint: If your final model includes standardized versions of the covariates, it’s OK to interpret

the covariate effects in loose terms. For example, if Income was included in the model, is an

increase in a county’s average income associated with an increase or decrease in the number

of FEMA buyouts? You don’t need to phrase the interpretation as “for a 1-unit increase in

standardized income, the mean number of FEMA buyouts increases/decreases by a factor


of ....” A precise quantitative interpretation of the effect of a covariate in anything but its

original units makes things messy and complicated and difficult to understand.]
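For the loose interpretations the hint describes, direction and rough magnitude can be read off the exponentiated coefficients; coef() on a zeroinfl object accepts model = "count" or model = "zero". The fitted model here is a small stand-in for your final one:

```r
library(pscl)
dat <- read.csv("fema_buyouts.csv")

zinb <- zeroinfl(NumBuyouts ~ scale(FDD) | scale(CountyPopulation),
                 data = dat, dist = "negbin")

# Multiplicative effect on the mean count, among counties with buyouts
exp(coef(zinb, model = "count"))
# Multiplicative effect on the odds of a structural zero (no buyouts)
exp(coef(zinb, model = "zero"))
```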

2. Do Exercise 2 on page 126 of Faraway.

3. We will analyze a dataset about chronic respiratory disease, which can be downloaded from Can-

vas in the file respire.dat. The dataset has column names, so use the header=TRUE argument

when you read it into R.

The dataset has information on three covariates:

air: air pollution (low or high)

exposure: job exposure (yes or no)

smoking: non-smoker, ex-smoker, or current smoker

The goal is to analyze the covariates’ relationships with the response – the counts of individuals

falling into four chronic respiratory disease categories:

Level 1: no symptoms

Level 2: cough or phlegm < 3 months/year

Level 3: cough or phlegm ≥ 3 months/year

Level 4: cough or phlegm ≥ 3 months/year + shortness of breath

Thus, we have an ordinal multinomial response. Furthermore, we could argue that the categories

arise from a sequential mechanism, so that a continuation ratio logit model would be reasonable.

(a) After reading the data into R, take a look at the dataset to see how it is structured. Now use

the vglm function from the VGAM package to fit parallel and non-parallel versions of the cumu-

lative logit, adjacent category logit, and continuation ratio logit models. Use main effects for

air, exposure, and smoking as the covariates in the models. For each type of model, use a

LRT to determine whether the non-parallel version fits better than the parallel version. Is the

preferred version of the model (parallel vs. non-parallel) consistent across the three different

types of models? How many more parameters do the non-parallel versions of the models have

vs. the parallel versions? [Hint: The model type can be changed via the family argument,

where the names are cumulative, acat, and cratio. The parallel argument controls

which version of the model is fitted. So, for example, the non-parallel version of the continuation ratio logit model would be specified with the argument family=cratio(parallel=F).

Note that we could also change the link argument, but logit is the default for all three

types, so we don’t need to adjust it.]
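A sketch of the parallel vs. non-parallel comparison for one of the three families. The cbind() of four count columns assumes the dataset has one column per response level; level1–level4 are placeholder names to replace with the actual column names in respire.dat:

```r
library(VGAM)
resp <- read.table("respire.dat", header = TRUE)

# level1..level4 are placeholders for the actual response-count columns
form <- cbind(level1, level2, level3, level4) ~ air + exposure + smoking

fit_par  <- vglm(form, family = cumulative(parallel = TRUE),  data = resp)
fit_npar <- vglm(form, family = cumulative(parallel = FALSE), data = resp)

# LRT of parallel (null) vs. non-parallel (alternative)
lrtest(fit_npar, fit_par)

# Repeat with family = acat(...) and family = cratio(...)
```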


(b) For each type of model (cumulative, adjacent category, and continuation ratio), choose

either the parallel version or non-parallel version to proceed with, based on the LRTs from

the previous part. Use the drop1 function to perform LRTs to determine whether any

covariates can be removed from the models. If yes, refit the models with the covariate(s)

removed.

In your final models (there should be one for each of the three types), is there evidence for

lack-of-fit? Report the values of the statistics on which you base your conclusions regarding

lack-of-fit. Why is the statistic you chose an appropriate measure of goodness-of-fit?

(c) For each of the final models chosen in the previous part, interpret the effects of the covariates

on chronic respiratory disease. Note that each type of model involves a subtly different

function of the probabilities of the respiratory disease levels, and your interpretations should

reflect that. [Hint: If you look at the summary of the fitted model, besides the reference

levels for the covariates, it also tells you which linear predictors are being modelled. Not

only should this help you interpret the effects, but you can also reverse the direction of the

probabilities by refitting the model with the reverse=T argument if that leads to a more

convenient way to interpret the effects.]

FYI, if the models you chose were the non-parallel slopes versions, the interpretations can get

quite messy and tedious. In other words, something like “current smokers have higher odds

of more severe respiratory disease” would be too simplistic; it neither takes into account the

non-parallel slopes nor the choice of model for the probabilities. So I am expecting the detail

and nuance of the interpretation to be commensurate with the complexity of the model.

(d) Suppose we wanted to pick just one type of model among the cumulative logit, adjacent

category logit, and continuation ratio logit. How would you compare the three final models

from the previous two parts? LRTs? AIC or BIC? Log-likelihood? Some other method?

Choose a model comparison method and determine which model is preferred among the

three final models you selected in the previous two parts. Justify your choice of model

comparison method by explaining why it is appropriate (and why some of the other choices

are not appropriate).
