CS 8395: Homework 2
Overall directions
Any resource available to you may be used to answer the homework questions, and you may collaborate with
anyone in the class, with the following caveats:
• No collaboration on the “first problem”, the reviews.
• You must record the names of your collaborators at the top of your homework.
• You are strongly encouraged to avoid copying code/answers directly; instead, where possible, write
your own versions! (Copying boilerplate data/plotting code is often most efficient, but you’ll learn
more about the Deep Learning content by implementing those parts yourself.)
This homework is due December 13 at Midnight, via Brightspace. Please zip your code using either
the .zip or .tar.gz formats, and please submit written work in PDF.
Please also include the approximate amount of time you spent on this homework at the top of your
submission.
Reviews
For this “problem”, we will write practice reviews of real ICLR submissions from this year. The selected
submissions are all on topics we’ve covered in class, and all are middle- or lower-tier, meaning some have
good points and bad points, and a few are surely rejects. Please choose one submission to review.
Do not look up the submissions! You can obviously look up their references, and in many cases you
might have to look up background material, but attempting to find the authors, other reviews, or other
versions of the submission is not allowed while reviewing.
Each review should contain the following fields:
1. Paper Summary
2. Strengths and Weaknesses, with at least four sub-items total (can be all strengths or all weaknesses).
3. Clarity (scientific writing, description of their method/experiments, etc.)
4. Quality, Novelty, and Reproducibility (how good is the paper, how new is it relative to the current
literature, and do you think you could reproduce these results)
5. Recommendation: a score out of 10, where a 10 gets a talk at the conference, and a 1 gets an extra
year in grad school.
Papers
• Ti-MAE: Self-supervised Masked Time Series Autoencoders
https://openreview.net/pdf?id=9AuIMiZhkL2
• Rotation Invariant Quantization for Model Compression
https://openreview.net/pdf?id=gurtzTlw6Q
• Neural Decoding of Visual Imagery via Hierarchical Variational Auto-encoders
https://openreview.net/pdf?id=TM9jOSaIzN
• DDM2: Self-Supervised Diffusion MRI Denoising with Generative Diffusion Models
https://openreview.net/pdf?id=0vqjc50HfcC
• Compressive Predictive Information Coding
https://openreview.net/pdf?id=rde9B5ue32F
• The GANFather: Controllable Generation of Malicious Activity to Expose Detection
Weaknesses and Improve Defense Systems
https://openreview.net/pdf?id=9Y0P3YoERSy
• CI-VAE: a Class-Informed Deep Variational Autoencoder for Enhanced Class-Specific
Data Interpolation
https://openreview.net/pdf?id=jdEXFqGjdh
• FARE: Provably Fair Representation Learning
https://openreview.net/pdf?id=vzdrgR2nomD
Upper-quartile (for reference):
If you’re interested in a palate cleanser, I’ve also included two top-tier papers.
• Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve
https://openreview.net/pdf?id=OJ8aSjCaMNK
• Emergence of Maps in the Memories of Blind Navigation Agents
https://openreview.net/pdf?id=lTt4KjHSsyl
U-Nets and Denoising Diffusion Generators
Instead of doing this section, you may do one additional review from the previous section.
Please implement a U-Net denoising diffusion probabilistic model for the MNIST dataset, implementing
the following loss function:
$$L[\hat{\varepsilon}(x_t, t)] = \lVert \hat{\varepsilon}(x_t, t) - \varepsilon \rVert_2^2 \tag{1}$$
Here, $\hat{\varepsilon}$ is the U-Net, which takes as inputs the noisy datapoint $x_t$ and the number of
noising steps $t$. As a reminder, the forward process is
$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \varepsilon \tag{2}$$
$$\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s \tag{3}$$
Here we can choose whatever noise schedule we’d like, so let’s take $\beta_t$ (with $\alpha_t = 1 - \beta_t$)
as a linear function from 0.0001 to 0.01, over 1000 steps. (You can choose this however you’d like, but I
recommend using very small values of $\beta_t$.) I recommend using a U-Net with at least three down/up
convolution block pairs, and using LeakyReLU. Padding the MNIST images to $32 \times 32$ is often helpful here.
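For concreteness, here is a minimal PyTorch sketch of one architecture satisfying the recommendation above (three down/up pairs, LeakyReLU, 32 × 32 single-channel inputs). The class and layer names are placeholders, not a prescribed design, and the time conditioning in particular can be done many ways (this sketch feeds a rescaled $t$ through a small MLP, whereas Ho et al. use sinusoidal embeddings):

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Two 3x3 convs with LeakyReLU; the time embedding enters as a per-channel bias."""
    def __init__(self, c_in, c_out, t_dim):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.t_proj = nn.Linear(t_dim, c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x, temb):
        h = self.act(self.conv1(x))
        h = h + self.t_proj(temb)[:, :, None, None]  # broadcast over H, W
        return self.act(self.conv2(h))

class UNet(nn.Module):
    """Three down/up pairs with skip connections, for 1x32x32 inputs."""
    def __init__(self, t_dim=64):
        super().__init__()
        self.t_mlp = nn.Sequential(nn.Linear(1, t_dim), nn.LeakyReLU(0.1),
                                   nn.Linear(t_dim, t_dim))
        self.d1 = Block(1, 32, t_dim)
        self.d2 = Block(32, 64, t_dim)
        self.d3 = Block(64, 128, t_dim)
        self.mid = Block(128, 128, t_dim)
        self.u3 = Block(128 + 128, 64, t_dim)
        self.u2 = Block(64 + 64, 32, t_dim)
        self.u1 = Block(32 + 32, 32, t_dim)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x, t):
        temb = self.t_mlp(t.float()[:, None] / 1000.0)     # crude rescale of t to ~[0, 1]
        h1 = self.d1(x, temb)                              # 32x32
        h2 = self.d2(self.pool(h1), temb)                  # 16x16
        h3 = self.d3(self.pool(h2), temb)                  # 8x8
        m = self.mid(self.pool(h3), temb)                  # 4x4
        h = self.u3(torch.cat([self.up(m), h3], 1), temb)  # 8x8
        h = self.u2(torch.cat([self.up(h), h2], 1), temb)  # 16x16
        h = self.u1(torch.cat([self.up(h), h1], 1), temb)  # 32x32
        return self.out(h)
```

And a sketch of the noise schedule and training step implementing Eqs. (1)–(3); `q_sample` and `training_step` are hypothetical names, and any module with the `eps_model(x_t, t)` signature works in place of the U-Net above:

```python
# Noise schedule from the text: beta_t linear from 1e-4 to 0.01 over T = 1000
# steps; alpha_t = 1 - beta_t and alpha_bar_t is the running product (Eq. 3).
T = 1000
betas = torch.linspace(1e-4, 0.01, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, eps):
    """Forward process, Eq. (2). t holds integer steps in [0, T)."""
    ab = alpha_bars.to(x0.device)[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

def training_step(eps_model, x0, opt):
    """One gradient step on the loss in Eq. (1), with t and eps drawn at random."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    eps = torch.randn_like(x0)
    loss = ((eps_model(q_sample(x0, t, eps), t) - eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```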
Plot your results using Algorithm 2 of the original paper:
Ho et al. 2020
https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf
(I strongly recommend using GPU-time for training these models).
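Here is a sketch of that sampling loop, under the same assumptions as the training sketch above (the schedule tensors `betas`/`alpha_bars`, and a U-Net trained on 0-indexed steps), using $\sigma_t^2 = \beta_t$, one of the two variance choices discussed in the paper; `sample` is a placeholder name:

```python
import math

@torch.no_grad()
def sample(eps_model, n, device="cuda"):
    """Algorithm 2 of Ho et al. (2020): start from pure noise and denoise step by step."""
    x = torch.randn(n, 1, 32, 32, device=device)
    for t in reversed(range(T)):
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        beta = betas[t].item()
        ab = alpha_bars[t].item()
        t_batch = torch.full((n,), t, device=device, dtype=torch.long)
        # x_{t-1} = (x_t - beta_t / sqrt(1 - abar_t) * eps_hat) / sqrt(alpha_t) + sigma_t * z
        x = (x - beta / math.sqrt(1.0 - ab) * eps_model(x, t_batch)) / math.sqrt(1.0 - beta)
        x = x + math.sqrt(beta) * z
    return x
```

Remember to clamp or denormalize the samples back to your MNIST preprocessing range before plotting.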

