COSC2759 Lab/Tutorial Week 12 (2023)
Goals of this lab
• Get hands-on experience with Helm.
• See how YAML manifests packaged with software can be used to configure
supporting software such as nginx or even cloud resources such as Elastic Load
Balancer.
• Reflect on what you have learned during the semester.
Overview
We will use kubectl and helm to deploy a sample web app behind an nginx ingress controller
and an AWS Elastic Load Balancer. We will barely scratch the surface of the complexity
possible with this kind of configuration, but we will demonstrate some important concepts.
Deploy Kubernetes (EKS)
We will deploy an Elastic Kubernetes Service (EKS) cluster with a simple EC2 node pool, as
we did in week 8’s lab. Summary instructions are provided here, but for detailed instructions
and screenshots refer to week 8’s lab.
1. Start your AWS learner lab, and open the AWS console.
2. Go to the “Elastic Kubernetes Service” portal.
3. Click the “Add cluster” -> “Create” button.
4. Fill in the form/wizard to configure the cluster:
a. Pick a name for your cluster, e.g. “patcluster” if your name is Pat.
b. Leave the Kubernetes version as “1.26”.
c. “Cluster service role” should be pre-populated as “LabRole”. This is the IAM
role that the Kubernetes control plane will use to talk to AWS API’s (e.g. to
manage load balancers).
d. Under networking, leave the VPC as “default” and uncheck the subnets for
us-east-1{d,e,f}. Three availability zones are enough for us.
e. Accept the default “add-ons”.
f. Finally, click “Create” to create your cluster.
5. Wait for your cluster to be created. It can take up to 10 minutes to create the cluster.
It will have status “Creating” during this time.
6. Go to the Compute tab, scroll down to Node groups and click “Add node group”.
7. Give the node group any old name, and assign the “LabRole” role to it (that’s the only
IAM role you have available in the learner lab environment).
8. You can accept the defaults for the rest of the node group wizard, and finally “Create”
your node group.
9. Wait for up to 10 minutes while the node group is created. The node group’s status
will show as “Creating” during this time.
10. Once the node group’s status says “Active”, congratulations, you now have your
Kubernetes cluster.
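As an aside, everything the wizard just did can also be driven from the `aws` CLI. The following is only an illustrative sketch, not part of the lab instructions: the cluster/node group names are examples, and `<ACCOUNT_ID>` and the subnet IDs are placeholders you would substitute with your own values.
# Illustrative sketch only -- the console wizard above is the supported path.
aws eks create-cluster \
  --region us-east-1 \
  --name patcluster \
  --kubernetes-version 1.26 \
  --role-arn arn:aws:iam::<ACCOUNT_ID>:role/LabRole \
  --resources-vpc-config subnetIds=<SUBNET_ID_1>,<SUBNET_ID_2>,<SUBNET_ID_3>
# Once the cluster is active, create the node group.
aws eks create-nodegroup \
  --region us-east-1 \
  --cluster-name patcluster \
  --nodegroup-name patnodes \
  --node-role arn:aws:iam::<ACCOUNT_ID>:role/LabRole \
  --subnets <SUBNET_ID_1> <SUBNET_ID_2> <SUBNET_ID_3>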
Configure command-line tools
In this section we will configure the `aws` CLI and install `kubectl` and `helm`. The process
of feeding credentials to the `aws` CLI should be familiar to you by now. You may also
already have `kubectl` installed from week 8’s lab.
1. Go to the learner lab portal and get your AWS credentials (including the session token). Put them in `~/.aws/credentials`.
2. Check whether you have `kubectl` installed; if not, install it as follows:
# Install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl
# Make sure the apt keyrings directory exists (it is absent on some Ubuntu versions)
sudo mkdir -p /etc/apt/keyrings
# Download the Kubernetes signing key
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg \
    https://dl.k8s.io/apt/doc/apt-key.gpg
# Register the Kubernetes apt package repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" \
    | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install kubectl
sudo apt-get update
sudo apt-get install -y kubectl
3. Authorise `kubectl` to talk to your EKS cluster:
aws eks --region us-east-1 update-kubeconfig \
--name ${YOUR_CLUSTER_NAME_HERE}
4. Test `kubectl` by running `kubectl get nodes`. You should get a list of the nodes (EC2
instances) in your node pool.
5. Install `helm` using the handy shell script from the Helm project:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
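Optionally, you can sanity-check all three tools before moving on:
# Confirm your AWS credentials are valid
aws sts get-caller-identity
# Confirm kubectl can reach the cluster
kubectl get nodes
# Confirm helm is installed and working
helm version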
Deploy nginx ingress controller
In previous labs we have installed nginx as an apt package on various Ubuntu EC2
instances. We have mainly used it to serve simple HTML pages. However, in industry nginx
is more often used as a proxy, passing requests on to backends such as NodeJS apps. In
this capacity nginx offers load balancing, caching, etc. As a small example, in week 10’s lab
we used the `proxy_pass` directive in an nginx config file to pass traffic on to a particular
version of a NodeJS app.
The nginx ingress controller (known as “ingress-nginx”) is, at its core, a way to run nginx on
a Kubernetes cluster, with its configuration file managed through Kubernetes manifests.
Kubernetes offers a resource type called “Ingress”. An “Ingress” represents a set of possible
inbound HTTP requests (e.g. requests for a particular hostname and path) and specifies
which service or pods the requests should be passed to. An “Ingress” resource by itself
simply records a wish that certain requests would be forwarded to a particular destination;
somebody still needs to actually do the forwarding! That’s where the nginx ingress controller
comes in. It reads the Ingress resources and generates an nginx configuration file
accordingly.
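To make this concrete, here is a minimal hypothetical Ingress manifest. It is not one of this lab's files, and “my-service” is a placeholder: it asks for all HTTP requests to be forwarded to port 80 of a Service of that name. Once an ingress controller is running in the cluster, a manifest like this could be applied directly from a heredoc:
# Hypothetical example -- "my-service" is a placeholder, not a lab resource.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
EOF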
All of this is very complicated, but we can skip most of it by just installing a Helm chart
(package) which deploys the nginx ingress controller into our cluster.
1. Tell Helm to recognise the official ingress-nginx repository, so we can install charts
from it:
helm repo add ingress-nginx \
https://kubernetes.github.io/ingress-nginx
2. Create a Kubernetes namespace for the nginx ingress controller to live in, separate
from any applications:
kubectl create ns ingress-nginx
3. Finally install the nginx ingress controller into the new namespace:
helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx
Notice the parallels between installing software on a single computer/VM using apt, and
installing software on a Kubernetes cluster using Helm. See if you can find the resources
created by this Helm chart, e.g. start with `kubectl get pods -A` which should include at least
one ingress-nginx-controller pod.
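For example, the following commands show the Helm release itself and the resources the chart created in its namespace:
# Show the Helm releases installed in the ingress-nginx namespace
helm list --namespace ingress-nginx
# Show the pods, services, deployments etc. the chart created
kubectl get all --namespace ingress-nginx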
For more information about Helm, see https://helm.sh/docs/.
Check out the Elastic Load Balancer
1. Go to your AWS portal. Go to the EC2 portal, and in the left-hand menu scroll down
to “Load balancers”. In the list you will see a (“classic”) Elastic Load Balancer. Where
did that come from? You didn’t create that in the AWS portal, nor did you run any
Terraform to deploy it.
What happened was that as part of the installation of ingress-nginx, a Service was
created called “ingress-nginx-controller".
2. View the ingress-nginx-controller Service with `kubectl get svc -A`.
Observe that the other Services in the cluster have type “ClusterIP”. The
ingress-nginx-controller Service has type “LoadBalancer”. This causes the “load balancer
controller” built into EKS to provision an Elastic Load Balancer for it, allowing external
traffic to be sent into the cluster.
3. Note down the DNS name of the Elastic Load Balancer. You can get this from the
AWS portal, or from `kubectl get svc -A`. You will need this in the next section.
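If you would rather script this step, kubectl can extract just the DNS name using a JSONPath expression:
# Print only the DNS name of the ingress controller's load balancer
kubectl get svc ingress-nginx-controller \
  --namespace ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'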
Deploy sample application
The sample application is very basic. It uses a Docker image called “http-echo” to print the
string “Hello COSC2759” in response to every HTTP request. Our challenge is to get traffic
from the public internet into this app.
1. Clone the “labs” repo from
https://github.com/rmit-computing-technologies/cosc2759-sem1-2023-labs.
If you already have it, you can `git pull` to get the latest version.
Make sure you have the directory `lab12`.
2. `cd` into the `lab12` directory.
3. Open `sample-app.yaml` and observe the Kubernetes resources defined therein:
• Ingress
• Service
• Deployment
4. Deploy the sample app:
kubectl apply -f ./sample-app.yaml
5. Check that the app’s Pod has been created:
kubectl get pods -A
6. Check that the app’s Ingress has been created:
kubectl get ingress -A
7. Visit your new web app in a web browser, at
`http://${LOAD_BALANCER_DNS_NAME_GOES_HERE}`. You should see
the string “Hello COSC2759”.
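You can also test the app from the command line; substitute the load balancer DNS name you noted earlier:
# Should print "Hello COSC2759" (the DNS name below is a placeholder)
curl http://${LOAD_BALANCER_DNS_NAME_GOES_HERE}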
That was a very complicated way to host such a simple web app. However in this case we
simply passed all requests from the external load balancer to the one app. We could have
multiple Ingress resources, so that requests for different paths went to different apps.
From the outside, it would look like one complex app, but internally it would be built up out of
very simple containerised applications, assembled using things like ingress controllers.
Indeed this is one of the foundational ideas of the “microservices” pattern.
For reference documentation on the Ingress resource type, see
https://kubernetes.io/docs/concepts/services-networking/ingress/. For the microservices
pattern, see your local Twitter flamewar thread, debating “microservices vs monoliths”. (You
might also like to look into service meshes. Behold the power of the Kube!)
Apart from the question of microservices, we are also demonstrating here the concept of
“application-driven configuration of infrastructure”. That is, we don’t deploy a load balancer
for its own sake. We only deploy a load balancer as and when an application needs one (as
indicated by its Kubernetes YAML manifests), and the process is automated. This is a very
DevOpsy way to do things.
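As a sketch of that idea: a Service manifest as small as the following hypothetical one (the names are placeholders, not lab resources) is all it takes to make EKS provision a load balancer on an application's behalf:
# Hypothetical sketch -- "example-app" is a placeholder.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: LoadBalancer   # this single line triggers the ELB provisioning on EKS
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 8080
EOF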
Reflection activity
This is the last lab of the semester. In the first lab (week 2), you followed instructions and
used a set of shell scripts from a git repo to configure an Ubuntu VM on your laptop. You have
since seen all kinds of other ways to configure computers to run programs, which is the main
day-to-day task of a DevOps engineer – ideally with as much automation as possible,
although also being careful with complexity and cost.
The following questions may be useful to help you reflect on what you have learned and
where you should go next.
Out of the methods you experienced in the labs, which did you think were most useful or
least useful? In what scenarios would some be more useful than others?
If some tasks were awkward or annoying to perform, how could that be alleviated? E.g.
better tools, better configuration, better documentation?
What do you think will be most important for you to know, going into a career as a DevOps
engineer? How can you get practical, hands-on experience before you enter the workforce?
Finally, I (Pat) would like to thank you for participating in the labs, and to thank the tutors
for helping to smooth over all the rough edges in the instructions. I wish all of you the best
in your ongoing journey in DevOps.

