Aim:
1. Openstack(Manual Create Instance)
1.1 Introduction
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacentre, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface (see www.OpenStack.org/ for more details).
OpenStack is an open-source platform for IaaS. The university runs an OpenStack cluster to which we have access. Official documentation is available at docs.cs.cf.ac.uk/notes/using….
At the time of writing the OpenStack deployment consists of over 150 cores of computing power, 300 GB of memory, and nearly 20TB of storage. The provision is shared with research, though teaching has priority.
This OpenStack service (cscloud.cf.ac.uk) must be used for the coursework.
1.2 Setting up your OpenStack account
Some configuration is needed before we can start creating instances.
1.2.1 SSH keys
Firstly, we need to set up an SSH key for communication between your machine and any VM instance that you create (this follows the same idea as the GitLab keys).
We will save a public key in your OpenStack account and then OpenStack will insert this public key into instances that it creates. When you try to SSH into that created instance, you will be able to use the associated private key and the VM can be certain that it is you who is requesting the ssh connection. On your machine create a sensible folder (e.g. ……./DevOps/Keys/OpenStackKey) then open a gitbash shell in that directory.
In the shell, create the rsa key using a sensible name:
ssh-keygen -t rsa -f username_keypair.key
Fix the permissions of the private key:
chmod 400 username_keypair.key
400 means that the owner of the file has read permission, and all other users have no permissions.
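To check the result, list the file; the comment shows the expected mode for 400 (output is illustrative):
# check the key's permissions: 400 appears as r-- for the owner and nothing for group/other
ls -l username_keypair.key
# illustrative output: -r-------- 1 you you 2610 ... username_keypair.key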
This has created the keys on your machine, but we now need to save this on your OpenStack account. Login to your account: cscloud.cf.ac.uk
To store the key:
- Click on Compute > keypairs
- Click on Import Public Key
- Give the keypair a name, e.g. smart_town_keypair
- Key type is ssh
- Paste the key from e.g. smart_town_keypair.key.pub
- Click on Import Keypair
You will be able to use this later.
An easy way to copy and paste the key is to use Git Bash: cd to the key's directory, then pipe the cat command into the clipboard, e.g.,
cat cn_keypair.key.pub |clip
1.2.2 Creating an internal network
If you imagine that your house is the OpenStack cluster and you have 10 or 15 computers in your house, in order for them to communicate with each other and the outside world, you need to set up your home router. All computers in your house connected to that router are on a local area network and have internal IP addresses like 192.168.0.1.
We need the same sort of thing in our OpenStack account: a network that you can connect all the instances to so they can communicate with each other and with the outside world.
You may find that there is already a default network configured on your account, but you can use these instructions if you require separate networks.
- Click on Network > Networks > Create Network
- Give the network a name e.g. username_network
- Give the subnet a name e.g. username_subnet
- Give the subnet the following addresses
- 192.168.0.0/24
- Subnet Details > Set the allocation pool to the following range
- 192.168.0.3,192.168.0.250
- Set the name servers (DNS)
- 10.239.40.2
- 10.239.40.130
- Click create
You will also need to create a router and attach the router to the network via an interface.
Create the router:
- Click on Network > Routers > Create Router
- Give the router a name e.g. username_router
- Set the external network cscloud_priv_floating
- Click create
Set the interface:
- Click on Network > Routers
- Click on the router you created earlier
- Click on the interfaces tab
- Click on add interface
- Set the subnet to the subnet you created earlier
- Click submit
This is a lot of configuration but once done, you would usually never need to recreate it (and as stated above there may already be a default network you can use).
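If you prefer the command line, the same network and router setup can be sketched with the OpenStack CLI after sourcing your RC file (see section 2.2); the names and addresses below are the examples used above, so adjust them to your own:
# create the network and its subnet (example names/addresses from above)
openstack network create username_network
openstack subnet create username_subnet \
  --network username_network \
  --subnet-range 192.168.0.0/24 \
  --allocation-pool start=192.168.0.3,end=192.168.0.250 \
  --dns-nameserver 10.239.40.2 --dns-nameserver 10.239.40.130

# create the router, set its external gateway and attach the subnet
openstack router create username_router
openstack router set username_router --external-gateway cscloud_priv_floating
openstack router add subnet username_router username_subnet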
1.3 Create an instance of a VM:
1.3.1 Details: set the instance name
1.3.2. Source: using Debian 12 Bookworm
1.3.3. Flavour: choose m1.large
1.3.4. Network: choose smart_town_network
1.3.5. Security Groups: default (might change later)
1.3.6. Key pair: smart_town_keypair
1.3.7. Configuration
You can put your script files here (e.g. the Vagrant provisioning script).
Once the instance has been created, it will run through the build script and install MariaDB, Java, Gradle etc. to build your application. (A CLI equivalent of these steps is sketched below.)
1.3.8. Launch Instance
Once successfully launched, you should be able to see this:
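The same instance can also be created from the OpenStack CLI; a minimal sketch, assuming the example names used in this worksheet and that server1.sh is the build script mentioned above:
# boot the instance with the same image, flavour, network, key pair and user-data script
openstack server create SmartTownDebian12 \
  --image "Debian 12 Bookworm" \
  --flavor m1.large \
  --network smart_town_network \
  --security-group default \
  --key-name smart_town_keypair \
  --user-data server1.sh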
1.4 Associate Floating Address
You now have an instance running on OpenStack, but you cannot access it; your network has no connection to the outside world. You need to assign (associate) an IP address to your instance.
Your network will deal with routing the messages from the external IP address to the local area instance. If you have not already obtained IP Addresses for your project you will need to do this.
- Go to the Network tab > Floating IPs > Allocate IP to Project
- Check the pool is cscloud_private_floating
- Click Allocate
At this point you should see an additional IP address in your IP Address list. You can now associate that IP address to your server.
- Click on Compute > Instances
- Click on the drop-down on the right-hand side of your instance
- Click on Associate Floating IP
- Select the IP address
- Click Associate
If successful, you should be able to see the IP address like this (an equivalent CLI sketch follows):
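A sketch of the same allocation and association using the OpenStack CLI (instance name and pool as above; the allocated address is printed by the first command):
# allocate a floating IP from the pool, then attach it to the instance
openstack floating ip create cscloud_private_floating
openstack server add floating ip SmartTownDebian12 <allocated-address>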
1.5 Security group (optional)
You now have a network connected to the internet (within Cardiff Uni domain so there are still certain restrictions) but your network has a firewall so you can control what communication is allowed with your instances.
The security group is like the configuration of a firewall, and you need to allow SSH connections. It is likely that you will have a default security group associated with your instance and that may already have the SSH (port 22) enabled, but you can add many other rules in a similar way.
- Click on Network
- Click on Security Groups
- Choose the security group to edit (or create a new one for use later)
- Click on Manage Rules
- Click on Add Rule
- Select SSH from the rule drop down
- Click Add
You will need to add the server port for your application at some point; Custom TCP rules allow you to specify the required port (an equivalent CLI sketch follows).
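The same rules can be sketched from the CLI, assuming you are editing the default group (8080 shown as the later application port):
# allow SSH now and the application port later
openstack security group rule create --protocol tcp --dst-port 22 default
openstack security group rule create --protocol tcp --dst-port 8080 default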
1.6 Connect to the instance
You can now SSH into your server and “do Linuxy things”. You will need to connect using the secret key that you set up at the beginning of the work sheet.
Open a Git Bash prompt in the key's directory and connect to your instance's associated IP address using the SSH key:
ssh -i username_keypair.key debian@ipaddress
The VM's default username is debian.
1.7 Summary
2. Terraform-Openstack(Auto Create Instance)
2.1 Getting started
We are going to create an Openstack instance with the matrix application installed. Download the Matrix.zip from Learning Central and extract the files into a Terraform project directory. The file structure should resemble the image to the right.
[Attention!!] You need to replace your script file with the server1.sh.
The files are written in HCL (HashiCorp Configuration Language)
Firstly, we will need to configure a set of environment variables so that Terraform can run. Helpfully, OpenStack provides a personalised script file that will set these for us. In cscloud.cf.ac.uk, click on the Project > API Access tab, click on Download OpenStack RC File, and copy it into your Terraform project directory.
Remember that we created an instance manually. We needed to create:
- Network
- Name, subnet name, subnet address, range, name servers.
- Router
- Name, external network, assign subnet (interface).
- SSH key
- Name, key data.
- Security group
- Add SSH rule.
- Instance
- Name, source, flavour, Network, Associate Floating IP
Given that the network, router and SSH key are already set up, this project just configures the security group and the instance. Two files configure the creation of the OpenStack resources.
- plan.tf – containing all the resource configurations,
- variables.tf – to store any variables used.
variables.tf
variable "flavor" { default = "m1.large" }
variable "image" { default = "Debian 12 Bookworm" } # you may need to change this
variable "name1" { default = "SmartTownDebian12" }
variable "keypair" { default = "smart_town_keypair" } # you may need to change this
variable "network" { default = "smart_town_network" } # you need to change this
variable "pool" { default = "cscloud_private_floating" }
variable "server1_script" { default = "./server1.sh" }
variable "security_description" { default = "Terraform security group" }
variable "security_name" { default = "tf_securityMat" }
plan.tf
terraform {
required_version = ">= 0.14.0"
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "~> 1.35.0"
}
}
}
resource "openstack_networking_floatingip_v2" "floating_ip_1" {
pool = var.pool
}
resource "openstack_compute_secgroup_v2" "security_group" {
name = var.security_name
description = var.security_description
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
}
resource "openstack_compute_instance_v2" "instance_1" {
name = var.name1
image_name = var.image
flavor_name = var.flavor
security_groups = [openstack_compute_secgroup_v2.security_group.name]
key_pair = var.keypair
user_data = file(var.server1_script)
network {
name = var.network
}
}
resource "openstack_compute_floatingip_associate_v2" "floating_ip_1" {
floating_ip = openstack_networking_floatingip_v2.floating_ip_1.address
instance_id = openstack_compute_instance_v2.instance_1.id
}
In these files we configure which provider we are using, and set up the security group and the instance. You may also need to specify which network you are using.
resource "openstack_networking_floatingip_v2" "floating_ip_1" {
pool = var.pool
}
We also configure the floating IP address, with var.pool referring to the variable pool in the variables.tf file:
variable "pool" { default = "cscloud_private_floating" }
The security group is configured as an openstack_compute_secgroup_v2 resource. You can duplicate the rule block many times to add more rules e.g. for port 8080.
resource "openstack_compute_secgroup_v2" "security_group" {
name = var.security_name
description = var.security_description
rule {
from_port = 22
to_port = 22
ip_protocol = "tcp"
cidr = "0.0.0.0/0"
}
}
The instance requires a key_pair for SSH access; this is set to the name of a key you added to OpenStack. If you wish to run a script on the instance once it is created, we need to point to the script (user_data). These variables are set in the variables.tf file. If you have more than one network in your OpenStack account, you will need to define the network you are choosing to use.
Remember to change this to your network in the variables.tf file. You will also need to select an appropriate image in the variables file. Copy this precisely from the image names available in the OpenStack web interface.
resource "openstack_compute_instance_v2" "instance_1" {
name = var.name1
image_name = var.image
flavor_name = var.flavor
security_groups = [openstack_compute_secgroup_v2.security_group.name]
key_pair = var.keypair
user_data = file(var.server1_script)
network {
name = var.network
}
}
Finally, in the provided file, we associate the floating IP address with the instance.
resource "openstack_compute_floatingip_associate_v2" "floating_ip_1" {
floating_ip = openstack_networking_floatingip_v2.floating_ip_1.address
instance_id = openstack_compute_instance_v2.instance_1.id
}
2.2 Enable Terraform access to your OpenStack
Open a Git Bash shell in your project directory and set the environment variables (run your OpenStack RC file):
source XXXXXXX-openrc.sh
To initialise the project, type
terraform init
You are now ready to test and apply the terraform files.
We can now run
terraform plan
This will connect to your openstack account and check that all resources can be created. It will list all resources that will be created if you run apply. Check these are all OK. Syntax errors will show up at this stage.
Then run
terraform apply
All the resources will be created. Execution errors will show up at this stage.
Remember: To connect to the instance via ssh you need to open a gitbash prompt in the key’s directory and connect to your instance’s associated IP address using the ssh key.
ssh -i [YOUR_KEY.key] debian@ip_address
Remember: to view logs you may need to ssh in to the VM and run
journalctl
To tear down an instance, run terraform destroy and confirm.
terraform destroy
2.3 Deploy your project
Hints:
- Replace the script.sh with the Vagrant script's bash script contents (keep the #!/usr/bin/bash line).
- Add a security rule to allow port 8080.
- The terraform apply account starts in the / directory under the root user.
- You need to use ~/ to reach the root user's home directory; use this to set up ~/.ssh/known_hosts (see the sketch after this list).
- Install everything else in the /home/debian directory, for convenience.
- Replace the package manager commands specific to Rocky Linux (yum and dnf) with the ones used in Debian (apt or apt-get).
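A minimal sketch of the known_hosts and package-manager hints above, assuming the script runs as root via user_data (the package list is illustrative):
#!/usr/bin/bash
# set up known_hosts for the root user so git clone over ssh does not prompt
mkdir -p ~/.ssh
ssh-keyscan git.cardiff.ac.uk >> ~/.ssh/known_hosts
# Debian uses apt/apt-get instead of yum/dnf (illustrative packages)
apt-get update
apt-get install -y git openjdk-17-jdk mariadb-server
# install/build everything else under /home/debian for convenience
cd /home/debian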
3. Jenkins
3.1 Install Jenkins
Edit your script file serverJenkins.sh.
Find more info here: Debian - Linux operating system (jenkins.io)
#!/usr/bin/bash
echo "---------------This is a test script-----------------"
echo "update logging configuration..."
sudo sh -c "echo '*.info;mail.none;authpriv.none;cron.none /dev/ttyS0' >> /etc/rsyslog.conf"
sudo systemctl restart rsyslog
cd /home/debian
echo "--------------------ls files-------------------"
ls
echo in directory $PWD
echo "installing MariaDB..."
sudo apt-get install mariadb-server -y
sudo systemctl start mariadb
sudo systemctl status mariadb
sudo systemctl enable mariadb
echo "creating mysql_secure_installation.txt..."
touch mysql_secure_installation.txt
cat << 'EOF' >> mysql_secure_installation.txt
n
Y
comsc
comsc
Y
Y
Y
Y
Y
EOF
echo "running mysql_secure_installation..."
sudo mysql_secure_installation < mysql_secure_installation.txt
sudo apt update && sudo apt upgrade -y
sudo apt-get install wget -y
sudo apt-get install unzip -y
sudo apt-get install git -y
sudo apt-get install gnupg2 -y
echo "installing Java 17..."
sudo apt update
sudo apt install openjdk-17-jdk -y
java --version
echo "Link to Jenkins repository"
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
echo "install Jenkins"
sudo apt-get update
sudo apt-get install jenkins -y
echo "installing gitlab server key... has to be added to the jenkins user home (~) dir "
sudo mkdir -p /var/lib/jenkins/.ssh
sudo touch /var/lib/jenkins/.ssh/known_hosts
sudo ssh-keyscan git.cardiff.ac.uk >> /var/lib/jenkins/.ssh/known_hosts
sudo chmod 644 /var/lib/jenkins/.ssh/known_hosts
sudo systemctl start jenkins
systemctl status jenkins
sudo systemctl enable jenkins
3.2 Navigate to Jenkins starting page in the browser
Use the floating IP address you have associated with your Jenkins server instance to navigate to the Jenkins starting page on port 8080.
In this example: http://10.72.97.86:8080/
In the keys folder, SSH into the instance and enter this:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Copy the password from either location and paste it below.
3.2.1 Setup your new Jenkins account
You can leave this as it is (continue with the defaults).
Now your Jenkins is ready.
3.3 Create and configure a new Jenkins job
- Click on "create a job" from the Jenkins homepage.
- Enter the job name (any name you like) > Select Freestyle project > Click OK.
- On General, select Git as the source code management.
- Use your repository link, e.g. git@git.cardiff.ac.uk:c23091223/team-2-smart-towns-test.git
- Click on adding new Credentials and follow the screenshot below. You will enter the private key here and the public key in GitLab as a deploy key.
- Ensure that you have selected the Credentials that you have just set up (git).
- You may also wish to specify a branch or ensure that the default branch (e.g. main) exists, as by default it specifies master.
- Add a build step (Execute shell) and paste in the provided shell script below inside the box, as in the screenshot.
Add a script like the one below:
#!/usr/bin/bash
cd SmartTowns
ls
chmod +x ./gradlew
ls -l ./gradlew
mysql -u root -pcomsc < src/main/resources/schema.sql
mysql -u root -pcomsc < src/main/resources/data.sql
./gradlew clean
./gradlew build
./gradlew bootjar
./gradlew bootrun
./gradlew jacocoTestReport
New version (without gradlew):
#!/usr/bin/bash
cd SmartTowns
ls
#chmod +x ./gradlew
#ls -l ./gradlew
mysql -u root -pcomsc < src/main/resources/schema.sql
mysql -u root -pcomsc < src/main/resources/data.sql
/opt/gradle/gradle-7.6/bin/gradle clean
/opt/gradle/gradle-7.6/bin/gradle build
/opt/gradle/gradle-7.6/bin/gradle bootjar
/opt/gradle/gradle-7.6/bin/gradle jacocoTestReport
Something wrong with the database?
Solution: add this to data.sql and schema.sql:
use smarttowns;
The difference between gradlew and gradle - Stack Overflow (stackoverflow.com)
SAVE the Build Configuration.
3.4 Back to homepage > Click on “Build Now”
3.5 Check the console output for the build
Click on Console Output > Scroll down to the end of the page to verify that the build was successful.
The build is successful.
3.5.1 Verify that SmartTown workspace exists
- Log into server2_instance (SmartTownBuild) via ssh: ssh -i cn_keypair.key debian@10.72.97.86
- Check that SmartTownBuild exists and the application JAR file is generated.
- Check that the code inspection and unit test reports are generated.
3.5.2 Check that the SmartTown database exists
- Log into your Jenkins server instance via SSH in Git Bash.
- Run >> sudo mysql -u root -p
- Check that the databases exist >> show databases;
- Show all the tables >> use smarttowns; show tables;
3.6 Jenkins: Additional Configuration
There are some additional configurations that are really useful for monitoring the project.
3.6.1 Report JUnit (error!!!)
This shows the results of all JUnit tests. There is no extra plugin required. You need to add a “Post Build Action” so go to your project and:
Configure >> Post-build Actions >> add post build action >> Publish Junit test report
Link to the Junit report files:
**/build/test-results/test/*.xml
Hint (error!!! can't find build folder in workspace)
Solution: you need to build the project successfully to get the build folder. You may also need to check that your project stores the files in the default location.
3.6.2 JaCoCo coverage report
We can run the JaCoCo test report from the build script and add a plugin to Jenkins to report on those results.
- Add the JaCoCo plugin to Jenkins
- Dashboard >> Manage Jenkins >> Plugins >> Available plugins >> search for jacoco, check the box and scroll down to install the plugin (without restart)
- Add to the build script
- Go to the project >> Configure >> add a line to the build script:
./gradlew jacocoTestReport
- Add a post-build action
- Configure >> Post-build Actions >> add post-build action >> Record JaCoCo coverage report
- Point to the .exec files: **/build/jacoco/*.exec (check the location in your project)
- You can also add warning points and fail points for the project (when the pipeline is up and running, play with this).
3.6.3 Webhooks to GitLab
So far we need to trigger a build manually by clicking on ‘build now’. But what we really want is for the build to be triggered by a push to Gitlab. We can set this up by asking gitlab to notify Jenkins every time this happens. This is called a webhook.
We need to set up both Jenkins and Gitlab to configure this. docs.gitlab.com/ee/integrat…
Firstly we need to set up Jenkins
- Install the GitLab plugin (Jenkins (Dashboard) > Manage Jenkins > Plugins > Available plugins > search GitLab >> install)
- Configure global options for the GitLab plugin
- Dashboard > Manage Jenkins > System > GitLab > Enable authentication for '/project' end-point >>> not checked
- Configure your project to accept a webhook and generate the token for security
- SmartTown > Configure > Build Triggers > Build when a change is pushed to GitLab (you will need the webhook URL later)
- Add Push Events
- Add Opened Merge Request Events
- Build Triggers > Advanced > generate a secret token (you will need this later)
Next, we can set up Gitlab.
- Create the webhook for your project
- Your project > Settings > Webhooks
- Copy the webhook URL from Jenkins into the URL field
- Copy the secret token from Jenkins into the Secret Token field
- Check the triggers you want (Push Events)
- Click "Add webhook"
- Test the push event from the test button in GitLab.
Now we can test the pipeline from IntelliJ by making a simple change to the source files, then committing and pushing the project.
Summary
We can now trigger a build from a push to the GitLab repo; this will build, test, and report on the project.
3.7 Jenkins-Terraform Build
Currently we can test our applications on a local VM using Vagrant to automate the process. We can create VMs on Openstack both manually and using Terraform to automate the process. We can use Jenkins to automate the build, test, and reporting of the pipeline process on the Openstack environment.
We now want to use Jenkins to trigger the build of a new, independent, server. In real life this could be used for additional testing i.e., acceptance testing, performance testing, security testing etc, or even being set up as a new production server that we could swap to at the point of update. But in our scenario, we will be setting this up as a temporary server to be destroyed on the next build (we have limited resources) but this proves the principle.
One additional compromise that we are making is that the application code for the second instance is being re-cloned and built from the git repository. This means that if there has been an additional commit to the repository in between the initial testing and the second job starting, the code will be incorrect / un-tested. In a real situation we should store the build artifacts (.jar files) from the initial build in an artifact repository.
We can then use the artifact repository as a source for all further server instances.
3.7.1 Jenkins jobs
For Jenkins to execute a Terraform script, we can configure the build to trigger a build script. Getting Jenkins to trigger a script is simple; we have done this before. But we must think about the order of the operations we are configuring.
We may want to fail the build on a code quality metric (configured in a post-build action); this would mean that if we just ran the Terraform script in the build process, it would run before the build was due to fail.
Alternatively, we may want to introduce a manual trigger at certain points in the pipeline e.g., before building a deployment server.
A better solution is to trigger a new Jenkins job on completion of the initial build job.
Jenkins > New project > freestyle project
Configure > Build Triggers > check "Build after other projects are built" > select project
In this job we need to configure a shell script to run (as a build step) which will run our terraform script.
Add a Terraform directory to your project.
3.7.2 Running Terraform
3.7.2.1 Terraform scripts
Our first problem is that we have no Terraform scripts. A simple solution is to put the Terraform scripts in the git repository and have them cloned from the repository along with the application code. I suggest that these scripts are put in a separate top-level directory (I have pushed similar files to the MScTakeaway2021 git repo). We will need the plan.tf, the variables.tf, and the script.sh files. The contents of the OpenStack xxxxx-openrc.sh file will be included in the Jenkins build step.
We now need to find these terraform files. In the virtual machine, on which Jenkins is running, each Jenkins job is loaded from an initial point in the directory structure. The level above this has a directory for all Jenkins jobs, so from the workspace in our second job, cd ../<FirstJenkinsJob> will put you in the directory of your initial job. And in here you will find all the contents of the cloned repo (including the newly added Terraform directory).
We can copy the Terraform directory from the first job into the second job.
cp -r ../SmartTown/Terraform ./
Unfortunately, it is not quite so simple. It is likely that once we have run this job once, we have a running instance on the OpenStack cluster that we need to destroy. So we can run if [ -d Terraform ]; to find out if the Terraform directory already exists (in which case there is probably an instance running). If there is, we should cd into the Terraform directory, set the environment variables and run terraform destroy -auto-approve.
We can then remove the old version of the Terraform directory and re-copy the new one from the first job, then init, plan and apply the Terraform scripts. The full process looks like this (where SmartTown is the name of the initial Jenkins job):
#!/usr/bin/bash
<<SET Environment Variables >>
if [ -d Terraform ]; then
cd Terraform
/usr/local/bin/terraform destroy -auto-approve
rm *.*
cd ../
fi
cp -r ../SmartTown/Terraform ./
cd Terraform
/usr/local/bin/terraform init
/usr/local/bin/terraform plan
/usr/local/bin/terraform apply -auto-approve
Note: If you look in the terraform script given for setting up the Jenkins instance, you will see that terraform was also installed.
3.7.2.2 Password Storage
Our second problem is that when setting up the environment variables for terraform we are asked for our University password!!! This is a very important password, and it should NEVER be stored insecurely (neither should any other password for other Openstack accounts).
Jenkins has a facility to securely store passwords so that they do not need to be put into scripts as plain text. To set up the secret in Jenkins using credentials go to:
Dashboard > Manage Jenkins > Credentials> global > add credentials
Then:
- Kind: Secret text
- Secret: [YOUR PASSWORD] (will show as dots)
- ID: give it an ID (avoid repeating an existing one)
- Description: Password for Smart Town Project
Now we can configure our jobs to use this secret:
In the build environment, tick "Use secret text(s) or file(s)", choose a variable name you want to use in the scripts, and associate the variable name with the ID you gave the secret.
You can now use the secret in the shell script in your build step --- $smartTownPSW.
Above, we mentioned <<SET Environment Variables >>; these come from the file (c23091223-openrc.sh). We need to make a slight alteration to the file: where it prompts for the password, we can now directly enter the secret variable. So we can replace <<SET Environment Variables >> with your version of the RC file:
c23091223-openrc.sh
#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
export OS_AUTH_URL=https://cscloud.cf.ac.uk:5000
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=0c43521f933b4fca8d77fe5002be3f42
export OS_PROJECT_NAME="c23091223"
export OS_USER_DOMAIN_NAME="cardiff.ac.uk"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="3693afdd0603423a9e8984fd32df7a0c"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="c23091223"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$smartTownPSW
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
example:
Hence, the script you should put in the build step should be:
#!/usr/bin/bash
pwd
whoami
ls
export OS_AUTH_URL=https://cscloud.cf.ac.uk:5000
export OS_PROJECT_ID=0c43521f933b4fca8d77fe5002be3f42
export OS_PROJECT_NAME="c23091223"
export OS_USER_DOMAIN_NAME="cardiff.ac.uk"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
export OS_PROJECT_DOMAIN_ID="3693afdd0603423a9e8984fd32df7a0c"
if [ -z "$OS_PROJECT_DOMAIN_ID" ]; then unset OS_PROJECT_DOMAIN_ID; fi
unset OS_TENANT_ID
unset OS_TENANT_NAME
export OS_USERNAME="c23091223"
export OS_PASSWORD=$smartTownPSW
export OS_REGION_NAME="RegionOne"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
if [ -d Terraform ]; then
cd Terraform
/usr/local/bin/terraform destroy -auto-approve
rm *.*
cd ../
fi
cp -r ../SmartTown/Terraform ./
cd Terraform
/usr/local/bin/terraform init
/usr/local/bin/terraform plan
/usr/local/bin/terraform apply -auto-approve
Example from Ian:
error:
Questions:
3. Why do we need the build script in the server1 config build script?
4. Explain the procedure: we build a job, and trigger another job to build? (The pipeline should leave the application running on a separate server and accessible via a browser via the University's network.)
5. Do we still need to have the install Jenkins command in the server.sh file?
6. JUnit & JaCoCo test error?
7. How to write test files in the project?
8. In-class demonstration session (week 11)
9. Delete the space in -auto-approve:
- mysql - ERROR 1698 (28000): Access denied for user 'root'@'localhost'
When we try to run the Spring bootrun, we see an error like this:
Solution:
mysql - ERROR 1698: Access denied for user 'root'@'localhost' - Stack Overflow
Fixed by adding this in test.sh:
echo "change root password..."
sudo mysql -u root -pcomsc -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'comsc';"
and deleted this one maybe??
error 11. Web server failed to start. Port 8080 was already in use.
Solution [error: not working]:
How to Change the Default Jenkins Port? (Linux, MacOS & Windows) - Scaler Topics
Using this command:
echo "-----Change Jenkins port to 8083-----"
# If you want jenkins on port 8083 so you can run your app on 8080 then change the default jenkins port.
sudo systemctl stop jenkins
# does not work
#sudo sed -i 's/JENKINS_PORT="8080"/JENKINS_PORT="8083"/g' /etc/default/jenkins
sudo sed -i 's/JENKINS_PORT=8080/JENKINS_PORT=8083/g' /usr/lib/systemd/system/jenkins.service
sudo systemctl daemon-reload
sudo systemctl restart jenkins
sudo systemctl status jenkins
sudo systemctl enable jenkins
now it works:
You also need to change the port for the webhook to 8083 on GitLab, and update the Jenkins Location (Jenkins URL) setting:
error:
Error: the build never ends?
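One likely cause (my assumption, not stated in these notes) is that ./gradlew bootrun never returns, so the build step never finishes. A common workaround is to start the built jar in the background and stop Jenkins' process-tree killer from reaping it when the step ends:
# hypothetical workaround: run the application detached so the build step can complete
export BUILD_ID=dontKillMe          # freestyle jobs: prevents Jenkins killing the spawned process
nohup java -jar build/libs/*.jar > app.log 2>&1 &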
To resolve the port conflict, you can either find and stop the process using port 8080 or configure your application to use a different port.
To find the process, you can use the following command in your terminal:
For Linux:
sudo lsof -i :8080
Then stop it using:
sudo kill -9 <PID>
To change the application’s port, you’ll need to refer to its documentation as the steps vary by application.
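For a Spring Boot application like this one, the port can usually be overridden at launch instead; a sketch, assuming the Gradle bootrun task used above:
# run the app on a different port (or set server.port=8081 in src/main/resources/application.properties)
./gradlew bootrun --args='--server.port=8081'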
Issue: try to use Docker to create the instance.
Issue: Backup Jenkins
Jenkins backup guide: how to back up Jenkins data and configuration (devopscube.com)
4. Docker
jenkins/jenkins - Docker Image | Docker Hub
use jenkins in docker · jenkinsci/docker (github.com)
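A minimal sketch of running Jenkins from the official jenkins/jenkins image (ports and volume name are the conventional ones from the image documentation; adjust as needed):
# run Jenkins in a container, persisting its home in a named volume
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts-jdk17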
Error: Openstack
5. NGINX(extra tools)
5.1 NGINX: download
Download the stable version, e.g. nginx/Windows-1.26.0.
Unzip it in your tools directory.
5.2 NGINX: getting started
Open a Git Bash window in the nginx directory and run:
start nginx
By default this will serve on port 80
To Stop the server:
./nginx -s quit
5.3 NGINX: configuration
- The configuration file is …/nginx-1.26.0/conf/nginx.conf
- Duplicate the .conf file so you have a backup.
- Static files are served from …/nginx-1.26.0/html
- Exercise:
- Change the listening port to 8081 (see the sketch after this list).
- Put a big HTML file (home.html) in the html directory.
- Check this works.
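A sketch for the port-change exercise, assuming the default conf layout and that you have backed up nginx.conf as noted above:
# change the listen port from 80 to 8081 in conf/nginx.conf, then reload nginx
sed -i 's/listen[[:space:]]*80;/listen 8081;/' conf/nginx.conf
./nginx -s reload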