OpenShift Kubernetes Docker Cheatsheet

 


oc describe

oc describe node <node1>      # show details of a specific resource
oc describe pod POD_NAME      # pod details
oc describe svc SERVICE_NAME  # service details
oc describe route ROUTE_NAME  # route details


oc export

oc export RESOURCE_TYPE RESOURCE_NAME -o OUTPUT_FORMAT
                              # export a resource definition (for backup etc.) in JSON or YAML format
oc export pod mysql-1-p1d35 -o yaml
oc export svc/myapp -o json

Managing pods

Get pods, Rollout, delete etc.

oc get pods                   # list running pods inside a project
oc get pods -o wide           # detailed listing of pods
oc get pod -o name            # for pod names
oc get pods -n PROJECT_NAME   # list running pods inside a project/name-space
oc get pods --show-labels     # show pod labels
oc get pods --selector env=dev
                              # list pods with env=dev
oc get po POD_NAME -o=jsonpath="{..image}"
                              # get the pod image details
oc get po POD_NAME -o=jsonpath="{..uid}"
                              # get the pod uid details
oc adm manage-node NODE_NAME --list-pods
                              # list all pods running on specific node
oc rollout history dc/<name>  # available revisions
oc rollout latest hello       # deploy a new version of app.
oc rollout undo dc/<name>     # roll back to the last successfully
                                deployed revision of your configuration
oc rollout cancel dc/hello    # cancel current deployment

oc delete pod POD_NAME -n PROJECT_NAME --grace-period=0 --force
                              # delete a pod forcefully;
                                if the pod still stays in Terminating state,
                                try setting deletionTimestamp: null
                                as well as finalizers: null
                                (it may contain an item foregroundDeletion;
                                remove that)
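
A hedged sketch of that cleanup (pod and project names are placeholders):

oc patch pod POD_NAME -n PROJECT_NAME --type=merge \
  -p '{"metadata":{"finalizers":null}}'
                              # clear the finalizers so the stuck pod goes away
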
kubectl get pods \
  --server kubesandbox:6443 \
  --client-key admin.key \
  --client-certificate admin.crt \
  --certificate-authority ca.cert
                             

kubectl get pods --kubeconfig config
                              


Managing Nodes

oc get nodes                  # list nodes in a cluster
oc get node/NODE_NAME -o yaml
                              # to see a node’s current capacity and allocatable resources
oc get nodes --show-labels | grep -i "project101=testlab"
                              # show nodes info with labels and list only nodes with the label "project101=testlab"
oc get nodes -L region -L env
                              # show nodes with "region" and "env" labels

oadm manage-node compute-102 --schedulable=false
kubectl cordon node-2
                              # make a node unschedulable

oc adm drain compute-102
kubectl drain node-1          # drain node by evicting pods
                              --force: force deletion of bare pods
                              --delete-local-data: delete even if there are
                                pods using emptyDir (local data that will be deleted
                                when the node is drained)
                              --ignore-daemonsets: ignore daemonset-managed pods

oadm manage-node compute-102 --schedulable=true
kubectl uncordon node-1       # enable scheduling on node



PV & PVC - PersistentVolume & PersistentVolumeClaim

oc get pv                       # list all pv in the cluster
oc create -f mysqldb-pv.yml     # create a pv with template
oc get pvc -n PROJECT_NAME      # list all pvc in the project
oc set volume dc/mysqldb \
  --add --overwrite --name=mysqldb-volume-1 -t pvc \
  --claim-name=mysqldb-pvclaim \
  --claim-size=3Gi \
  --claim-mode='ReadWriteMany'
                                # Create volume claim for mysqldb-volume-1
kubectl get pv
kubectl get pvc


oc exec - execute command inside a container

oc exec <pod> -i -t -- <command>
                              # run command inside a container without login
                                eg: oc exec  my-php-app-1mmh1 -i -t -- curl -v http://dbserver:8076



Events and Troubleshooting

oc get events                 # list events inside cluster
oc logs POD                   # get logs from pod
oc logs <pod> --timestamps
oc logs -f bc/myappx          # check logs of bc
oc rsh <pod>                  # login to a pod

kubectl logs -f POD_NAME CONTAINER_NAME
                              # mention container name if you have
                                more than one container inside pod



Help and Understand

oc explain <resource>         # documentation of a resource and its fields
                                eg: oc explain pod
                                    oc explain pod.spec.volumes.configMap



Applications

oc new-app will create a:

  • dc (deployment configuration)
  • is (image stream)
  • svc (service)
oc new-app -h                     # list all options and examples
oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=mydb -l db=mysql
                                  # create a new application
oc new-app --docker-image=myregistry.example.com/dockers/myapp --name=myapp
                                  # create a new application from private registry
oc new-app https://github.com/techbeatly/python-hello-world --name=python-hello
                                  # create a new application from source code (s2i)
                                  # -i or --image-stream=[] : Name of an image stream to use in the app


How to find the registry?

oc get route -n default           # you can see the registry url

Get Help


oc help                       # list oc command help options

Build from image

oc new-build openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git --name='newbuildtest'

Enable/Disable scheduling

oadm manage-node mycbjnode --schedulable=false
                              # Disable scheduling on node

Resource quotas

Hard constraints on how much memory/CPU your project can consume

oc create -f <YAML_FILE_with_kind: ResourceQuota> -n PROJECT_NAME
                              # create a quota with a YAML template where kind should be ResourceQuota
                              # Sample : https://github.com/ginigangadharan/openshift-cli-cheatsheet/blob/master/quota-template-32Gi_no_limit.yaml
oc describe quota -n PROJECT_NAME
                              # describe the quota details
oc get quota -n PROJECT_NAME
                              # get quota details of the project
oc delete quota QUOTA_NAME -n PROJECT_NAME
                              # delete a quota from the project
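
A minimal ResourceQuota sketch (hypothetical values; the linked sample is more complete):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    pods: "10"
    requests.cpu: "2"
    requests.memory: 4Gi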

Labels & Annotations

  1. Label examples: release, environment, relationship, dmzbased, tier, node type, user type
    • Identifying metadata consisting of key/value pairs attached to resources
  2. Annotation examples: example.com/skipValidation=true, example.com/MD5checksum=1234ABC, example.com/BUILDDATE=20171217
    • Primarily concerned with attaching non-identifying information, which is used by other clients such as tools or libraries
oc label node1 region=us-west zone=power1a --overwrite
oc label node node2 region=apac-sg zone=power2b --overwrite
oc patch node NODE_NAME -p '{"metadata": {"labels": {"project101":"testlab"}}}'
                              # add label to node
oc patch dc myapp --patch '{"spec":{"template":{"spec":{"nodeSelector":{"env":"qa"}}}}}'
                              # modify dc to run pods only on nodes with label 'env':'qa'
oc label secret ssl-secret env=test
                              # add label
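
Annotations are managed the same way; a small sketch using one of the examples above (hypothetical pod name):

oc annotate pod mypod example.com/skipValidation=true
                              # add an annotation
oc annotate pod mypod example.com/skipValidation-
                              # remove it again (note the trailing hyphen)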



Limit ranges

  • mechanism for specifying default project CPU and memory limits and requests
oc get limits -n development
oc describe limits core-resource-limits -n development
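
A minimal LimitRange sketch for defaults like the ones above (hypothetical values):

apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 256Mi
    default:
      cpu: 500m
      memory: 512Mi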

ClusterQuota or ClusterResourceQuota

Ref: https://docs.openshift.com/container-platform/3.3/admin_guide/multiproject_quota.html

oc create clusterquota for-user-developer --project-annotation-selector openshift.io/requester=developer --hard pods=8
oc get clusterresourcequota |grep USER
                              # find the clusterresourcequota for USER
oc describe clusterresourcequota USER

Config View

oc config view                  # command to view your current, full CLI configuration
                                # also can see the cluster url, project url etc.
oc config get-contexts          # lists the contexts in the kubeconfig file.


kubectl config view             # to view the config in ~/.kube/config

kubectl config view --kubeconfig=path-to-config-file
                                # to view the config

kubectl config use-context dev@singapore-cluster
                                # to change the current-context

kubectl config -h               # to list available options

Managing Environment Variables

https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html

oc env rc/RC_NAME --list -n PROJECT
                                # list environment variable for the rc
oc env rc my-newapp MAX_HEAP_SIZE=128M
                                # set environment variable for the rc

Security Context Constraints

oc get scc                      # list all seven SCCs
                                      - anyuid
                                      - hostaccess
                                      - hostmount-anyuid
                                      - hostnetwork
                                      - nonroot
                                      - privileged
                                      - restricted
oc describe scc SCC_NAME        # shows which users and service accounts can use the SCC

Services & Routes

oc expose service SERVICE_NAME --hostname=route-name-project-name.default-domain
or
oc expose svc SERVICE_NAME
                                # create/expose a route for a service
eg:
oc expose service myapache --name=myapache --hostname=myapache.app.cloudapps.example.com
                                # if you don't mention the hostname, then
                                # it will create a hostname as route-name-project-name.default-domain
                                # if you don't mention the route name, then
                                # it will take the service name as route name

oc port-forward POD_NAME 3306:3306
                                # temporary port-forwarding from a local port to a pod port

Scaling & AutoScaling of the pod - HorizontalPodAutoscaler

OpenShift

oc scale dc/APP_NAME --replicas=2
                                # scale application (increase or decrease replicas)
oc autoscale dc my-app --min 1 --max 4 --cpu-percent=75
                                # enable autoscaling for my-app
oc get hpa my-app               # list Horizontal Pod Autoscaler
oc describe hpa/my-app

Kubernetes

kubectl create -f replicaset-definition.yml
                                # create replicaset
kubectl create -f replicaset-definition.yml --namespace=YOUR_NAMESPACE
                                # create in a specific namespace
kubectl replace -f replicaset-definition.yml
                                # change the replicas option in the replicaset definition
                                # and then run it
kubectl scale --replicas=6 -f replicaset-definition.yml
kubectl scale --replicas=6 replicaset myapp-replicaset
                                # this one will not update the replica details
                                # in the replicaset definition file
kubectl delete replicaset myapp-replicaset
                                # delete replicaset

Configuration Maps (ConfigMap)

  • Similar to secrets, but with non-sensitive text-based configuration

Creation of objects

oc create configmap test-config --from-literal=key1=config1 --from-literal=key2=config2 --from-file=filters.properties

oc volume dc/nodejs-ex --add -t configmap -m /etc/config --name=app-config --configmap-name=test-config

Reading config maps

oc rsh nodejs-ex-26-44kdm ls /etc/config

Dynamically change the config map

oc delete configmap test-config

<CREATE AGAIN WITH NEW VALUES>

<NO NEED FOR MOUNTING AS VOLUME AGAIN>

Mounting config map as ENV

oc set env dc/nodejs-ex --from=configmap/test-config
oc describe pod nodejs-ex-27-mqurr

The Replication Controller

to be done

oc describe RESOURCE RESOURCE_NAME
oc export
oc create
oc edit
oc exec POD_NAME <options> <command>
oc rsh POD_NAME <options>
oc delete RESOURCE_TYPE name
oc version
docker version

oc cluster up \
  --host-data-dir=... \
  --host-config-dir=...

oc cluster down

oc cluster up \
  --host-data-dir=... \
  --host-config-dir=... \
  --use-existing-config

oc project myproject

PersistentVolume

  • Supports stateful applications
  • Volumes backed by shared storage which are mounted into running pods
  • iSCSI, AWS EBS, NFS etc.

PersistentVolumeClaim

  • Manifests that pods use to retrieve and mount the volume into the pod at initialization time
  • Access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
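
A minimal PersistentVolumeClaim sketch (hypothetical name and size):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pvclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi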

Deployments

kubectl run blue --image=nginx --replicas=6
                                # Create a new deployment named blue
                                  with nginx image and 6 replicas
kubectl set image deployment/myapp-dc CONTAINER_NAME=IMAGE:TAG
                                # specify a new image for a container in the deployment
kubectl apply -f DEFINITION.YML
                                # apply new config to existing deployment
kubectl rollout undo deployment/myapp-dc
                                # rollback a deployment
kubectl rollout status deployment/myapp-dc
                                # status of deployment
kubectl rollout history deployment/myapp-dc
                                # history of deployment


Blue-Green deployments

oc new-app https://github.com/devops-with-openshift/bluegreen#green --name=green
oc patch route/bluegreen -p '{"spec":{"to":{"name":"green"}}}'
oc patch route/bluegreen -p '{"spec":{"to":{"name":"blue"}}}'

A/B Deployments

oc annotate route/ab haproxy.router.openshift.io/balance=roundrobin
oc set route-backends ab cats=100 city=0
oc set route-backends ab --adjust city=+10%

Canary Deployments

Rollbacks

oc rollback cotd --to-version=1 --dry-run
                                # Dry run only
oc rollback cotd --to-version=1
oc describe dc cotd

Pipelines

oc new-app jenkins-pipeline-example
oc start-build sample-pipeline
  • Customizing Jenkins:
vim openshift.local.config/master/master-config.yaml

jenkinsPipelineConfig:
  autoProvisionEnabled: true
  parameters:
    JENKINS_IMAGE_STREAM_TAG: jenkins-2-rhel7:latest
    ENABLE_OAUTH: true
  serviceName: jenkins
  templateName: jenkins-ephemeral
  templateNamespace: openshift
  • Good resource for Jenkinsfiles: https://github.com/fabric8io/fabric8-jenkinsfile-library

Configuration Management

Secrets

Creation

  • Maximum size 1MB
oc secret new test-secret cert.pem
oc secret new ssl-secret keys=key.pem certs=cert.pem
oc get secrets --show-labels=true
oc delete secret ssl-secret

Using secrets in Pods

  • Mounting the secret as a volume
oc volume dc/nodejs-ex --add -t secret --secret-name=ssl-secret -m /etc/keys --name=ssl-keys
oc rsh nodejs-ex-22-8noey ls /etc/keys
  • Injecting the secret as an env var
oc secret new env-secrets username=user-file password=password-file
oc set env dc/nodejs-ex --from=secret/env-secrets
oc env dc/nodejs-ex --list

ENV

Adding

oc set env dc/nodejs-ex ENV=TEST DB_ENV=TEST1 AUTO_COMMIT=true
oc set env dc/nodejs-ex --list

Removing

oc set env dc/nodejs-ex DB_ENV-

Change triggers

  1. ImageChange - when the underlying image stream changes

  2. ConfigChange - when the config of the pod template changes
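
Both triggers can be listed or changed with oc set triggers; a small sketch (hypothetical dc name):

oc set triggers dc/myapp
                              # list the triggers on the dc
oc set triggers dc/myapp --from-config --remove
                              # drop the ConfigChange trigger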

OpenShift Builds

Build strategies

  • Source-to-Image (S2I): uses the open source S2I tool to enable developers to reproducibly build images by layering the application’s source onto a container image

  • Docker: using the Dockerfile

  • Pipeline: uses Jenkins, developers provide Jenkinsfile containing the requisite build commands

  • Custom: allows the developer to provide a customized builder image to build the runtime image

Build sources

  • Git
  • Dockerfile
  • Image
  • Binary

Build Configurations

  • contains the details of the chosen build strategy as well as the source
oc new-app https://github.com/openshift/nodejs-ex
oc get bc/nodejs-ex -o yaml
  • unless specified otherwise, the oc new-app command will scan the supplied Git repo. If it finds a Dockerfile, the Docker build strategy will be used; otherwise source strategy will be used and an S2I builder will be configured

S2I

  • Components:
  1. Builder image - installation and runtime dependencies for the app
  2. S2I script - assemble/run/usage/save-artifacts/test/run
  • Process:
  1. Start an instance of the builder image
  2. Retrieve the source artifacts from the specified repository
  3. Place the source artifacts as well as the S2I scripts into the builder image (bundle into .tar and stream into builder image)
  4. Execute assemble script
  5. Commit the image and push to OCP registry
  • Customize the build process:
  1. Custom S2I scripts - their own assemble/run etc. by placing scripts in .s2i/bin at the base of the source code, can also contain an environment file (see the sketch after this list)
  2. Custom S2I builder - write your own custom builder
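
A sketch of a custom .s2i/bin/assemble script; the /usr/libexec/s2i path is an assumption that the builder image ships its default scripts there:

#!/bin/bash
echo "---> custom pre-build step"
# ... extra build steps here ...
# then run the builder image's original assemble logic
exec /usr/libexec/s2i/assemble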

Troubleshooting

  • Adding the --follow flag to the start-build command
  • oc get builds
  • oc logs build/test-app-3
  • oc set env bc/test-app BUILD_LOGLEVEL=5 S2I_DEBUG=true
oc adm diagnostics
  • --dry-run: to test your command; this will not create the resource but tells you whether the resource can be created and if your command is right.
  • -o yaml: to print the output in YAML (or JSON) format.

  • Operational layers:
  1. Operating system infrastructure operations - compute, network, storage, OS
  2. Cluster operations - cluster management (OpenShift/Kubernetes)
  3. Application operations - deployments, telemetry, logging

Integrated logging

  • the EFK (Elasticsearch/Fluentd/Kibana) stack aggregates logs from nodes and application pods
oc cluster up --logging=true

Simple metrics

  • Kubelet/Heapster/Cassandra collect the metrics, and you can use Grafana to build dashboards
oc cluster up --metrics=true

kubectl top node                # memory and CPU usage on node
kubectl top pod                 # memory and CPU usage by pods

# Enable metrics in minikube
minikube addons enable metrics-server

Resource scheduling

  • default behavior:
  1. best effort isolation = no promises about what resources can be allocated for your project

  2. might get defaulted values

  3. out of memory killed randomly

  4. might get CPU starved (wait to schedule your workload)

Multi project quota

  • you may use project labels or annotations when creating quotas that span multiple projects
oc login -u system:admin
oc login -u developer -p developer
oc describe AppliedClusterResourceQuota

Essential Docker Registry Commands

docker login -u USER_NAME -p TOKEN REGISTRY_URL
                                # before we push images, we need to
                                  login to docker registry.

docker login -u developer -p ${TOKEN} \
  docker-registry-default.apps.lab.example.com
                                # TOKEN can be obtained with TOKEN=$(oc whoami -t)

docker images --no-trunc --format ' ' --filter "dangling=true" --filter "before=IMAGE_ID"
                                # list image with format and
                                # using multiple filters

Private Docker Registry and Access

kubectl create secret docker-registry private-docker-cred \
    --docker-server=myregistry \
    --docker-username=registry-user \
    --docker-password=registry-password \
    --docker-email=registry-user@example.com
                                # Create a secret for docker-registry

Then specify the image pull secret under imagePullSecrets in the pod/deployment definition (at the same level as containers):

    imagePullSecrets:
    - name: private-docker-cred
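
A minimal pod sketch showing where imagePullSecrets sits (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myregistry/myrepo/myimage:latest
  imagePullSecrets:
  - name: private-docker-cred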


Basic Networking

(For docker/kubernetes/openshift operations)

ip link                         # show interface of host
ip addr add 10.1.10.10/24 dev eth0
                                # assign IP to an interface
ip route add 10.1.20.0/24 via 10.1.10.1
                                # add a route to another network 10.1.20.0/24
                                  via 10.1.10.1 which is our router/gateway.
ip route add default via 10.1.10.1
                                # add a default route to any network (like the internet);
                                  you can also mention 0.0.0.0/0 instead of default
route                           # display kernel routing table

ip netns add newnamespace       # create a new network namespace
ip netns                        # list network namespaces
ip netns exec red ping IP_ADDRESS
ip netns exec newnamespace ip link
                                # display details inside namespace
ip link add veth-red type veth peer name veth-blue
                                # create a pipe or virtual eth (veth)
ip link set veth-red netns red
                                # attach the virtual interface to a namespace
ip -n red addr add 10.1.10.1 dev veth-red
                                # assign ip for virtual interface (veth)
                                  inside a namespace
ip -n red link set veth-red up
                                # make virtual interface up and running
ip link add v-net-0 type bridge
                                # add linux bridge

Technical Jargons

OSSM                OpenShift Service Mesh (OSSM)
                    Istio is the upstream project
                    - The upstream Istio community installation automatically
                      injects the sidecar to namespaces you have labeled.
                    - Red Hat OpenShift Service Mesh does not automatically
                      inject the sidecar to any namespaces, but requires you to
                      specify the sidecar.istio.io/inject annotation as
                      illustrated in the Automatic sidecar injection section
                      (see the annotation sketch after this list).

CRI-O               OCI-based implementation of the Kubernetes
                    Container Runtime Interface (CRI)
OCI                 Open Container Initiative
cgroup              control group
Jaeger              Distributed Tracing System
Kiali               observability console for Istio
                    Kiali answers the questions:
                    -  What microservices are part of my Istio service mesh?
                    -  How are they connected?
                    -  How are they performing?

runc                CLI tool for spawning and running
                    containers according to the OCI specification.
FaaS                Function as a Service
CaaS                Containers as a service
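
A sketch of the OSSM sidecar annotation mentioned above, placed in a Deployment's pod template (surrounding fields are hypothetical):

spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"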


OpenShift 4 (to be moved to above sub-sections later)

Cluster Details

oc get clusterversion         # retrieve the cluster version
oc get clusteroperators       # retrieve the list of all cluster operators

Check logs of systemd services

oc adm node-logs -u crio my-node-name
oc adm node-logs -u kubelet my-node-name

oc adm node-logs my-node-name
                              # display all journal logs of a node

Run commands on nodes

oc debug node/my-node-name
...output omitted...
sh-4.4# chroot /host
sh-4.4# systemctl is-active kubelet

sh-4.4# toolbox               # start toolbox container

Pod Logs

oc logs pod-name
oc logs pod-name container-name

oc debug deployment/deployment-name --as-root
                              # debug pod for the application

Troubleshooting containers

oc rsh pod-name
oc cp /source pod-name:/destination
oc port-forward pod-name local-port:remote-port

Debug levels

oc get pods --loglevel 6      # or 10

StorageClass & Persistent Storage

oc get storageclass

oc set volumes deployment/example-application \
  --add --name example-pv-storage \
  --type pvc --claim-class nfs-storage \
  --claim-mode rwo \
  --claim-size 15Gi \
  --mount-path /var/lib/example-app \
  --claim-name example-pv-claim

Networking

Cluster Network Operator (to see the pod network, service network and so on)

oc get network/cluster -o yaml


Docker Commands

Docker command and Dockerfile references are moved to:

Image Handling

docker create [IMAGE]           # Create a new container from a particular image.
docker search [term]            # Search the Docker Hub repository for a particular term.
docker login                    # Log into the Docker Hub repository.
docker pull [IMAGE]             # Pull an image from the Docker Hub repository.
docker push [username/image]    # Push an image to the Docker Hub repository.
docker tag [source] [target]    # Create a target tag or alias that refers to a source image.

docker build

docker build [OPTIONS] PATH
docker build --help

  -t, --tag - set the name and tag of the image
  -f, --file - set the name of the Dockerfile
  --build-arg - set build-time variables
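
A combined usage sketch (image name, file name, and build argument are hypothetical):

docker build -t myimage:1.0 -f Dockerfile.dev --build-arg VERSION=1.0 .
                                # build ./Dockerfile.dev into the image myimage:1.0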

Running Containers

docker start [CONTAINER]        # Start a particular container.
docker stop [CONTAINER]         # Stop a particular container.
docker exec -ti [CONTAINER] [command]
                                # Run a shell command inside a particular container.
docker run -ti --name [CONTAINER] [IMAGE] [command]
                                # Create and start a container at the same time, and then run a command inside it.
docker run -ti --rm --name [CONTAINER] [IMAGE] [command]
                                # Create and start a container at the same time,
                                # run a command inside it, and then remove the container
                                # after executing the command.
docker pause [CONTAINER]        # Pause all processes running within a particular container.

Docker Utilities

docker history [IMAGE]          # Display the history of a particular image.
docker ps                       # List all of the containers that are currently running.
docker version                  # Display the version of Docker that is currently installed on the system.
docker images                   # List all of the images that are currently stored on the system.
docker inspect [object]         # Display low-level information about a particular Docker object.

Cleaning Docker Environment

docker kill [CONTAINER]         # Kill a particular container.
docker kill $(docker ps -q)     # Kill all containers that are currently running.
docker rm [CONTAINER]           # Delete a particular container that is not currently running.
docker rm $(docker ps -a -q)    # Delete all containers that are not currently running.
docker network ls               # list available networks

Dockerfile

FROM - to set the base image
RUN - to execute a command
COPY & ADD - to copy files from host to the container
CMD - to set the default command to execute when the container starts
EXPOSE - to expose an application port
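
A minimal Dockerfile sketch using those instructions (content is hypothetical):

FROM registry.access.redhat.com/rhel7
RUN yum install -y httpd && yum clean all -y
COPY index.html /var/www/html/
EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]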

Advanced Dockerfile Instructions

The RUN Instruction

  • RUN - executes the commands in a new layer on top of the current image and then commits the result (using /bin/sh to execute the command)
    RUN yum update -y --disablerepo=* --enablerepo="rhel-7-server-rpms"
    RUN yum install -y httpd
    RUN yum clean all -y

    • each RUN creates an additional layer and affects image size, so the best practice is to combine multiple RUN commands together if possible (use &&)
      RUN yum update -y --disablerepo=* --enablerepo="rhel-7-server-rpms" && \
      yum install -y httpd && \
      yum clean all -y
      

The LABEL Instruction

  • LABEL - defines image metadata as key-value pair.
    OpenShift supported labels
    • io.openshift.tags
    • io.k8s.description
    • io.openshift.expose-services
    • LABEL version="2.0" \
      description="This is an example container image" \
      creationDate="01-09-2017"
      

The WORKDIR Instruction

  • WORKDIR - sets the working directory for any following RUN, CMD, ENTRYPOINT, COPY, and ADD instructions. Recommended to use an absolute path.

The ENV Instruction

  • ENV - defines the environment variables inside the container.
    • use env command to list environment variables inside a container.
    ENV MYSQL_DATABASE_USER="my_database_user" \
      MYSQL_DATABASE="my_database"
    

The USER Instruction

  • USER - run image as a specific user (recommended to run as non-root user). OpenShift will ignore the USER instruction and will use a random userid other than root (0).

The VOLUME Instruction

  • VOLUME - create a mount point inside the container and keep the data persistent.
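
    For example (hypothetical path):
    VOLUME /var/lib/mysql/data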

Building Images with the ONBUILD Instruction

  • use ONBUILD to declare instructions that are executed only when building a child image.
FROM registry.access.redhat.com/rhscl/nodejs-6-rhel7
EXPOSE 3000
# Mandate that all Node.js apps use /opt/app-root as the main folder (APP_ROOT).
RUN mkdir -p /opt/app-root/
WORKDIR /opt/app-root

# Copy the package.json to APP_ROOT
ONBUILD COPY package.json /opt/app-root

# Install the dependencies
ONBUILD RUN npm install

# Copy the app source code to APP_ROOT
ONBUILD COPY src /opt/app-root

# Start node server on port 3000
CMD [ "npm", "start" ]


# My Openshift Cheatsheet

### Project Quotas, Limits and Templates
 * Cluster Quota
```
oc create clusterquota env-qa \
    --project-label-selector environment=qa \
    --hard pods=10,services=5
    
oc create clusterquota user-qa \
    --project-annotation-selector openshift.io/requester=qa \
    --hard pods=12,secrets=20
```

 * Templates
```
# export the default template yaml
oc adm create-bootstrap-project-template -o yaml > /tmp/project-template.yaml

# after making changes to the template
oc create -f project-template.yaml -n openshift-config

# update the projects.config.openshift.io/cluster to use the new template
oc edit projects.config.openshift.io/cluster -n openshift-config
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request
```

### Openshift Secrets

"There are different secret types which can be used to enforce usernames and keys in the secret object: service-account-token, basic-auth, ssh-auth, tls and opaque. The default type is opaque. The opaque type does not perform any validation, and allows unstructured key:value pairs that can contain arbitrary values.

Data is stored inside a secret resource using base64 encoding. When data from a secret is injected into a container, the data is decoded and either mounted as a file, or injected as environment variables inside the container."
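
For example, to read back and decode a value stored in a secret (assuming the `sshsecret` example below):

```
oc get secret sshsecret -o jsonpath='{.data.ssh-privatekey}' | base64 -d
```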

* To create ssh secret:
```
oc create secret generic sshsecret \
    --from-file=ssh-privatekey=$HOME/.ssh/id_rsa
```

 * To create SSH-based authentication secret with .gitconfig file:
```
oc create secret generic sshsecret               \
    --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
    --from-file=.gitconfig=</path/to/file>
```

* To create secret that combines .gitconfig file and CA certificate:
```
oc create secret generic sshsecret           \
    --from-file=ca.crt=<path/to/certificate> \
    --from-file=.gitconfig=</path/to/file>
```

* To create basic authentication secret with CA certificate file:
```
oc create secret generic <secret_name>  \
    --from-literal=username=<user_name> \
    --from-literal=password=<password>  \
    --from-file=ca.crt=<path/to/certificate>
```

* To create basic authentication secret with .gitconfig file and CA certificate file:
```
oc create secret generic <secret_name>     \
    --from-literal=username=<user_name>    \
    --from-literal=password=<password>     \
    --from-file=.gitconfig=</path/to/file> \
    --from-file=ca.crt=<path/to/certificate>
```

### Examine the **cluster** quota defined for the environment:

```
$ oc describe AppliedClusterResourceQuota
```

### Install pkgs using yum in a Dockerfile

```
# Install Runtime Environment
RUN set -x && \
    yum clean all && \
    REPOLIST=rhel-7-server-rpms,rhel-7-server-optional-rpms,rhel-7-server-thirdparty-oracle-java-rpms \
    INSTALL_PKGS="tar java-1.8.0-oracle-devel" && \
    yum -y update-minimal --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs \
      --security --sec-severity=Important --sec-severity=Critical && \
    yum -y install --disablerepo "*" --enablerepo ${REPOLIST} --setopt=tsflags=nodocs ${INSTALL_PKGS} && \
    yum clean all
```

### Docker push to ocp internal registry

```
01. oc extract -n default secrets/registry-certificates --keys=registry.crt
02. REGISTRY=$(oc get routes -n default docker-registry -o jsonpath='{.spec.host}')
03. mkdir -p /etc/containers/certs.d/${REGISTRY}
04. mv registry.crt /etc/containers/certs.d/${REGISTRY}/

05. oc adm new-project openshift-pipeline
06. oc create -n openshift-pipeline serviceaccount pipeline
07. SA_SECRET=$(oc get secret -n openshift-pipeline | grep pipeline-token | cut -d ' ' -f 1 | head -n 1)
08. SA_PASSWORD=$(oc get secret -n openshift-pipeline ${SA_SECRET} -o jsonpath='{.data.token}' | base64 -d)
09. oc adm policy add-cluster-role-to-user system:image-builder system:serviceaccount:openshift-pipeline:pipeline

10. docker login ${REGISTRY} -u unused -p ${SA_PASSWORD}
11. docker pull docker.io/library/hello-world
12. docker tag docker.io/library/hello-world ${REGISTRY}/openshift-pipeline/helloworld
13. docker push ${REGISTRY}/openshift-pipeline/helloworld

14. oc new-project demo-project
15. oc policy add-role-to-user system:image-puller system:serviceaccount:demo-project:default -n openshift-pipeline
16. oc new-app --image-stream=openshift-pipeline/helloworld:latest
```

### Creates a service to point to an external service addr (DNS or IP)

```
oc create service externalname myservice \
    --external-name myhost.example.com
```

> A typical service creates endpoint resources dynamically, based on the selector attribute of the service. The oc status and oc get all commands do not display these resources. You can use the oc get endpoints command to display them.

> If you use the oc create service externalname --external-name command to create a service, the command also creates an endpoint resource that points to the host name or IP address given as argument.

> If you do not use the --external-name option, it does not create an endpoint resource. In this case, you need to use the oc create -f command and a resource definition file to explicitly create the endpoint resources.

> If you create an endpoint from a file, you can define multiple IP addresses for the same external service, and rely on the OpenShift service load-balancing features. In this scenario, OpenShift does not add or remove addresses to account for the availability of each instance. An external application needs to update the list of IP addresses in the endpoint resource.
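
A sketch of the manually created Service and Endpoints pair for that last case (names and addresses are hypothetical):

```
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db    # must match the service name
subsets:
- addresses:
  - ip: 192.0.2.10
  - ip: 192.0.2.11
  ports:
  - port: 3306
```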

### Patching a DeploymentConfig from the CLI

 * this example removes a config attribute using a JSON patch
```
oc patch dc/mysql --type=json \
    -p='[{"op":"remove", "path": "/spec/strategy/rollingParams"}]'
```

* this example changes an existing attribute value using JSON merge format
```
oc patch dc/mysql --patch \
    '{"spec":{"strategy":{"type":"Recreate"}}}'
```

### Creating a Custom template by exporting existing resources
> The oc export command can create a resource definition file by using the --as-template option. Without the --as-template option, the oc export command only generates a list of resources. With the --as-template option, the oc export command wraps the list inside a template resource definition. After you export a set of resources to a template file, you can add annotations and parameters as desired.

> The order in which you list the resources in the oc export command is important. You need to export dependent resources first, and then the resources that depend on them. For example, you need to export image streams before the build configurations and deployment configurations that reference those image streams.

```
oc export is,bc,dc,svc,route --as-template > mytemplate.yml
```

> Depending on your needs, add more resource types to the previous command. For example, add secret before bc and dc. It is safe to add pvc to the end of the list of resource types because a deployment waits for persistent volume claim to bind.

> The oc export command does not generate resource definitions that are ready to use in a template. These resource definitions contain runtime information that is not needed in a template, and some of it could prevent the template from working at all. Examples of runtime information are attributes such as status, creationTimeStamp, image, and tags, besides most annotations that start with the openshift.io/generated-by prefix.

> Some resource types, such as secrets, require special handling. It is not possible to initialize key values inside the data attribute using template parameters. The data attribute from a secret resource needs to be replaced by the stringData attribute and all key values need to be unencoded.
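
A sketch of that secret handling in a template (hypothetical key and value):

```
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:            # replaces the base64-encoded data attribute
  username: myuser     # plain text; encoded when the template is instantiated
```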

### Logging Aggregation troubleshooting
 * https://access.redhat.com/articles/3136551

### Process a template, create a new binary build to customize something and then change the DeploymentConfig to use the new Image...

```
oc process openshift//datagrid72-basic | oc create -f -

oc new-build --name=customdg -i openshift/jboss-datagrid72-openshift:1.0 --binary=true --to='customdg:1.0'
oc set triggers dc/datagrid-app --from-image=openshift/jboss-datagrid72-openshift:1.0 --remove
oc set triggers dc/datagrid-app --from-image=customdg:1.0 -c datagrid-app

```

#### List only parameters of a given template file definition

```
oc process -f mytemplate.yaml --parameters
``` 

### Copy file content from a specific image to local file system

```
docker run registry.access.redhat.com/jboss-datagrid-7/datagrid72-openshift:1.0 /bin/sh -c 'cat /opt/datagrid/standalone/configuration/clustered-openshift.xml' > clustered-openshift.xml
```

### set the default storage-class
```
oc patch storageclass glusterfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

### Change Default response timeout for a specific route:
```
oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=10s
```

### Add a nodeSelector on an RC or DC
```
oc patch dc|rc <dc_name> -p "spec:                                                                                         
  template:     
    spec:
      nodeSelector:
        region: infra"
```

### Binary Builds
```
oc new-build --binary=true --name=ola2 --image-stream=redhat-openjdk18-openshift --to='mycustom-jdk8:1.0'
oc start-build ola2 --from-file=./target/ola.jar --follow
oc new-app 
```

### Turn off/on DC triggers to do a batch of changes without spamming deployments
```
oc rollout pause dc <dc name>
oc rollout resume dc <dc name> 
```
### get a route URL using OC
```
http://$(oc get route nexus3 --template='{{ .spec.host }}')
```

### Using Nexus repo manager to store deployment artifacts
Maven uses settings.xml in $HOME/.m2 for configuration outside of pom.xml:

```xml
<?xml version="1.0"?>
<settings>
  <mirrors>
    <mirror>
      <id>Nexus</id>
      <name>Nexus Public Mirror</name>
      <url>http://nexus-opentlc-shared.cloudapps.na.openshift.opentlc.com/content/groups/public/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
  <servers>
  <server>
    <id>nexus</id>
    <username>admin</username>
    <password>admin123</password>
  </server>
</servers>
</settings>
```

Maven can automatically store artifacts using -DaltDeploymentRepository parameter for deploy task:

```
mvn deploy -DskipTests=true \
-DaltDeploymentRepository=nexus::default::http://nexus3.nexus.svc.cluster.local:8081/repository/releases
```

### to update a DeploymentConfig in order to change the Docker Image used by a specific container
```
oc project <project>
oc get is

# creates an ImageStream from a Remote Docker Registry image
oc import-image <image name> --from=docker.io/<imagerepo>/<imagename> --all --confirm

oc get istag

OC_EDITOR="vim" oc edit dc/<your_dc>

    spec:
      containers:
      - image: docker.io/openshiftdemos/gogs@sha256:<the new image digest from Image Stream>
        imagePullPolicy: Always

```

### BuildConfig with Source pull secrets
```
oc secrets new-basicauth gogs-basicauth --username=<your gogs login> --password=<gogs pwd>
oc set build-secret --source bc/tasks gogs-basicauth
```

### Adding a volume in a given DeploymentConfig

```
oc set volume dc/myAppDC --add --overwrite --name....
``` 

### Create a configmap file and mount as a volume on DC
```
oc create configmap myconfigfile --from-file=./configfile.txt
oc set volumes dc/printenv --add --overwrite=true --name=config-volume --mount-path=/data -t configmap --configmap-name=myconfigfile
```

### create a secret via CLI
```
oc create secret generic mysec --from-literal=app_user=superuser --from-literal=app_password=topsecret
oc env dc/printenv --from=secret/mysec
oc set volume dc/printenv --add --name=db-config-volume --mount-path=/dbconfig --secret-name=printenv-db-secret
```

### Configure Liveness/Readiness probes on DCs
```
oc set probe dc cotd1 --liveness -- echo ok
oc set probe dc/cotd1 --readiness --get-url=http://:8080/index.php --initial-delay-seconds=2 
```

### Create a new JOB
```
oc run pi --image=perl --replicas=1  --restart=OnFailure \
    --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
```

### CRON JOB
```
oc run pi --image=perl --schedule='*/1 * * * *' \
    --restart=OnFailure --labels parent="cronjobpi" \
    --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
```

### A/B Deployments - Split route trafic between services

```
oc expose service cotd1 --name='abcotd' -l name='cotd'
oc set route-backends abcotd --adjust cotd2=+20%
oc set route-backends abcotd cotd1=50 cotd2=50
```

### to pull an image directly from red hat offcial docker registry
```
docker pull registry.access.redhat.com/jboss-eap-6/eap64-openshift
```

### to validate an openshift/kubernetes resource definition (JSON/YAML file) in order to find malformed/syntax problems
```
oc create --dry-run --validate -f openshift/template/tomcat6-docker-buildconfig.yaml
```

* to prune old objects
 * https://docs.openshift.com/container-platform/3.3/admin_guide/pruning_resources.html

* to enable cluster GC
 * https://docs.openshift.com/container-platform/3.3/admin_guide/garbage_collection.html

### to get the current user's Bearer Auth Token

```
oc whoami -t
```

### to test Master API 

```
curl -k -H "Authorization: Bearer <api_token>" https://<master_host>:8443/api/v1/namespaces/<project_name>/pods/https:<pod_name>:8778/proxy/jolokia/

# get pod memory via jmx
curl -k -H "Authorization: Bearer <api_token>" https://<master_host>:8443/api/v1/namespaces/<project_name>/pods/https:<pod_name>:8778/proxy/jolokia//read/java.lang:type\=Memory/HeapMemoryUsage | jq .
```

### to login via CLI `oc`
```
oc login --username=tuelho --insecure-skip-tls-verify --server=https://master00-${guid}.oslab.opentlc.com:8443

### to login as Cluster Admin through master host
oc login -u system:admin -n openshift
```

### to view the cluster roles and their associated rule sets in the cluster policy
```
oc describe clusterPolicy default
```

### add a role to user
```
#local binding
oadm policy add-role-to-user <role> <username>

#cluster binding
oadm policy add-cluster-role-to-user <role> <username>
```

### allow containers run with root user inside openshift
```
oadm policy add-scc-to-user anyuid -z default
```

> for more details consult: https://docs.openshift.com/enterprise/3.1/admin_guide/manage_authorization_policy.html

### to test a POD service locally
```
ip=`oc describe pod hello-openshift|grep IP:|awk '{print $2}'`
curl http://${ip}:8080
```

### to access a POD container shell
```
oc exec -ti  `oc get pods |  awk '/registry/ { print $1; }'` /bin/bash

#new way to do the same:
oc rsh <container-name>
```

### to edit an object/resource
```
oc edit <object_type>/<object_name>

#eg

oc edit dc/myDeploymentConfig
```

### Attaching a new `PersistentVolumeClaim` to a `DeploymentConfig`

```
oc volume dc/docker-registry \
   --add --overwrite \
   -t persistentVolumeClaim \
   --claim-name=registry-claim \
   --name=registry-storage
```

### Docker builder app creation
 
 ```
 oc new-app --docker-image=openshift/hello-openshift:v1.0.6 -l "todelete=yes"
 ```
 
### To create an app using a template (`eap64-basic-s2i`): Ticketmonster demo

```
oc new-app javaee6-demo
oc new-app --template=eap64-basic-s2i -p=APPLICATION_NAME=ticketmonster,SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/ticket-monster,SOURCE_REPOSITORY_REF=2.7.0.Final,CONTEXT_DIR=demo
```

### STI app creation

```
oc new-app https://github.com/openshift/sinatra-example -l "todelete=yes"
oc new-app openshift/php~https://github.com/openshift/sti-php -l "todelete=yes"
```

###  To watch a build process log

```
oc get builds
oc logs -f builds/sti-php-1
```

### To create application using Git repository at current directory:

```
$ oc new-app
```

### To create application using remote Git repository and context subdirectory:

```
$ oc new-app https://github.com/openshift/sti-ruby.git \
    --context-dir=2.0/test/puma-test-app
```

### To create application using remote Git repository with specific branch reference:
```
$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4
```

> New App From Source Code
> 
>  Build Strategy Detection
> 
>  If new-app finds a Dockerfile in the repository, it uses the docker build strategy. Otherwise, new-app uses the source strategy
>   
>  To specify the strategy, set the `--strategy` flag to source or docker
>  Example: To force new-app to use docker strategy for local source repository:
  
  ```
  $ oc new-app /home/user/code/myapp --strategy=docker
  ```

### to create a definition generated by `oc new-app` command based on S2I support

```
$ oc new-app https://github.com/openshift/simple-openshift-sinatra-sti.git -o json | \
   tee ~/simple-sinatra.json
```

### To create application from MySQL image in Docker Hub:

```
$ oc new-app mysql
```

### To create application from local registry:

```
$ oc new-app myregistry:5000/example/myimage
```

> If the registry that the image comes from is not secured with SSL, cluster administrators must ensure that the Docker daemon on the OpenShift Enterprise nodes is run with the --insecure-registry flag pointing to that registry. You must also use the `--insecure-registry=true` flag to tell new-app that the image comes from an insecure registry.

### To create application from stored template:

```
$ oc create -f examples/sample-app/application-template-stibuild.json
$ oc new-app ruby-helloworld-sample
```

### To set environment variables when creating application for database image:

```
$ oc new-app openshift/postgresql-92-centos7 \
    -e POSTGRESQL_USER=user \
    -e POSTGRESQL_DATABASE=db \
    -e POSTGRESQL_PASSWORD=password
```

### To output new-app artifacts to file, edit them, then create them using oc create:

```
$ oc new-app https://github.com/openshift/ruby-hello-world -o json > myapp.json
$ vi myapp.json
$ oc create -f myapp.json
```

* To deploy two images in single pod:

```
$ oc new-app nginx+mysql
```

### To deploy together image built from source and external image:

```
$ oc new-app \
    ruby~https://github.com/openshift/ruby-hello-world \
    mysql \
    --group=ruby+mysql
```

### to export all the project's objects/resources as a single template:

```
$ oc export all --as-template=<template_name>
```

> You can also substitute a particular resource type or multiple resources instead of all. Run $ oc export -h for more examples

* to create a new project using `oadm` and defining an admin user
```
$ oadm new-project instant-app --display-name="instant app example project" \
    --description='A demonstration of an instant-app/template' \
    --node-selector='region=primary' --admin=andrew
```

### to create an app using `oc` CLI based on a `template`
```
$ oc new-app --template=mysql-ephemeral --param=MYSQL_USER=mysqluser,MYSQL_PASSWORD=redhat,MYSQL_DATABASE=mydb,DATABASE_SERVICE_NAME=database
```

### to see a list of `env` `vars` defined in a DeploymentConfig object
```
$ oc env dc database --list
# deploymentconfigs database, container mysql
MYSQL_USER=***
MYSQL_PASSWORD=***
MYSQL_DATABASE=***

```

### to manage environment variables in different OSE object types

The first command adds the STORAGE variable with value /data. The second updates it to /opt.

```
$ oc env dc/registry STORAGE=/data
$ oc env dc/registry --overwrite STORAGE=/opt
```

To unset environment variables in the pod templates:

```
$ oc env <object-selection> KEY_1- ... KEY_N- [<common-options>]
```

> The trailing hyphen (-, U+2D) is required.

This example removes environment variables ENV1 and ENV2 from deployment config d1:
```
$ oc env dc/d1 ENV1- ENV2-
```

This removes environment variable ENV from all replication controllers:
```
$ oc env rc --all ENV-
```

This removes environment variable ENV from container c1 for replication controller r1:
```
$ oc env rc r1 --containers='c1' ENV-
```

To list environment variables in pods or pod templates:
```
$ oc env <object-selection> --list [<common-options>]
```

This example lists all environment variables for pod p1:
```
$ oc env pod/p1 --list
```

### to apply some change (patch)

```
oc patch dc/<dc_name> \
   -p '{"spec":{"template":{"spec":{"nodeSelector":{"nodeLabel":"logging-es-node-1"}}}}}'
```

### to apply a volume storage

```
oc volume dc/<dc_name> \
          --add --overwrite --name=<volume_name> \
          --type=persistentVolumeClaim --claim-name=<claim_name>
```

### to make a node unschedulable in a cluster

```
oadm manage-node <node_name> --schedulable=false
```

### to create a registry with storage-volume mounted on host
```
oadm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig \
    --credentials=/etc/origin/master/openshift-registry.kubeconfig \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
    --mount-host=<path> --selector=meuselector
```


### to create a build from a Dockerfile

```
# create the build
cat ./path/to/your/Dockerfile | oc new-build --name=build-from-docker --binary --strategy=docker -l app=app-from-custom-docker-build  -D -

#if you need to give some input to your Docker Build process
oc start-build build-from-docker --from-dir=. --follow

# create an OSE app from the docker build image
oc new-app app-from-custom-docker-build -l app=app-from-custom-docker-build

oc expose service app-from-custom-docker-build
```

### to copy files to/from a POD

```
#Ref: https://docs.openshift.org/latest/dev_guide/copy_files_to_container.html

oc rsync /home/user/source devpod1234:/src

oc rsync devpod1234:/src /home/user/source
```

## Cluster nodes CleanUp

1. Stop all the containers you are not using in your OpenShift environment
2. Run on every node and on the master: docker rm $(docker ps -a -q)
3. Remove all images from every node and from the master. To do this, log in to each of them via ssh and remove the images using docker rmi <image id>. Target the images whose names start with the registry IP 172.30...
4. Configure GC: https://docs.openshift.com/enterprise/3.1/admin_guide/garbage_collection.html

## Tips

* internal DNS name of ose/kubernetes services
 * follows the pattern `<service-name>.<project>.svc.cluster.local`


 Object Type	 | Example 
--------------- | ----------------------------------------------
 Default        | <pod_namespace>.cluster.local 
 Services       | <service>.<pod_namespace>.svc.cluster.local
 Endpoints      | <name>.<namespace>.endpoints.cluster.local
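
For example, reaching a service through its internal DNS name (service and project names are hypothetical):

```
curl http://myservice.myproject.svc.cluster.local:8080
```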
 
 > "he only caveat to this, is that if we are using the multi-tenant OVS networking plugin, our cluster administrators will have to make visible our ci project to all other projects:" Ref: https://blog.openshift.com/improving-build-time-java-builds-openshift/
```
$ oadm pod-network make-projects-global ci
```

### Adjust Master Log Level
To adjust openshift-master log level, edit following line of `/etc/sysconfig/atomic-openshift-master` from master VM:

```
OPTIONS=--loglevel=4
```

To make changes valid, restart atomic-openshift-master service:

```
$ sudo -i systemctl restart atomic-openshift-master.service
```

On a node machine, to view filtered log information:
```
# journalctl -f -u atomic-openshift-node
```

### Enable EAP clustering/replication

Make sure that your default service account has sufficient privileges to communicate with the Kubernetes REST API.
Add the view role to serviceaccount for the project:

```
$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
```
Examine the first entry in the log file:

```
Service account has sufficient permissions to view pods in kubernetes (HTTP 200). Clustering will be available.
```

## OCP Internal VIP failover for Routers running on Infra nodes

```
oc adm ipfailover ipf-ha-router \
    --replicas=2 --watch-port=80 \
    --selector="region=infra" \
    --virtual-ips="x.0.0.x" \
    --iptables-chain="INPUT" \
    --service-account=ipfailover --create
```

## Creating a new Template
 * Common strategy for building template definitions:
  * Use oc new-app and oc expose to manually create resources application needs
  * Test to make sure resources work as expected
  * Use oc export with -o json option to export existing resource definitions
  * Merge resource definitions into template definition file
  * Add parameters
  * Test resource definition in another project
  > JSON syntax errors are not easy to identify, and OpenShift is sensitive to them, refusing JSON files that most browsers would accept as valid. Use jsonlint -s from the python-demjson package, available from EPEL, to identify syntax issues in a JSON resource definition file.
 
 * Use `oc new-app` with `-o json` option to bootstrap your new template definition file
 ```
oc new-app -o json openshift/hello-openshift > hello.json 
 ```
 
 * Converting the Resource Definition to a Template
  * Change kind attribute from List to Template
  * Make two changes to metadata object:
  * Add name attribute and value so template has name users can refer to
  * Add annotations containing description attribute for template, so users know what template is supposed to do.
  * Rename items array attribute as objects 
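
A skeleton of the converted file after those steps (name and description are hypothetical):

```
apiVersion: v1
kind: Template            # changed from List
metadata:
  name: hello-template    # added so users can refer to it
  annotations:
    description: Deploys the hello-openshift example
objects: []               # renamed from items; the exported resources go here
parameters: []
```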

## Working with Templates
 * to list all parameters from mysql-persistent template:
```
$ oc process --parameters=true -n openshift mysql-persistent
```
 * Customizing resources from a preexisting Template
 
Example:
```
$ oc export -o json \
	-n openshift mysql-ephemeral > mysql-ephemeral.json
... change the mysql-ephemeral.json file ...
$ oc process -f mysql-ephemeral.json \
	-v MYSQL_DATABASE=testdb,MYSQL_USER=testuser,MYSQL_PASSWORD= \
	> testdb.json
$ oc create -f testdb.json
```

> oc process uses the -v option to provide parameter values, while oc new-app command uses the -p option.

### Create Definition Files for Volumes

```
ssh master00-$guid
mkdir /root/pvs
```

```
export volsize="5Gi"
for volume in pv{1..25}; \
do \
cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume} 
spec:
  capacity:
    storage: ${volsize} 
  accessModes:
  - ReadWriteOnce 
  nfs: 
    path: /var/export/pvs/${volume} 
    server: 192.168.0.254 
  persistentVolumeReclaimPolicy: Recycle 
EOF
     echo "Created def file for ${volume}"; \
done
```

```
export volsize="10Gi"
for volume in pv{26..50}; do
cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume}
spec:
  capacity:
    storage: ${volsize}
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/export/pvs/${volume}
    server: 192.168.0.254
  persistentVolumeReclaimPolicy: Recycle
EOF
  echo "Created def file for ${volume}"
done
```

```
export volsize="1Gi"
for volume in pv{51..100}; do
cat << EOF > /root/pvs/${volume}.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ${volume}
spec:
  capacity:
    storage: ${volsize}
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/export/pvs/${volume}
    server: 192.168.0.254
  persistentVolumeReclaimPolicy: Recycle
EOF
  echo "Created def file for ${volume}"
done
```
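With the definition files in place, all of the PVs can be created in one pass (a simple loop over the generated files, assuming cluster-admin privileges):

```
for f in /root/pvs/pv*.yaml; do
  oc create -f "$f"
done
```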

### Patch PVs definitions

```
for pv in $(oc get pv | awk '{print $1}' | grep pv | grep -v NAME); do
  oc patch pv $pv -p "spec:
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle"
done
```
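To spot-check the result, query one of the patched volumes with a jsonpath expression (pv1 is just an example name from the loops above):

```
oc get pv pv1 -o jsonpath='{.spec.accessModes}{"\n"}'
```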

### Patch a DC on OCP 4 to set env vars from a ConfigMap

```
oc patch -n user1 dc/events -p '{ "metadata" : { "annotations" : { "app.openshift.io/connects-to" : "invoice-events,inventory-events" } }, "spec": { "template": { "spec": { "containers": [ { "name": "events", "env": [ { "name": "AMQP_HOST", "valueFrom": { "configMapKeyRef": { "name": "amq-config", "key": "service.host" } } }, { "name": "AMQP_PORT", "valueFrom": { "configMapKeyRef": { "name": "amq-config", "key": "service.port.amqp" } } } ] } ] } } } }'
```
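To confirm the variables were injected (same project and dc names as above), `oc set env` can list the resolved environment definition:

```
oc set env -n user1 dc/events --list
```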

### Patch a ConfigMap
```
oc patch configmap/myconf --patch '{"data":{"key1":"newvalue1"}}'
```
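A jsonpath spot-check to verify the new value:

```
oc get configmap/myconf -o jsonpath='{.data.key1}{"\n"}'
```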

### Verify if a given service account has a given `rolebinding`
```
oc get rolebinding -o wide -A | grep -E 'NAME|ClusterRole/view|namespace/sa_name'
```
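Alternatively, `oc adm policy who-can` lists every subject allowed to perform a given action (verb, resource, and namespace below are illustrative):

```
oc adm policy who-can list pods -n my-namespace
```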

### Using the `jq` utility to search/filter `oc get` JSON output
```bash
#!/bin/bash

oc get service --all-namespaces -o json  | jq '.items[]
 | select(
    .metadata.labels."discovery.3scale.net" == "true"
    and .metadata.annotations."discovery.3scale.net/port"
    and .metadata.annotations."discovery.3scale.net/scheme"
   )
 | {
     "service-name": .metadata.name,
     "service-namespace": .metadata.namespace,
     "labels": .metadata.labels,
     "annotations": .metadata.annotations
   } '
```
	
### Operators troubleshooting stuff

```
oc get ClusterServiceVersion --all-namespaces
oc get subs -n openshift-operators
oc api-resources
oc explain <resource name>[.json attribute]
```
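For example, `oc explain` can drill into nested attributes of any discovered resource (a generic illustration, not operator-specific):

```
oc explain deployment.spec.template.spec.containers
```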

## OpenShift Image Streams and Tags

The OpenShift community recommends using image stream resources **to refer to container images instead of using direct references to container images**. **An image stream resource points to a container image** either in the internal registry or in an external registry, and stores metadata such as available tags and image content checksums.

Having container image metadata in an image stream allows OpenShift to perform operations, such as image caching, based on this data instead of contacting a registry server every time. It also allows using either notification or polling strategies to react to image content updates.

Build configurations and deployment configurations use image stream events to perform operations such as:

* Triggering a new S2I build because the builder image was updated.
* Triggering a new deployment of pods for an application because the application container image was updated in an external registry.

The easiest way to create an image stream is by using the oc import-image command with the `--confirm` option. The following example creates an image stream named myis for the acme/awesome container image that comes from the insecure registry at `registry.acme.example.com`:

```
[user@host ~]$ oc import-image myis --confirm \
--from registry.acme.example.com:5000/acme/awesome --insecure
```

The openshift project provides a number of image streams for the benefit of all OpenShift cluster users. You can create your own image streams in the current project using either the `oc new-app` command or OpenShift templates.

An image stream resource can define multiple image stream tags. An image stream tag can either point to a different container image tag or to a different container image name. This means you can use simpler, shorter names for common images, such as S2I builder images, and use different names or registries for variations of the same image. For example, the ruby image stream from the openshift project defines the following image stream tags:

* `ruby:2.5` refers to `rhel8/ruby-25` from the Red Hat Container Catalog.
* `ruby:2.6` refers to `rhel8/ruby-26` from the Red Hat Container Catalog.
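An image stream tag can also be created or retargeted by hand with `oc tag` (source image and stream names below are illustrative):

```
oc tag registry.acme.example.com:5000/acme/awesome:latest myis:stable
```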
	
## DeploymentConfig Post-deployment (lifecycle) hook sample
```bash
oc patch dc/mysql --patch \
'{"spec":{"strategy":{"recreateParams":{"post":{"failurePolicy": "Abort","execNewPod":{"containerName":"mysql","command":["/bin/sh","-c","curl -L -s https://github.com/RedHatTraining/DO288-apps/releases/download/OCP-4.1-1/import.sh -o /tmp/import.sh&&chmod 755 /tmp/import.sh&&/tmp/import.sh"]}}}}}}'
```

## oc CLI + bash tricks
### tail logs for all pods at once
```
oc get pods -o name | xargs -L 1 oc logs [--tail 1 [-c <container-name>]]
```

### print response fields with `curl`
```
curl -s \
   -w 'HTTP code: %{http_code}\nTime: %{time_total}s\n' \
   "$SVC_URL"
```

### retrieving a POD Name dynamically

```
INGRESS_POD=$(oc -n istio-system get pods -l istio=ingressgateway -o jsonpath='{.items..metadata.name}')
oc -n istio-system exec $INGRESS_POD -- ls /etc/istio/customer-certs
```

### creating an inline JSON patch file and applying it to a resource

```
cat > gateway-patch.json << EOF
[{
  "op": "add",
  "path": "/spec/template/spec/containers/0/volumeMounts/0",
  "value": {
    "mountPath": "/etc/istio/customer-certs",
    "name": "customer-certs",
    "readOnly": true
  }
},
{
  "op": "add",
  "path": "/spec/template/spec/volumes/0",
  "value": {
  "name": "customer-certs",
    "secret": {
      "secretName": "istio-ingressgateway-customer-certs",
      "optional": true
    }
  }
}]
EOF
```

applying the patch

```
oc -n istio-system patch --type=json deploy istio-ingressgateway -p "$(cat gateway-patch.json)"
```
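A jsonpath spot-check to confirm the patch landed on the deployment:

```
oc -n istio-system get deploy istio-ingressgateway \
  -o jsonpath='{.spec.template.spec.volumes[0].name}{"\n"}'
```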

## Istio stuff

Verify that a given pod uses a unique SVID ([SPIFFE](https://spiffe.io) Verifiable Identity Document):

```
oc exec $POD_NAME -c istio-proxy -- \
 curl -s  http://127.0.0.1:15000/config_dump  | \
 jq -r .configs[5].dynamic_active_secrets[0].secret | \
 jq -r .tls_certificate.certificate_chain.inline_bytes | \
 base64 --decode | \
 openssl x509 -text -noout | \
 grep "X509v3 Subject" -A 1
  X509v3 Subject Alternative Name: critical
    URI:spiffe://cluster.local/ns/mtls/sa/POD_NAME
```

## Wait for a resource (e.g. a pod) to be ready (meet a condition)
```
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```

## Docker containers

A container is a runtime instance of a Docker image. A container behaves consistently regardless of the infrastructure it runs on: isolating the software from its environment guarantees uniform behavior even when development and staging environments differ.

| Task | Command |
| --- | --- |
| Start a container | `docker container start nginx` |
| Stop a container | `docker container stop nginx` |
| Restart a container | `docker container restart nginx` |
| Pause a container | `docker container pause nginx` |
| Unpause a container | `docker container unpause nginx` |
| Block until a container stops | `docker container wait nginx` |
| Send SIGKILL to a container | `docker container kill nginx` |
| Send another signal | `docker container kill -s HUP nginx` |
| Attach to a running container | `docker container attach nginx` |
| List running containers | `docker ps` or `docker container ls` |
| Show container logs | `docker logs infinite` |
| Follow container logs (`tail -f` style) | `docker container logs -f infinite` |
| Inspect a container | `docker container inspect infinite` |
| Inspect a specific field | `docker container inspect --format '{{ .NetworkSettings.IPAddress }}' $(docker ps -q)` |
| Show container events | `docker system events --filter container=infinite` |
| Show published ports | `docker container port infinite` |
| Show running processes | `docker container top infinite` |
| Show resource usage | `docker container stats infinite` |
| Show filesystem changes | `docker container diff infinite` |
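The lifecycle commands above assume containers named `nginx` and `infinite` already exist; one way to create them (images and names are illustrative):

```
docker run -d --name nginx nginx
docker run -d --name infinite alpine sleep infinity
```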












