Posts

Showing posts from 2026

Local Yum Repository for Oracle Linux 8

Contents: Configure Server Repositories · Repository Creation · Resync the Repository · Setup the HTTP Server · Point Servers to the Local Repository

Make sure the repositories of interest are available on the server:

# vim oel8-tmp.repo
[ol8_baseos_latest]
name=Oracle Linux $releasever BaseOS ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

Install the OEL8-related repo files with the commands below:

# dnf reinstall oraclelinux-release-el8
# dnf clean packages
# dnf install oracle-epel-release-el8.x86_64 oracle-gluster-release-el8.x86_64 oracle-spacewalk-client-release-el8.x86_64 oraclelinux-developer-release-el8.x86_64 oraclelinux-release-el8.x86_64
# dnf install oracle-epel-release-el8.x86_64 oracle-gluster-release-el8.x86_64 oracle-spacewa...
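The excerpt cuts off before the sync and HTTP steps. A minimal sketch of the remaining flow, assuming the mirror lives under /var/www/html/repos and clients reach it as repo.example.com (both hypothetical names, not from the post):

```shell
# Sync the enabled repo into the web root and serve it (run as root; shown as
# comments because these need network access and privileges):
#   dnf reposync --repoid=ol8_baseos_latest -p /var/www/html/repos --download-metadata
#   dnf install -y httpd && systemctl enable --now httpd

# Client-side repo file pointing at the local mirror (repo.example.com is a placeholder):
cat > ol8-local.repo <<'EOF'
[ol8_baseos_local]
name=Oracle Linux 8 BaseOS (local mirror)
baseurl=http://repo.example.com/repos/ol8_baseos_latest
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
EOF
```

Copy the file to /etc/yum.repos.d/ on each client, then run dnf clean all && dnf makecache to pick up the local mirror.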

Mount a host file system inside a KVM virtual machine

To mount a host file system inside a KVM virtual machine, the modern and most efficient method is virtio-fs. Alternatively, for older systems, the 9p (VirtFS) protocol can be used.

Method 1: Using virtio-fs (Recommended)

Virtio-fs provides near-native performance by leveraging shared memory between the host and the guest.

1. Configure host shared memory: open Virt-Manager, go to Memory settings, and check "Enable shared memory".
2. Add filesystem hardware: click Add Hardware > Filesystem.
   - Driver: select virtiofs.
   - Source path: the directory on your host (e.g., /home/user/shared).
   - Target path: a "mount tag" (a label such as myshare).
3. Mount inside the guest VM:
   - Create a mount point: sudo mkdir /mnt/shared
   - Mount the filesystem: sudo mount -t virtiofs myshare /mnt/shared
4. Persistent mount (optional): add this line to /etc/fstab in the guest: myshare /mnt/shared virtio...
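The same configuration can be applied without the GUI by editing the domain with virsh edit. A sketch of the relevant XML, assuming the same example path and mount tag as above (the memfd backing type is one common choice for shared memory):

```xml
<!-- inside <domain>: shared memory backing required by virtiofs -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- inside <devices>: the shared filesystem; path and tag are examples -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/home/user/shared'/>
  <target dir='myshare'/>
</filesystem>
```

After saving, restart the guest so the new device and memory backing take effect.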

How to create RHEL 7/8/9 repositories in a single RHEL 9 environment using Podman?

Environment: Red Hat Enterprise Linux Server release 7/8/9, Podman

Issue: Normally, it is not possible to create multiple RHEL repository servers for versions 7, 8, and 9 on a single RHEL server.

Resolution: There are various methods to configure RHEL repositories for multiple releases, such as 7 and 8, on a single RHEL 9 server using Podman. Below are sample plans for this request; feel free to modify them to suit your requirements.

- RHEL 7 repository: use a UBI7 container.
- RHEL 8 repository: use a UBI8 container.
- RHEL 9 repository: use a UBI9 container.

Example: create a container file

$ cat ubi7containerfile
FROM ubi7/ubi:latest
...
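The excerpt truncates the container file. A sketch of what the UBI8 variant could look like; the file name, image tag, and bind-mount path are illustrative assumptions, not the post's exact content:

```shell
# Write a hypothetical Containerfile for a RHEL 8 repo server based on UBI8.
cat > ubi8containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi:latest
RUN dnf install -y httpd createrepo_c && dnf clean all
# Repo content is bind-mounted under /var/www/html at run time
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF

# Build and run (shown as comments; they require Podman and a synced repo tree):
#   podman build -t rhel8-repo -f ubi8containerfile .
#   podman run -d --name rhel8-repo -p 8080:80 \
#     -v /srv/repos/rhel8:/var/www/html:Z rhel8-repo
```

Repeating the same pattern with UBI7 and UBI9 base images on different host ports gives one repo server per RHEL release on the same RHEL 9 host.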

Redhat Cluster 6.5 Cheat Sheet

Checking the status of the cluster:

clustat                 -> Display cluster status
clustat -m <member>     -> Display status of <member> and exit
clustat -s <service>    -> Display status of <service> and exit
clustat -l              -> Use long format for services
cman_tool status        -> Show local record of cluster status
cman_tool nodes         -> Show local record of cluster nodes
cman_tool nodes -af
ccs_tool lsnode         -> List nodes
ccs_tool lsfence        -> List fence devices
group_tool              -> Display the status of fence, dlm and gfs groups
group_tool ls           -> Display the list of groups and their membership

Resource group control commands:

clusvcadm -d <group>              -> Disable <group>
clusvcadm -e <group>              -> Enable <group>
clusvcadm -e <group> -F           -> Enable <group> according to failover domain rules
clusvcadm -e <group> -m <member>  -> Enable <group> on <member>
clusvcadm -r <group> -m <member>  -> Relocate <group> to <member>
clusvcadm -R <group>              -> Restart a group in place
clusvcadm -s <group>              -> Stop <group>

Resource group locking (for cluster shutdown / debugging):

clusvcadm -l -> Lock local resource group manager. This preven...
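As a worked example of the commands above (the service name webgrp and node names node1/node2 are hypothetical, and a running rgmanager cluster is assumed):

```
# Disable the service, re-enable it on a specific node, then relocate it:
clusvcadm -d webgrp
clusvcadm -e webgrp -m node1
clusvcadm -r webgrp -m node2
clustat -s webgrp
```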

Pacemaker Cluster on RHEL 9 by Vathsa

Implementing Pacemaker Cluster on RHEL Servers

Two RHEL 9 nodes in a cluster providing high availability to storage and the LAN network.

sudo subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms
sudo dnf install pcs pacemaker fence-agents-all -y
sudo firewall-cmd --permanent --add-service=high-availability
sudo firewall-cmd --reload
sudo passwd hacluster
sudo systemctl enable --now pcsd

## Preparing cluster configuration ##
## Basic Information ##

You can substitute the values below according to your environment:

Linux Node1: LNXSRV1
IP Node1: X.X.X.X
Linux Node2: LNXSRV2
IP Node2: Y.Y.Y.Y
Virtual Hostname: HOSTVT
Virtual IP: V.V.V.V
LAN Network Adapter: enpXsY
Shared Disk: /dev/sdA (the disk name on the node where you'll configure the cluster)
VG Name: nameVG
LV1 Name: lv1_name (ex: lv_www)
FS1 name: fs1name (ex: /var/www/)
LV2 Name: lv2_name (ex: lv_bkp)
FS2 name: fs2name (ex: /web/bkp/ for backup)
LV3 Name: lv3_name (ex: lv_mon)
FS3 name: fs3...
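The excerpt stops before the cluster is actually formed. With the placeholder names above, the next steps typically look like this (a sketch under those assumptions, not the author's exact sequence; the cluster name my_cluster is also a placeholder):

```
# Authenticate the nodes and create the cluster (run on one node):
sudo pcs host auth LNXSRV1 LNXSRV2 -u hacluster
sudo pcs cluster setup my_cluster LNXSRV1 LNXSRV2
sudo pcs cluster start --all
sudo pcs cluster enable --all

# Example virtual IP resource using the placeholder address V.V.V.V:
sudo pcs resource create vip ocf:heartbeat:IPaddr2 ip=V.V.V.V cidr_netmask=24 \
    op monitor interval=30s
```

Storage resources (LVM-activate, Filesystem) for the shared disk and logical volumes would follow the same pcs resource create pattern, grouped so they fail over together with the virtual IP.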