Friday, December 12, 2025

Steps to Deploy a 4-Node MySQL 8.4 InnoDB Cluster

 

   

Introduction

MySQL InnoDB Cluster provides native, highly available, and fault-tolerant database services. It packages three components—MySQL Shell (for configuration), MySQL Router (for connection handling), and the Group Replication plugin—into one cohesive solution. This guide outlines the setup on four Linux nodes.

Prerequisites

Before starting, ensure all four Linux nodes (node1 through node4) meet these requirements:

  • MySQL Server: MySQL 8.4 installed on the first three nodes (Refer to my blog on how to install MySQL 8.4.0 https://edson-help.blogspot.com/2025/08/step-by-step-guide-to-setup-mysql-84-on.html)
  • Network: All nodes can communicate with each other on the necessary ports (3306, 33060, and Group Replication ports 33061-33062).
  • Router Ports: Enable ports 6446 and 6447 for the read/write and read-only connections.
  • Initial Setup: Each MySQL instance must be initialized, running, and accessible via the MySQL Shell.

  • Download the required binaries for MySQL Shell & MySQL Router
Steps to Install MySQL Shell and Set Up the InnoDB Cluster

 1.     Install MySQL Shell on all VMs (Repeat for all four nodes)

              unzip MySQLShell.zip

              yum install mysql-shell-commercial-8.4.7-1.1.el8.x86_64.rpm



 2.     Add the hostname of each VM on each server for internal DNS resolution.
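For example, entries like the following can be added to /etc/hosts on every node (the IP addresses are placeholders for your own environment, and the router host name is an assumption):

        10.0.0.11   mysql-db-01
        10.0.0.12   mysql-db-02
        10.0.0.13   mysql-db-03
        10.0.0.14   mysql-router-01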


 3.      Test the connection to the MySQL databases using MySQL Shell (mysqlsh).

        mysqlsh root@mysql-db-01:3307

        mysqlsh root@mysql-db-02:3307

        mysqlsh root@mysql-db-03:3307



4. Check the instance configuration 

  dba.checkInstanceConfiguration('admin@mysql-db-01:3307')

 

5. Use MySQL Shell to fix issues (confirm required changes)

The MySQL instance will restart during this process.

 dba.configureInstance('admin@mysql-db-01:3307')

 
6. Run the instance configuration check again after fixing the issues. Repeat steps 4 through 6 for all three database nodes.



7. Create the InnoDB Cluster from the primary node

\connect admin@mysql-db-01:3307

var cluster = dba.createCluster('cdlcluster')


8. Add the remaining two nodes to the newly created cluster.

cluster.addInstance('admin@mysql-db-02:3307')

cluster.addInstance('admin@mysql-db-03:3307')


9. Check the status of the InnoDB Cluster
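A minimal example from MySQL Shell, using the cluster object created in step 7:

        cluster.status()

The output should report all three members as ONLINE, with one primary (R/W) and two secondaries (R/O).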


 

MySQL Router Deployment

1.     Enable ports 6446 and 6447 for connections from the router to the InnoDB Cluster.

Execute the commands below on all three database nodes.
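If firewalld is in use (an assumption, since the exact commands depend on your firewall), the ports named in this step can be opened like this:

        sudo firewall-cmd --permanent --add-port=6446/tcp
        sudo firewall-cmd --permanent --add-port=6447/tcp
        sudo firewall-cmd --reload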



2. Unzip the MySQLRouter binary file
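Assuming the router bundle follows the same naming pattern as the MySQL Shell bundle above (the exact file name is an assumption):

        unzip MySQLRouter.zip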


3. Install MySQL Router via rpm package
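For example, with yum against the extracted commercial RPM (the package file name is an assumption based on the 8.4.7 Shell package installed earlier):

        yum install mysql-router-commercial-8.4.7-1.1.el8.x86_64.rpm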


4. Grant access on the databases to allow connections from the Router.

Repeat these commands on all three database nodes.        

5. Configure the MySQL Router

sudo mysqlrouter --bootstrap admin@mysql-db-01:3307 --user=mysqlrouter


6.   Start the MySQL Router

 sudo systemctl start mysqlrouter


7. Test the connection with a MySQL client by connecting to port 6446 (read/write). The connection takes you to the primary node, mysql-db-01.

 

mysqlsh admin@127.0.0.1:6446

 


8. Test the connection with a client by connecting to port 6447 (read-only mode). The connection takes you to one of the secondary nodes, mysql-db-02 or mysql-db-03.
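Mirroring the previous step, and assuming the router runs on the local host:

mysqlsh admin@127.0.0.1:6447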

Tuesday, October 21, 2025

OMS Startup Errors Caused by an Expired OHS Keystore Certificate

 If your Oracle Management Service (OMS) suddenly refuses to start, and you see errors related to the WebTier or Oracle HTTP Server (OHS) failing to initialize, the culprit is often an expired default SSL certificate within the OHS wallet.

When the default self-signed certificate in the OHS keystore expires, the OHS component (which acts as the WebTier for your EM console) can no longer initialize the secure (HTTPS) environment. This failure prevents the OHS from starting, which in turn causes the entire OMS startup sequence to fail with WebTier/OHS startup errors.

This guide provides the exact steps to replace the expired self-signed certificate with a new, valid self-signed certificate to restore your EM service quickly.

Before proceeding, ensure you have:

  1. Stopped the OMS:

    emctl stop oms -all  
    
  2. Access: You must be logged in as the Oracle software owner (e.g., oracle user).

  3. Environment Variables: Your environment should be set up to use the necessary Oracle Fusion Middleware binaries (e.g., the path to orapki).

The process involves backing up the old wallet (optional but recommended), creating a new auto-login wallet, generating a new self-signed certificate within it, and copying the new wallet file to the necessary configuration location.

Note: The Oracle Wallet utility (orapki) is located in the ORACLE_HOME/oracle_common/bin directory.

Step 1: Backup the Existing Certificate  

While we are replacing it, exporting the existing certificate is a good practice for historical record and reference, though it's optional for the fix itself.

$ /oracle/software/product/oem13.4/oracle_common/bin/orapki wallet display -wallet /oracle/software/product/gc_inst/user_projects/domains/GCDomain/config/fmwconfig/components/OHS/ohs1/keystores/default

$ cd /oracle/software/product/gc_inst/user_projects/domains/GCDomain/config/fmwconfig/components/OHS/ohs1/keystores/default

$ mv cwallet.sso cwallet.sso.expired







Step 2: Create a New Empty Auto-Login Wallet

First, we will initialize a brand new wallet in the required location, ensuring it is an auto-login wallet (cwallet.sso). This means the password is not required when the OHS tries to access it at startup.

$  /oracle/software/product/oem13.4/oracle_common/bin/orapki wallet create -wallet /oracle/software/product/gc_inst/user_projects/domains/GCDomain/config/fmwconfig/components/OHS/ohs1/keystores/default -auto_login_only




Step 3: Add a New Self-Signed Certificate to the Wallet

Now, we generate the new self-signed certificate with a long validity period (e.g., 3650 days, or 10 years) and add it to the newly created wallet. Ensure the Distinguished Name (DN) matches your configuration.

/oracle/software/product/oem13.4/oracle_common/bin/orapki wallet add -wallet /oracle/software/product/gc_inst/user_projects/domains/GCDomain/config/fmwconfig/components/OHS/ohs1/keystores/default -dn 'CN=localhost,OU=GCDomain ohs1,O=FOR TESTING ONLY' -keysize 2048 -self_signed -validity 3650 -sign_alg sha256 -auto_login_only






Step 4: Copy the Wallet to the OHS Instance Directory

The OHS instance uses a copy of the wallet file (cwallet.sso) from the configuration location. We must copy the newly created auto-login wallet file to the active OHS instance directory.

 $ cp /oracle/software/product/gc_inst/user_projects/domains/GCDomain/config/fmwconfig/components/OHS/ohs1/keystores/default/cwallet.sso /oracle/software/product/gc_inst/user_projects/domains/GCDomain/config/fmwconfig/components/OHS/instances/ohs1/keystores/default
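With the new wallet in place, you can display it to confirm the new certificate's validity dates and then start the OMS again (a minimal sketch reusing the paths from the steps above):

$ /oracle/software/product/oem13.4/oracle_common/bin/orapki wallet display -wallet /oracle/software/product/gc_inst/user_projects/domains/GCDomain/config/fmwconfig/components/OHS/instances/ohs1/keystores/default

$ emctl start oms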



Reference: EM13c: WebTier Could Not Be Started Due To The Default OHS Keystore Certificate Expired (Doc ID 3070848.1)

Friday, August 1, 2025

Step by Step Guide To Setup MySQL 8.4 on OCI

 

Introduction

As organizations increasingly turn to the cloud for scalable, secure, and cost-effective infrastructure, Oracle Cloud Infrastructure (OCI) has emerged as a powerful platform for running high-performance workloads. For developers and DBAs looking to maintain full control over their database environment, deploying MySQL 8.4 on an OCI Compute instance offers the perfect blend of flexibility, performance, and cloud-native capabilities.

In this blog post, I will walk you through the process of setting up MySQL 8.4 on a virtual machine in OCI, covering installation and configuring the MySQL server.

 Pre-requisites

  • Provision an OCI compute instance running Oracle Linux 8 (x86_64).
  • Access to the Oracle support portal to download MySQL 8.4.0.
  • Prepare private and public SSH keys for the OCI virtual machine.

1. Install the ncurses-compat-libs package on the Oracle Linux 8 virtual machine. This package is required when installing MySQL Enterprise Edition from the tar package; it does not apply when installing from RPM packages.

$ sudo yum install -y ncurses-compat-libs


2. Add the environment variables below to the opc user's profile.

 $ vi /home/opc/.bashrc

export PATH=$PATH:/mysql/mysql-latest/bin

export MYSQL_PS1="\\u on \\h>\\_"


       3. Create a new user and group for the MySQL service and add the mysql group to the opc user (see the command sketch after the useradd step). Note that login as the mysql user is disabled for security reasons.

$ sudo groupadd mysql

$ sudo useradd -r -g mysql -s /bin/false mysql
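One way to add the mysql group to the opc user, as mentioned above:

$ sudo usermod -aG mysql opc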


        4. Create the new directory structure:

           $ sudo mkdir -p /mysql/etc/6401 /mysql/data/6401 /mysql/log /mysql/temp /mysql/binlog

          $ cd /mysql/binlog

          $ sudo mkdir -p replicate/6401 relay/6401

          $ sudo chown -R mysql:mysql /mysql

          $ sudo chmod -R 750 /mysql

      

      5. Validate that the operating system architecture is x86_64.
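A quick way to confirm the architecture:

           $ uname -m

The command should return x86_64.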

  

      6. Download the MySQL tar binary from the Oracle support portal and upload it to an OCI Object Storage bucket. Then use the wget command with the object URL of the uploaded file to download the binary onto the virtual machine.

        $ sudo wget https://<object-storage-url>/MySQLp36562194_840_Linux-x86-64.zip



         7. Assign the necessary permissions on the newly downloaded binary.

             $ sudo chown mysql:mysql MySQLp36562194_840_Linux-x86-64.zip
 
  8. Unzip and extract the binary file using the commands below.
         $ sudo unzip MySQLp36562194_840_Linux-x86-64.zip
         $ sudo tar -xvf mysql-commercial-8.4.0-linux-glibc2.28-x86_64.tar.xz


        9. Create a symbolic link to mysql binary installation

             $ sudo ln -s  mysql-commercial-8.4.0-linux-glibc2.28-x86_64 mysql8.4.0 

      

      10. Create the MySQL database configuration file.

               $ sudo vi /mysql/etc/my.6401.cnf      

Below is the content of the MySQL configuration file.
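Every value below is an assumption derived from the port (6401) and the directory layout created in the earlier steps; treat it as a minimal sketch and adapt it to your environment:

              [mysqld]
              port=6401
              basedir=/mysql/mysql8.4.0
              datadir=/mysql/data/6401
              socket=/mysql/temp/mysql.6401.sock
              tmpdir=/mysql/temp
              log-error=/mysql/log/mysqld.6401.log
              log-bin=/mysql/binlog/replicate/6401/binlog
              relay-log=/mysql/binlog/relay/6401/relay
              user=mysql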

              11. Initialize the MySQL instance

              $ sudo /mysql/mysql8.4.0/bin/mysqld --defaults-file=/mysql/etc/my.6401.cnf  \

               --initialize --user=mysql  --basedir=/mysql/mysql8.4.0  --datadir=/mysql/data/6401

    

          12. Start the MySQL Instance

                 $ sudo /mysql/mysql8.4.0/bin/mysqld --defaults-file=/mysql/etc/my.6401.cnf --user=mysql &

          13. Verify that the mysqld  process is running

                        $ ps -ef|grep mysqld

 $ netstat -an | grep 6401


 

             14. Retrieve the root password for the first login. The temporary password for root is located in the log file.

                $ cat /mysql/log/mysqld.6401.log

                $ grep -i 'temporary password' /mysql/log/mysqld.6401.log


        15. Test the connection to the newly installed MySQL database.

              $ sudo /mysql/mysql8.4.0/bin/mysql -uroot -p -h 127.0.0.1 -P6401


      16. Change the root password using the command below and check the status of the database.

mysql> SET PASSWORD='xxxxxxx';

mysql> status;


 

 






Thursday, July 24, 2025

How to Setup MySQL InnoDB Cluster on OCI Using Terraform

 

Introduction

Setting up a MySQL InnoDB Cluster on Oracle Cloud Infrastructure (OCI) using Terraform allows you to automate the deployment of a highly available, self-healing MySQL environment. The InnoDB Cluster offers features such as native high availability, automatic failover, and built-in group replication, making it ideal for production-grade database deployments.

Using Terraform, Oracle's recommended Infrastructure as Code (IaC) tool, simplifies and standardizes the provisioning of cloud resources. It ensures repeatable, version-controlled, and scalable deployments.

In this blog post, I will provide a step-by-step guide on how to set up a three-node MySQL InnoDB Cluster on Oracle Cloud Infrastructure (OCI). This setup includes a Bastion host where the MySQL Router is deployed to load balance application connections to the clustered database.

Pre-requisites

The following requirements are needed to set up the InnoDB Cluster:

  1. Create your API private and public keys.
  2. Create your SSH keys.
  3. Access to Oracle Cloud Infrastructure resources.

 

Step 1 Add your API key using the steps below.

  1. Log in to OCI.
  2. From your user profile -> Click on API Keys -> Add Public Key.




Step 2 Terraform Installation

   1.     Download the software using the command below.

          $ wget  https://releases.hashicorp.com/terraform/0.13.4/terraform_0.13.4_linux_amd64.zip


        2. Unzip the file and validate the terraform version

              $ unzip terraform_0.13.4_linux_amd64.zip

              $ terraform version
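If terraform is not found after unzipping, you can move the binary to a directory on your PATH (the target directory is an assumption):

              $ sudo mv terraform /usr/local/bin/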



 3.     Get the Terraform code

$ git clone https://github.com/lefred/oci-mysql-idc.git



4. Create the terraform.tfvars from the template.

   $ cd oci-mysql-idc

   $ cp terraform.tfvars.template terraform.tfvars

   Replace the following based on your OCI environment and requirements:

-- tenancy_ocid, user_ocid, fingerprint, private_key_path, region

-- compartment_ocid, ssh_authorized_keys_path, ssh_private_key_path

-- clusteradmin_password, cluster_name, number_of_nodes & node_shape

Below is my terraform.tfvars file after updating it with the details of my OCI environment.
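A redacted sketch of its shape is shown below; only the variable names come from the template, and every value is a placeholder or an assumption (region, node_shape, and the OCIDs in particular):

   tenancy_ocid             = "ocid1.tenancy.oc1..xxxxxxxx"
   user_ocid                = "ocid1.user.oc1..xxxxxxxx"
   fingerprint              = "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
   private_key_path         = "~/.oci/oci_api_key.pem"
   region                   = "us-ashburn-1"
   compartment_ocid         = "ocid1.compartment.oc1..xxxxxxxx"
   ssh_authorized_keys_path = "~/.ssh/id_rsa.pub"
   ssh_private_key_path     = "~/.ssh/id_rsa"
   clusteradmin_password    = "********"
   cluster_name             = "mycluster"
   number_of_nodes          = 3
   node_shape               = "VM.Standard.E4.Flex"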

 
5.     Terraform Deployment

Initialize the environment with the command below from the oci-mysql-idc directory.

     $ terraform init  


          6.     Create a terraform plan with the command below 

                $ terraform plan 


   7. Execute the terraform apply command to create the InnoDB cluster. 

       $ terraform apply 


         Confirm by responding with “yes” to provision the cluster


    After successful provisioning of the MySQL InnoDB Cluster, you will see the IP addresses of the cluster nodes and the Bastion/MySQL Router server.




Test connectivity to the Bastion and confirm that the router is running. 
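One way to do this, assuming the default opc user and the SSH key from the prerequisites (the Bastion public IP comes from the Terraform output):

$ ssh -i ~/.ssh/id_rsa opc@<bastion-public-ip>

$ ps -ef | grep mysqlrouter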



    Connect to the MySQL InnoDB Cluster 
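For example, from the Bastion host through the router's read/write port 6446 (the clusteradmin account name is an assumption based on the clusteradmin_password variable above):

$ mysqlsh clusteradmin@127.0.0.1:6446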


   Switching from the JavaScript/Python interface to the SQL interface
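In MySQL Shell this is done with the built-in mode commands:

 \sql

(\js or \py switches back to the JavaScript or Python interface.)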


    Check InnoDB Cluster Status to confirm the Primary and Secondary Database Instances
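A minimal example from the JavaScript interface, using the AdminAPI:

 \js

 var cluster = dba.getCluster()

 cluster.status()

The output identifies the primary (R/W) instance and the secondary (R/O) instances.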


8. To delete all the resources that were provisioned, execute the command below.

$ terraform destroy  




