PolarSPARC

Introduction to Vert.x - Part 4


Bhaskar S 05/26/2019


Overview

In Part-3 of this series, we explored the 3 types of messaging patterns using the EventBus, which is the core communication backbone in Vert.x.

In this part, we will continue with examples around the distributed cluster mode of the EventBus using Hazelcast, which allows Verticle(s) running in different JVMs to communicate with each other.

Hands-on with Vert.x - 4

Vert.x uses a pluggable architecture that allows a specific cluster manager implementation to be plugged in for distributed computing; Hazelcast is the default cluster manager in Vert.x. A cluster manager is used in Vert.x for the discovery and group membership of the cluster nodes, for maintaining the cluster-wide topic subscriber lists (so the EventBus knows which nodes have consumers registered on which addresses), and for providing distributed data structures such as maps, locks, and counters. Note that the cluster manager does not handle the EventBus inter-node transport itself; that is done by Vert.x directly over TCP connections between the nodes.
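
Since the cluster manager also backs the Vert.x shared data API, a clustered Vertx instance can, for example, store entries in a cluster-wide asynchronous map that is visible to all the members. The following is a minimal sketch to illustrate this (it is not one of this article's samples; the class name SharedMapSketch and the map name sample.map are purely illustrative, and the creation of the clustered Vertx instance is explained in detail with Sample08 below):

SharedMapSketch.java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class SharedMapSketch {
    public static void main(String[] args) {
        // Use the default Hazelcast cluster manager
        VertxOptions options = new VertxOptions().setClusterManager(new HazelcastClusterManager());

        Vertx.clusteredVertx(options, cluster -> {
            if (cluster.succeeded()) {
                // The cluster manager backs the shared data API - here a cluster-wide async map
                cluster.result().sharedData().<String, String>getClusterWideMap("sample.map", res -> {
                    if (res.succeeded()) {
                        // Any member of the cluster can read this entry back from the same map
                        res.result().put("greeting", "Hello from this node", ar ->
                            System.out.println("Put to cluster-wide map succeeded: " + ar.succeeded()));
                    } else {
                        res.cause().printStackTrace();
                    }
                });
            } else {
                cluster.cause().printStackTrace();
            }
        });
    }
}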

The following is the modified listing of the Maven project file pom.xml that includes the additional library vertx-hazelcast as a dependency:

pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  
  <groupId>com.polarsparc</groupId>
  <artifactId>Vertx</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
  <name>Vertx</name>

  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.3</version>
          <configuration>
            <fork>true</fork>
            <meminitial>128m</meminitial>
            <maxmem>512m</maxmem>
            <source>1.8</source>
            <target>1.8</target>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>

  <dependencies>
    <dependency>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-core</artifactId>
        <version>3.7.0</version>
    </dependency>  
    <dependency>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-config</artifactId>
        <version>3.7.0</version>
    </dependency>
    <dependency>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-hazelcast</artifactId>
        <version>3.7.0</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Hazelcast is a popular in-memory data grid that distributes and shards data across a cluster of nodes in a network. It can use either multicast or TCP to discover the nodes (or members) of a cluster. For our example, we will use TCP for member discovery. To configure Hazelcast, one must provide a configuration file in the XML format.

The following is the listing for the configuration file my-cluster.xml:

my-cluster.xml
<?xml version="1.0" encoding="UTF-8"?>

<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.10.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <properties>
        <property name="hazelcast.wait.seconds.before.join">0</property>
    </properties>
       
    <group>
        <name>polarsparc</name>
    </group>
   
    <network>
        <join>
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <interface>127.0.0.1</interface>
            </tcp-ip>
        </join>
        <interfaces enabled="true">
            <interface>127.0.0.1</interface>
        </interfaces>
    </network>
   
</hazelcast>

NOTE :: The path to the above configuration file must be specified via the system property vertx.hazelcast.config (as is done later in the run.sh script).

Let us explain and understand the configuration parameters listed above.

The value of the property hazelcast.wait.seconds.before.join indicates how long (in seconds) a member should wait before attempting to join the cluster; it is set to 0 here so that the members join immediately without waiting.

The group <name> element specifies the Hazelcast cluster name.

Hazelcast, by default, uses multicast networking to discover the nodes in a cluster. In our case, we have disabled multicast and enabled TCP-based discovery on the interface 127.0.0.1 for cluster member discovery.

The following is the listing for the message consumer verticle Sample08.java:

Sample08.java
/*
 * Topic:  Introduction to Vert.x
 * 
 * Name:   Sample 8
 * 
 * Author: Bhaskar S
 * 
 * URL:    https://www.polarsparc.com
 */

package com.polarsparc.Vertx;

import java.util.logging.Level;
import java.util.logging.Logger;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class Sample08 {
    private static Logger LOGGER = Logger.getLogger(Sample08.class.getName());
    
    private static String ADDRESS = "msg.address";
    
    // Consumer verticle
    private static class MsgConsumerVerticle extends AbstractVerticle {
        String name;
        
        MsgConsumerVerticle(String str) {
            this.name = str;
        }
        
        @Override
        public void start() {
            vertx.eventBus().consumer(ADDRESS, res -> {
                 LOGGER.log(Level.INFO, String.format("[%s] :: Received message - %s", name, res.body()));
            });
        }
    }
    
    public static void main(String[] args) {
        if (args.length != 1) {
            System.out.printf("Usage: java %s <consumer-name>\n", Sample08.class.getName());
            System.exit(1);
        }
        
        ClusterManager manager = new HazelcastClusterManager();
        
        VertxOptions options = new VertxOptions().setClusterManager(manager);
        
        Vertx.clusteredVertx(options, cluster -> {
            if (cluster.succeeded()) {
                cluster.result().deployVerticle(new MsgConsumerVerticle(args[0]), res -> {
                    if (res.succeeded()) {
                        LOGGER.log(Level.INFO, "Deployed consumer <" + args[0] + "> with instance ID: " + res.result());
                    } else {
                        res.cause().printStackTrace();
                    }
                });
            } else {
                cluster.cause().printStackTrace();
            }
        });
    }
}

Let us explain and understand the code from Sample08 listed above.

The interface io.vertx.core.spi.cluster.ClusterManager must be implemented by a cluster provider so that it can be plugged into Vert.x and used as the cluster manager.

The class io.vertx.spi.cluster.hazelcast.HazelcastClusterManager is the default cluster manager implementation in Vert.x, and is backed by Hazelcast.

An instance of the class io.vertx.core.VertxOptions allows one to programmatically configure Vert.x.

The call to the method setClusterManager() on the instance of VertxOptions allows one to programmatically set the cluster manager. In our example, we are using an instance of the Hazelcast cluster manager.
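
As an aside, if one prefers not to depend on the external XML file and the vertx.hazelcast.config system property, the HazelcastClusterManager can also be constructed with a programmatic com.hazelcast.config.Config instance. The following is a minimal sketch (an alternative approach, not used in this article's samples) that mirrors the settings from my-cluster.xml; these statements would take the place of the ClusterManager creation in the main() method of Sample08:

import com.hazelcast.config.Config;

import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

// ...

// Equivalent of the <properties> section in my-cluster.xml
Config config = new Config();
config.setProperty("hazelcast.wait.seconds.before.join", "0");

// Equivalent of the <group> section - the cluster name
config.getGroupConfig().setName("polarsparc");

// Equivalent of the <network> section - disable multicast, enable TCP
// discovery on the interface 127.0.0.1
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true).addMember("127.0.0.1");
config.getNetworkConfig().getInterfaces().setEnabled(true).addInterface("127.0.0.1");

// Pass the programmatic configuration to the Hazelcast cluster manager
ClusterManager manager = new HazelcastClusterManager(config);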

The call to the static method clusteredVertx() on Vertx takes two arguments - an instance of type VertxOptions and a callback handler of type io.vertx.core.Handler<AsyncResult<Vertx>>, where io.vertx.core.AsyncResult<Vertx> wraps the clustered Vertx instance. The callback handler is invoked once the clustered instance of Vertx has been created (or its creation has failed).
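
To make the types in the above description concrete, the lambda passed to clusteredVertx() in Sample08 could equivalently be written as an explicit handler, as in the following sketch (assuming options is the VertxOptions instance created earlier):

import io.vertx.core.AsyncResult;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;

// ...

Vertx.clusteredVertx(options, new Handler<AsyncResult<Vertx>>() {
    @Override
    public void handle(AsyncResult<Vertx> cluster) {
        if (cluster.succeeded()) {
            // cluster.result() is the clustered Vertx instance, ready for deploying verticles
            Vertx vertx = cluster.result();
        } else {
            cluster.cause().printStackTrace();
        }
    }
});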

Once a clustered version of Vertx is created, one can use it to deploy verticle instance(s) just like a regular Vertx instance. In this example, we deploy an instance of the message consumer.

The following is the listing for the message producer verticle Sample09.java:

Sample09.java
/*
 * Topic:  Introduction to Vert.x
 * 
 * Name:   Sample 9
 * 
 * Author: Bhaskar S
 * 
 * URL:    https://www.polarsparc.com
 */

package com.polarsparc.Vertx;

import java.util.logging.Level;
import java.util.logging.Logger;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.spi.cluster.ClusterManager;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class Sample09 {
    private static Logger LOGGER = Logger.getLogger(Sample09.class.getName());
    
    private static String ADDRESS = "msg.address";
    private static String MESSAGE = "Vert.x is Reactive";
    
    // Publisher verticle
    private static class MsgPublisherVerticle extends AbstractVerticle {
        @Override
        public void start(Future<Void> fut) {
            vertx.eventBus().publish(ADDRESS, String.format("[1] => %s", MESSAGE));
            vertx.eventBus().publish(ADDRESS, String.format("[2] => %s", MESSAGE));
            vertx.eventBus().publish(ADDRESS, String.format("[3] => %s", MESSAGE));
            
            vertx.eventBus().send(ADDRESS, String.format("[4] -> %s", MESSAGE));
            vertx.eventBus().send(ADDRESS, String.format("[5] -> %s", MESSAGE));
            
            LOGGER.log(Level.INFO, String.format("Messages published to address %s", ADDRESS));
            
            fut.complete();
        }
    }
    
    public static void main(String[] args) {
        ClusterManager manager = new HazelcastClusterManager();
        
        VertxOptions options = new VertxOptions().setClusterManager(manager);
        
        Vertx.clusteredVertx(options, cluster -> {
            if (cluster.succeeded()) {
                cluster.result().deployVerticle(new MsgPublisherVerticle(), res -> {
                    if (res.succeeded()) {
                        LOGGER.log(Level.INFO, "Deployed publisher instance ID: " + res.result());
                    } else {
                        res.cause().printStackTrace();
                    }
               });
            } else {
                cluster.cause().printStackTrace();
            }
        });
    }
}

The code from Sample09 listed above is similar to the code from Sample08 and hence needs no further explanation. Note only that the publisher verticle dispatches the first three messages using publish() and the last two using send().

To demonstrate the distributed clustering feature in Vert.x, we will execute two instances of the consumer class Sample08 and one instance of the publisher class Sample09.

To make it easy to launch the Java programs, we create a shell script called run.sh, as shown below:

run.sh
#!/bin/sh

JARS=""

for f in `ls ./lib/jackson*`
do
    JARS=$JARS:$f
done

for f in `ls ./lib/netty*`
do
    JARS=$JARS:$f
done

JARS=$JARS:./lib/vertx-core-3.7.0.jar:./lib/vertx-config-3.7.0.jar:./lib/hazelcast-3.10.5.jar:./lib/vertx-hazelcast-3.7.0.jar

echo $JARS

java -Dvertx.hazelcast.config=./resources/my-cluster.xml -cp ./classes:./resources:$JARS com.polarsparc.Vertx.$1 $2

Open a new Terminal window (referred to as Terminal-C1) and execute the following command:

./bin/run.sh Sample08 C1

The following would be the typical output:

Output.1

:./lib/jackson-annotations-2.9.0.jar:./lib/jackson-core-2.9.8.jar:./lib/jackson-databind-2.9.8.jar:./lib/netty-buffer-4.1.30.Final.jar:./lib/netty-codec-4.1.30.Final.jar:./lib/netty-codec-dns-4.1.30.Final.jar:./lib/netty-codec-http2-4.1.30.Final.jar:./lib/netty-codec-http-4.1.30.Final.jar:./lib/netty-codec-socks-4.1.30.Final.jar:./lib/netty-common-4.1.30.Final.jar:./lib/netty-handler-4.1.30.Final.jar:./lib/netty-handler-proxy-4.1.30.Final.jar:./lib/netty-resolver-4.1.30.Final.jar:./lib/netty-resolver-dns-4.1.30.Final.jar:./lib/netty-transport-4.1.30.Final.jar:./lib/vertx-core-3.7.0.jar:./lib/vertx-config-3.7.0.jar:./lib/hazelcast-3.10.5.jar:./lib/vertx-hazelcast-3.7.0.jar
May 26, 2019 7:30:24 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [polarsparc] [3.10.5] Interfaces is enabled, trying to pick one address matching to one of: [127.0.0.1]
May 26, 2019 7:30:24 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [polarsparc] [3.10.5] Picked [127.0.0.1]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
May 26, 2019 7:30:24 PM com.hazelcast.system
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Hazelcast 3.10.5 (20180913 - 6ffa2ee) starting at [127.0.0.1]:5701
May 26, 2019 7:30:24 PM com.hazelcast.system
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Copyright (c) 2008-2018, Hazelcast, Inc. All Rights Reserved.
May 26, 2019 7:30:24 PM com.hazelcast.system
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Configured Hazelcast Serialization version: 1
May 26, 2019 7:30:24 PM com.hazelcast.instance.Node
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
May 26, 2019 7:30:24 PM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Backpressure is disabled
May 26, 2019 7:30:24 PM com.hazelcast.spi.impl.operationservice.impl.InboundResponseHandlerSupplier
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Running with 2 response threads
May 26, 2019 7:30:25 PM com.hazelcast.instance.Node
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Creating TcpIpJoiner
May 26, 2019 7:30:25 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Starting 16 partition threads and 9 generic threads (1 dedicated for priority tasks)
May 26, 2019 7:30:25 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
May 26, 2019 7:30:25 PM com.hazelcast.core.LifecycleService
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] [127.0.0.1]:5701 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/home/polarsparc/Vertx/lib/hazelcast-3.10.5.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
May 26, 2019 7:30:25 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Connecting to /127.0.0.1:5703, timeout: 0, bind-any: true
May 26, 2019 7:30:25 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Could not connect to: /127.0.0.1:5703. Reason: SocketException[Connection refused to address /127.0.0.1:5703]
May 26, 2019 7:30:25 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Connecting to /127.0.0.1:5702, timeout: 0, bind-any: true
May 26, 2019 7:30:25 PM com.hazelcast.cluster.impl.TcpIpJoiner
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] [127.0.0.1]:5703 is added to the blacklist.
May 26, 2019 7:30:25 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Could not connect to: /127.0.0.1:5702. Reason: SocketException[Connection refused to address /127.0.0.1:5702]
May 26, 2019 7:30:25 PM com.hazelcast.cluster.impl.TcpIpJoiner
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] [127.0.0.1]:5702 is added to the blacklist.
May 26, 2019 7:30:26 PM com.hazelcast.system
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Cluster version set to 3.10
May 26, 2019 7:30:26 PM com.hazelcast.internal.cluster.ClusterService
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] 

Members {size:1, ver:1} [
    Member [127.0.0.1]:5701 - 980a27d1-643b-49e1-b147-13643622374e this
]

May 26, 2019 7:30:26 PM com.hazelcast.core.LifecycleService
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] [127.0.0.1]:5701 is STARTED
May 26, 2019 7:30:26 PM com.hazelcast.internal.partition.impl.PartitionStateManager
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Initializing cluster partition table arrangement...
May 26, 2019 7:30:26 PM com.polarsparc.Vertx.Sample08 lambda$1
INFO: Deployed consumer <C1> with instance ID: 51066155-f4fc-46d4-9baf-bc5e685516e3

To start another consumer, open another new Terminal window (referred to as Terminal-C2) and execute the following command:

./bin/run.sh Sample08 C2

The following would be the typical output:

Output.2

:./lib/jackson-annotations-2.9.0.jar:./lib/jackson-core-2.9.8.jar:./lib/jackson-databind-2.9.8.jar:./lib/netty-buffer-4.1.30.Final.jar:./lib/netty-codec-4.1.30.Final.jar:./lib/netty-codec-dns-4.1.30.Final.jar:./lib/netty-codec-http2-4.1.30.Final.jar:./lib/netty-codec-http-4.1.30.Final.jar:./lib/netty-codec-socks-4.1.30.Final.jar:./lib/netty-common-4.1.30.Final.jar:./lib/netty-handler-4.1.30.Final.jar:./lib/netty-handler-proxy-4.1.30.Final.jar:./lib/netty-resolver-4.1.30.Final.jar:./lib/netty-resolver-dns-4.1.30.Final.jar:./lib/netty-transport-4.1.30.Final.jar:./lib/vertx-core-3.7.0.jar:./lib/vertx-config-3.7.0.jar:./lib/hazelcast-3.10.5.jar:./lib/vertx-hazelcast-3.7.0.jar
May 26, 2019 7:33:30 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [polarsparc] [3.10.5] Interfaces is enabled, trying to pick one address matching to one of: [127.0.0.1]
May 26, 2019 7:33:30 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [polarsparc] [3.10.5] Picked [127.0.0.1]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
May 26, 2019 7:33:30 PM com.hazelcast.system
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Hazelcast 3.10.5 (20180913 - 6ffa2ee) starting at [127.0.0.1]:5702
May 26, 2019 7:33:30 PM com.hazelcast.system
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Copyright (c) 2008-2018, Hazelcast, Inc. All Rights Reserved.
May 26, 2019 7:33:30 PM com.hazelcast.system
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Configured Hazelcast Serialization version: 1
May 26, 2019 7:33:30 PM com.hazelcast.instance.Node
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
May 26, 2019 7:33:31 PM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Backpressure is disabled
May 26, 2019 7:33:31 PM com.hazelcast.spi.impl.operationservice.impl.InboundResponseHandlerSupplier
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Running with 2 response threads
May 26, 2019 7:33:31 PM com.hazelcast.instance.Node
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Creating TcpIpJoiner
May 26, 2019 7:33:31 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Starting 16 partition threads and 9 generic threads (1 dedicated for priority tasks)
May 26, 2019 7:33:31 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
May 26, 2019 7:33:31 PM com.hazelcast.core.LifecycleService
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] [127.0.0.1]:5702 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/home/polarsparc/Vertx/lib/hazelcast-3.10.5.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
May 26, 2019 7:33:31 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Connecting to /127.0.0.1:5703, timeout: 0, bind-any: true
May 26, 2019 7:33:31 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Could not connect to: /127.0.0.1:5703. Reason: SocketException[Connection refused to address /127.0.0.1:5703]
May 26, 2019 7:33:31 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Connecting to /127.0.0.1:5701, timeout: 0, bind-any: true
May 26, 2019 7:33:31 PM com.hazelcast.cluster.impl.TcpIpJoiner
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] [127.0.0.1]:5703 is added to the blacklist.
May 26, 2019 7:33:31 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Established socket connection between /127.0.0.1:48609 and /127.0.0.1:5701
May 26, 2019 7:33:32 PM com.hazelcast.system
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Cluster version set to 3.10
May 26, 2019 7:33:32 PM com.hazelcast.internal.cluster.ClusterService
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] 

Members {size:2, ver:2} [
    Member [127.0.0.1]:5701 - 980a27d1-643b-49e1-b147-13643622374e
    Member [127.0.0.1]:5702 - ea52a237-be4d-4726-a02d-c1aed93e706d this
]

May 26, 2019 7:33:33 PM com.hazelcast.core.LifecycleService
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] [127.0.0.1]:5702 is STARTED
May 26, 2019 7:33:33 PM com.polarsparc.Vertx.Sample08 lambda$1
INFO: Deployed consumer <C2> with instance ID: c7ea9c19-ccde-40d1-b5c8-d1025976486e

From Output.2 above, we can see that the two consumer instances have discovered each other and formed a Hazelcast cluster of two members.

To start the publisher, open yet another new Terminal window and execute the following command:

./bin/run.sh Sample09

The following would be the typical output:

Output.3

:./lib/jackson-annotations-2.9.0.jar:./lib/jackson-core-2.9.8.jar:./lib/jackson-databind-2.9.8.jar:./lib/netty-buffer-4.1.30.Final.jar:./lib/netty-codec-4.1.30.Final.jar:./lib/netty-codec-dns-4.1.30.Final.jar:./lib/netty-codec-http2-4.1.30.Final.jar:./lib/netty-codec-http-4.1.30.Final.jar:./lib/netty-codec-socks-4.1.30.Final.jar:./lib/netty-common-4.1.30.Final.jar:./lib/netty-handler-4.1.30.Final.jar:./lib/netty-handler-proxy-4.1.30.Final.jar:./lib/netty-resolver-4.1.30.Final.jar:./lib/netty-resolver-dns-4.1.30.Final.jar:./lib/netty-transport-4.1.30.Final.jar:./lib/vertx-core-3.7.0.jar:./lib/vertx-config-3.7.0.jar:./lib/hazelcast-3.10.5.jar:./lib/vertx-hazelcast-3.7.0.jar
May 26, 2019 7:38:47 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [polarsparc] [3.10.5] Interfaces is enabled, trying to pick one address matching to one of: [127.0.0.1]
May 26, 2019 7:38:47 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [polarsparc] [3.10.5] Picked [127.0.0.1]:5703, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5703], bind any local is true
May 26, 2019 7:38:47 PM com.hazelcast.system
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Hazelcast 3.10.5 (20180913 - 6ffa2ee) starting at [127.0.0.1]:5703
May 26, 2019 7:38:47 PM com.hazelcast.system
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Copyright (c) 2008-2018, Hazelcast, Inc. All Rights Reserved.
May 26, 2019 7:38:47 PM com.hazelcast.system
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Configured Hazelcast Serialization version: 1
May 26, 2019 7:38:47 PM com.hazelcast.instance.Node
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] A non-empty group password is configured for the Hazelcast member. Starting with Hazelcast version 3.8.2, members with the same group name, but with different group passwords (that do not use authentication) form a cluster. The group password configuration will be removed completely in a future release.
May 26, 2019 7:38:47 PM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Backpressure is disabled
May 26, 2019 7:38:47 PM com.hazelcast.spi.impl.operationservice.impl.InboundResponseHandlerSupplier
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Running with 2 response threads
May 26, 2019 7:38:47 PM com.hazelcast.instance.Node
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Creating TcpIpJoiner
May 26, 2019 7:38:47 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Starting 16 partition threads and 9 generic threads (1 dedicated for priority tasks)
May 26, 2019 7:38:47 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
May 26, 2019 7:38:47 PM com.hazelcast.core.LifecycleService
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] [127.0.0.1]:5703 is STARTING
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/home/polarsparc/Vertx/lib/hazelcast-3.10.5.jar) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Connecting to /127.0.0.1:5702, timeout: 0, bind-any: true
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpConnector
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Connecting to /127.0.0.1:5701, timeout: 0, bind-any: true
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Established socket connection between /127.0.0.1:41305 and /127.0.0.1:5701
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Established socket connection between /127.0.0.1:40071 and /127.0.0.1:5702
May 26, 2019 7:38:48 PM com.hazelcast.system
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] Cluster version set to 3.10
May 26, 2019 7:38:48 PM com.hazelcast.internal.cluster.ClusterService
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] 

Members {size:3, ver:3} [
    Member [127.0.0.1]:5701 - 980a27d1-643b-49e1-b147-13643622374e
    Member [127.0.0.1]:5702 - ea52a237-be4d-4726-a02d-c1aed93e706d
    Member [127.0.0.1]:5703 - e15d985b-5338-4805-acdb-1a51199897a3 this
]

May 26, 2019 7:38:49 PM com.hazelcast.core.LifecycleService
INFO: [127.0.0.1]:5703 [polarsparc] [3.10.5] [127.0.0.1]:5703 is STARTED
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample09$MsgPublisherVerticle start
INFO: Messages published to address msg.address
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample09 lambda$1
INFO: Deployed publisher instance ID: 99a0db89-4e11-4fe8-acbe-b9f01e413aec

Moving to Terminal-C1, we should typically see the following additional output:

Output.4

May 26, 2019 7:33:32 PM com.hazelcast.internal.partition.impl.MigrationManager
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Re-partitioning cluster data... Migration queue size: 271
May 26, 2019 7:33:34 PM com.hazelcast.internal.partition.impl.MigrationThread
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] All migration tasks have been completed, queues are empty.
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Accepting socket connection from /127.0.0.1:41305
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Established socket connection between /127.0.0.1:5701 and /127.0.0.1:41305
May 26, 2019 7:38:48 PM com.hazelcast.internal.cluster.ClusterService
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] 

Members {size:3, ver:3} [
    Member [127.0.0.1]:5701 - 980a27d1-643b-49e1-b147-13643622374e this
    Member [127.0.0.1]:5702 - ea52a237-be4d-4726-a02d-c1aed93e706d
    Member [127.0.0.1]:5703 - e15d985b-5338-4805-acdb-1a51199897a3
]

May 26, 2019 7:38:48 PM com.hazelcast.internal.partition.impl.MigrationManager
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] Re-partitioning cluster data... Migration queue size: 271
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C1] :: Received message - [1] => Vert.x is Reactive
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C1] :: Received message - [2] => Vert.x is Reactive
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C1] :: Received message - [3] => Vert.x is Reactive
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C1] :: Received message - [4] -> Vert.x is Reactive
May 26, 2019 7:38:50 PM com.hazelcast.internal.partition.impl.MigrationThread
INFO: [127.0.0.1]:5701 [polarsparc] [3.10.5] All migration tasks have been completed, queues are empty.

Similarly, moving to Terminal-C2, we should typically see the following additional output:

Output.5

May 26, 2019 7:35:34 PM com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Invocations:1 timeouts:1 backup-timeouts:0
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpAcceptor
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Accepting socket connection from /127.0.0.1:40071
May 26, 2019 7:38:47 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] Established socket connection between /127.0.0.1:5702 and /127.0.0.1:40071
May 26, 2019 7:38:48 PM com.hazelcast.internal.cluster.ClusterService
INFO: [127.0.0.1]:5702 [polarsparc] [3.10.5] 

Members {size:3, ver:3} [
    Member [127.0.0.1]:5701 - 980a27d1-643b-49e1-b147-13643622374e
    Member [127.0.0.1]:5702 - ea52a237-be4d-4726-a02d-c1aed93e706d this
    Member [127.0.0.1]:5703 - e15d985b-5338-4805-acdb-1a51199897a3
]

May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C2] :: Received message - [1] => Vert.x is Reactive
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C2] :: Received message - [2] => Vert.x is Reactive
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C2] :: Received message - [3] => Vert.x is Reactive
May 26, 2019 7:38:50 PM com.polarsparc.Vertx.Sample08$MsgConsumerVerticle lambda$0
INFO: [C2] :: Received message - [5] -> Vert.x is Reactive

An interesting observation from Output.4 and Output.5 is that all the consumers receive the messages dispatched via the publish() method (messages [1] through [3]), while each message dispatched via the send() method is received by only one of the consumers (message [4] by C1 and message [5] by C2).
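
A related point: since a message dispatched via send() is delivered to only one consumer, the sender can also pass a reply handler to receive a response from whichever consumer handled the message. The following is a minimal sketch of this request-reply style over the same clustered EventBus (it is not part of Sample08/Sample09; the consumer and the sender run in a single JVM here only for brevity, though the same pattern works across cluster members):

RequestReplySketch.java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class RequestReplySketch {
    public static void main(String[] args) {
        VertxOptions options = new VertxOptions().setClusterManager(new HazelcastClusterManager());

        Vertx.clusteredVertx(options, cluster -> {
            if (cluster.succeeded()) {
                Vertx vertx = cluster.result();

                // Consumer side: reply to every point-to-point message received
                vertx.eventBus().consumer("msg.address", message -> {
                    System.out.println("Consumer received: " + message.body());
                    message.reply("ack");
                });

                // Sender side: send() targets exactly one consumer; the reply handler
                // is invoked with that consumer's response
                vertx.eventBus().send("msg.address", "ping", reply -> {
                    if (reply.succeeded()) {
                        System.out.println("Reply received: " + reply.result().body());
                    } else {
                        reply.cause().printStackTrace();
                    }
                });
            } else {
                cluster.cause().printStackTrace();
            }
        });
    }
}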

More to be covered in the next part of this series ... 😎

References

[1] Introduction to Vert.x - Part-1

[2] Introduction to Vert.x - Part-2

[3] Introduction to Vert.x - Part-3

[4] Vert.x Core Manual (Java)

[5] Vert.x Hazelcast Cluster Manager Manual (Java)



© PolarSPARC