Clustering

What does Nexaweb Clustering do?

You can cluster Nexaweb applications to run on multiple JVM instances and multiple machines to meet the load balancing and failover requirements of enterprise applications. The Nexaweb Server clustering support augments an application server's native clustering support and is not meant to replace it. The main features of Nexaweb clustering include:

  1. Efficient in-memory replication of the ServerSession objects, which may contain attributes, application specific XML documents registered with the session, and other Nexaweb specific documents such as the UI Document Object Model (DOM).
  2. Ability to fail over a ServerSession to a new server node, transparent to the Nexaweb application users, in the event of a server failure.
  3. Replicated Hashtables (SharedStore) - allows creation of named SharedStore instances whose contents are replicated across all cluster nodes.
  4. Messaging between clients connected to different servers in the cluster.


Session Replication

Session replication in the context of the Nexaweb server refers to in-memory replication of the ServerSession data across clustered servers. When a new ServerSession is created, one server in the cluster is assigned to serve as that session's backup. Any changes to that session are then replicated to the backup server. If the primary server goes down (for example, due to a hardware failure, network failure, and so forth) then the session's data will be preserved on the backup server. If the backup server fails, a new backup server is assigned to serve as the session's backup.

In a clustered environment, ServerSession attributes function similarly to HttpSession attributes. For a ServerSession attribute to be backed up, the attribute value must implement java.io.Serializable. In addition, every time you change the attribute value, you must call the setAttribute method again for the change to propagate to the backup node.
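
For instance, here is a minimal sketch of that update pattern. VisitCounter is a hypothetical serializable class introduced only for illustration, and getAttribute is assumed to exist on ServerSession by analogy with HttpSession (the document above only confirms setAttribute):

// VisitCounter is a hypothetical attribute value; it must implement Serializable to be backed up.
public class VisitCounter implements java.io.Serializable {
    private int visits;
    public void increment() { visits++; }
    public int getVisits() { return visits; }
}

// With a ServerSession reference in hand (variable name serverSession is illustrative):
VisitCounter counter = (VisitCounter) serverSession.getAttribute("visits");   // getAttribute assumed by analogy with HttpSession
if (counter == null) {
    counter = new VisitCounter();
}
counter.increment();
// Re-set the attribute so the change propagates to the backup node.
serverSession.setAttribute("visits", counter);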

If clustering and failover are enabled, all changes to the XML documents registered for the server side only or both client and server will be backed up. Any changes to these documents (that is, attribute set/remove, element append/removal) will automatically propagate to the backup node and will be applied on the backup session.

Failover

Nexaweb's clustering failover capability builds on session replication and refers to the ability to resume an application after a server failure in a way that is transparent to the application's user. During a failover, the contents of the ServerSession that the client is connected to are retrieved from the backup server and made local on the new server. The new server can be any live server in the cluster. The backup server for the session is located automatically and the contents of the session are transferred to the new server. The Nexaweb server retrieves only the ServerSession object from the backup server. The application server's clustering (if enabled) must retrieve the contents of the corresponding HttpSession from a backup source (e.g., a database or other servers) and make it local on the new server. If HttpSession replication is not enabled on the application server, a new HttpSession will be created for the failed-over ServerSession.


Example 1: 

Consider a simple failover scenario with three clustered servers: A, B, and C, with a load balancer in front of them. A user creates a session on server A, and server B is assigned to back up the session. After some use, if server A goes down, the user's next request will be routed by the load balancer to either server B or server C. If server C receives the request, it will retrieve the session from server B and process the request normally. Since server B has a complete backup, this transition is transparent to the application user. From this point on, C will be the new 'owner' of the session. Server B will remain the backup server.

If server B received the first request for the session after server A went down, it would become the session owner, and assign server C to back up the session. Again, this transition is transparent to the user.

Example 2:

Consider a two-server cluster with servers A and B. Both are configured as backups and are serving requests. If server A goes down, all session data from A will be retained on server B. B can now be used to serve any session in the cluster, however there is now no backup for any session until a new server joins the cluster. 

In this scenario, server B does not automatically assume that it owns all of server A's sessions. The ownership change is on demand, that is, if server B receives a request for a session that it doesn't own, it will attempt to retrieve it from a backup server. In this example server B is the only backup server so it will retrieve the session from its own backup store and take ownership of it. If a healthy server A is brought back into the cluster and receives a request for a session it used to own, it will retrieve the session from server B and assume ownership. 

Note: At no time can two servers have ownership of the same session. Ownership changes are expensive; therefore, load balancers should be configured with sticky sessions enabled.

SharedStore

Nexaweb clustering provides functionality to create named replicated Hashtables (SharedStore instances). All SharedStore instances that exist within the same application cluster and have the same name are connected; that is, they represent the same virtual SharedStore. So modifying a named SharedStore in one application instance causes the change to propagate to the rest of the clustered application instances automatically and be available there as well.

If you are using your own classes for the keys and values, make sure they are serializable. Whenever put (key, value) gets called on the SharedStore instance, only that key/value pair is serialized and sent to the cluster members in real time. If you modify the value object, you will need to call the put method again if you want the changes to propagate to the other clustered application instances.

If multiple value objects have a reference to the same object, when the values are deserialized on the remote side they will have their own copy of the shared object since cross-referencing is not detected. 

The performance of the shared store depends on the number of messages that Nexaweb sends, the size of individual messages, and the network bandwidth. In addition to the messages that Nexaweb sends to maintain the state of the SharedStore objects, Nexaweb also uses messaging to back up each session's UI DOM, attributes, and registered XML documents and to keep them in sync when changes occur.


Here is an example of creating and using a SharedStore object:

SharedStore s = ServiceManager.getSharedStoreManager().getSharedStore( "MyStore" ); 
s.put( "key", "value" );

If a local SharedStore instance named "MyStore" does not exist, one will be created automatically. When events start propagating to the remote servers as a result of modifying this local instance, if a SharedStore named "MyStore" does not exist on a remote server, it will be created there the moment the first event reaches that server.
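
Expanding on the rule that modified values must be re-put, here is a hedged sketch. UserSettings is a hypothetical serializable class introduced only for illustration, and a Hashtable-like get method is assumed based on the description of SharedStore as a replicated Hashtable:

// UserSettings is a hypothetical value class; keys and values you store must be serializable.
public class UserSettings implements java.io.Serializable {
    private String theme = "default";
    public void setTheme(String theme) { this.theme = theme; }
    public String getTheme() { return theme; }
}

SharedStore store = ServiceManager.getSharedStoreManager().getSharedStore( "MyStore" );
store.put( "user42", new UserSettings() );            // the key/value pair is serialized and replicated

UserSettings settings = (UserSettings) store.get( "user42" );   // get assumed from the Hashtable-like API
settings.setTheme( "dark" );                          // local change only; other nodes do not see it yet
store.put( "user42", settings );                      // call put again to propagate the change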

Messaging

All of the benefits of the MessagingService framework transparently extend to the clustered environment. This means that if client1 and client2 subscribe to a topic 'MyTopic', and client1 happens to be connected to server1 while client2 is connected to server2, where both servers belong to the same cluster, both clients will receive messages published on 'MyTopic', as long as the message is published by one of the servers in the cluster or by any client connected to a server in the cluster.
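
As a rough illustration only: the accessor and method names below (getMessagingService, addMessageListener, publish, and the MessageListener callback signature) are assumptions made for this sketch and may not match the actual MessagingService API:

// Hypothetical sketch; method names are illustrative, not the confirmed API.
MessagingService messaging = ServiceManager.getMessagingService();   // assumed accessor

// A listener registered on server1 (or on a client connected to it) receives
// messages published on "MyTopic" anywhere in the cluster.
messaging.addMessageListener("MyTopic", new MessageListener() {      // assumed method and callback
    public void onMessage(Object message) {
        System.out.println("Received on MyTopic: " + message);
    }
});

// Publishing from server2, or from a client connected to server2, still
// reaches the listener above because both servers belong to the same cluster.
messaging.publish("MyTopic", "Hello, cluster!");                     // assumed method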


How to configure Nexaweb Clustering?

General Requirements

Before you set up a cluster of Nexaweb Servers, follow the processes and steps required to set up clustering for your specific application server. For example, if you want to cluster a Nexaweb application running in an IBM WebSphere environment, configure your WebSphere environment for normal WebSphere clustering.

In addition, you must:

  1. Set up a load balancer using either a dedicated load balancer or the HTTP server.
  2. Configure the replication of HttpSession objects.

Refer to your application server's documentation for more information on configuring a cluster.

Nexaweb Configuration

To configure the Nexaweb Server to support clustering, you must:

  • Specify clustering-related settings in the <clustering/> element of the nexaweb-server.xml file.
  • Configure the messaging factory you would like to use for the inter-server communication.

Changes to the nexaweb-server.xml file must be propagated to all nodes in the cluster. You can propagate these changes using the following methods:

  • If your application server supports a central deployment mechanism for all cluster nodes, this file will get propagated automatically.
  • You can build an EAR (Enterprise Archive) or a WAR (Web Application Archive) file that already contains the necessary modifications to nexaweb-server.xml, so you do not have to modify it after deployment.

The following sections describe the various configuration settings in detail.

Enabling Clustering and Failover

The <clustering/> element in the nexaweb-server.xml file contains all of the clustering and failover related settings. The <clustering/> element is a child of the root <server> element. The <clustering/> element contains two attributes:

Attributes        Settings

enabled           true | false
failover          true | false

To enable clustering for your application instance, set enabled="true". Nexaweb then considers this instance part of the cluster. This instance appears in the list of cluster members (which you can verify by looking at the AppNodeManager service status details available on the /yourApp/Nexaweb/Services/index.jsp page). You can use SharedStore objects and register MessageListener objects that can receive messages published on other clustered servers, as well as by clients connected to those servers.

To enable failover for your application instance, set failover="true". This enables backup assignment for the local ServerSession objects created on your application instance and allows this instance to serve as a backup node for the other nodes in the cluster.

You cannot enable failover without also enabling clustering. However, you can enable clustering without enabling failover.

Configuring Application Address

When running in a clustered environment, each application instance has an address that uniquely identifies it among other clustered application instances. Nexaweb uses the address as a messaging destination for inter-server communication. The address has the following format: 

<domain>.<cluster>.<server>.<application> 
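
For example, an instance configured with the element values used later in this section would have the address MyDomain.MyCluster.MyServer1.MyApp.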

Consider the following guidelines for configuring application addresses:

  • All application instances must have the same domain and cluster names to be considered part of the same cluster. 
  • The server name has to be unique across physical machines and across JVM instances running on the same machine. 
    Note: Nexaweb recommends that you either remove or comment out the server element (do not leave an empty element), thus letting Nexaweb generate a unique identifier for each server instance. If you specify the server name in the configuration, every server instance will have the same value for the server element when you deploy your application to a cluster, which will lead to incorrect behavior.
  • The application part of the address has to be the same for all instances of the same application. If you have multiple web applications deployed, you must make sure that each has a unique application name. Failure to do so will lead to incorrect behavior.

The following elements, children of the <clustering> element, make up the destination address of an application instance:

  • Values for these elements cannot contain spaces or dots.
  • Nexaweb recommends that you make these values as short as possible, since the address is part of every message sent in the cluster. 

<domain>MyDomain</domain> 

The domain of your server's cluster.

In a development environment, Nexaweb recommends that you do not specify a value for this element. Then, the Nexaweb application uses the machine's localhost address. This avoids collisions and versioning problems in a development environment while allowing developers to run a cluster on their own machine.

In a production environment, set this element to some specific value.

<cluster>MyCluster</cluster> 

The name of the cluster to which this server belongs.

All application instances that you want to cluster together have to have the same cluster name.

Set this element to some specific value.

<server>MyServer1</server> 

A unique identifier for this server + JVM instance.

Nexaweb recommends that you comment out the server element or completely remove it from your application's nexaweb-server.xml configuration file. This allows Nexaweb to generate a unique server name for every server instance in your cluster.

<application>MyApp</application> 

Unique id for your application.

The value of this element must be unique for each application deployed on your application server. It is important for all instances of the same application to have the same application name in order to be considered part of the same cluster.

Using Application Address to Partition a Cluster

You can partition your cluster into multiple disjoint subclusters in a number of ways:

  • Specify a different domain name 
    Since all application instances must have the same domain name to be considered part of the same cluster,  you can change the domain name to partition your cluster.
     
  • Specify a different cluster name 
    Since all application instances must have the same cluster name to be considered part of the same cluster,  you can change the cluster name to partition your cluster.
    Note: You can have multiple subclusters within the same domain. 
  • Use Multiple Messaging Factory Instances
    Nexaweb will filter out messages that do not belong to a specific subcluster instance. If necessary, you can further separate the messaging traffic using multiple messaging factory instances. For more information, see the Configuring Inter-Server Messaging section below.

Configuring AppNodeManager

The AppNodeManager service manages application instance nodes that belong to the same application cluster. When part of a cluster, every application instance has an AppNode object that uniquely identifies the application instance. 

AppNodeManager keeps track of the local AppNode object and all remote AppNodes that have the same domain, cluster and application name as the local AppNode. The collection of the AppNodes that the AppNodeManager keeps track of constitutes the current application cluster view. 

The AppNodeManager detects cluster membership changes and notifies AppNodeListeners. When the local AppNode starts up, the AppNodeManager schedules a heartbeat task that periodically broadcasts a heartbeat message that serves as an indication to all remote nodes that this node is up and running. If clustering is disabled on this node, this manager is inactive. 

The <app-node-manager/> element, a child of the <clustering/> element, contains the following settings:

<heartbeat-period>5 sec</heartbeat-period>

Specifies how often the heartbeat message is broadcast to the other nodes in the cluster. The AppNodeManager uses heartbeats to determine when nodes join or leave the cluster. The messages also contain metadata describing the server's current state. That information helps to balance available resources more evenly. The valid value is any time interval as described in nexaweb.xml.

<max-missed-heartbeats>10</max-missed-heartbeats>

The maximum number of heartbeats any node is allowed to miss before it is considered gone.
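
For example, with the values shown here (a heartbeat every 5 seconds and a maximum of 10 missed heartbeats), a node is considered gone roughly 50 seconds after its last heartbeat was received.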

<cluster-synchronization-delay>10 sec</cluster-synchronization-delay>

The time used to sync up with the rest of the cluster nodes at startup.

<machine-id>Machine1</machine-id>

When multiple server instances are running on the same machine, it is important to guarantee that the backup copies originating on all of these instances are stored on server instances running on some other physical machine. You can accomplish this by setting the same machine ID for all instances running on the same machine. When selecting a backup node for local sessions, the backup selection algorithm will make sure that the backup node has a different machine ID from the one used by the local node. This helps to prevent the local session and its backup from residing on the same physical machine. If the machine ID is not specified, a globally unique ID is generated.

Configuring BackupAssignmentManager

The BackupAssignmentManager service assigns backup nodes for:

  • The ServerSession objects created on the local node. 
  • The ServerSession objects that end up on this node as a result of a failover.

If failover is not enabled on this node, the manager is not active.

The <backup-assignment-manager/> element, a child of the <clustering/> element,  contains the following settings:

<backup-selector class="some.package.SomeClass" /> 

Specifies the fully qualified name of a class that implements a specific algorithm for selecting the 'best' backup node out of the list of available backup nodes, based on some particular criteria. For example, an algorithm might define the 'best' node as the node that has the least number of sessions (local + backups).
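
A hedged sketch of that "fewest sessions" idea follows. The class shape, method name, and AppNode accessors below are hypothetical and only illustrate the criterion; the actual backup-selector contract is defined by the Nexaweb Server API:

// Hypothetical illustration only; not the real backup-selector interface.
public class FewestSessionsBackupSelector {

    /** Pick the candidate node reporting the lowest combined session count. */
    public AppNode selectBackupNode(java.util.List<AppNode> candidates) {
        AppNode best = null;
        int bestCount = Integer.MAX_VALUE;
        for (AppNode node : candidates) {
            // getLocalSessionCount()/getBackupSessionCount() are assumed accessors.
            int count = node.getLocalSessionCount() + node.getBackupSessionCount();
            if (count < bestCount) {
                bestCount = count;
                best = node;
            }
        }
        return best;
    }
}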

<backup-session-task-period>1 min</backup-session-task-period> 

Nexaweb assigns a backup node to a new session immediately upon creation of that session. However, occasionally, there are no available backup nodes at the time of a session's creation. Therefore, Nexaweb periodically runs a task to attempt to back up the local ServerSessions that haven't been assigned a backup. This setting specifies the task's period. The valid values for this setting include any time interval as described in nexaweb.xml.

<backup-verification-task-period>5 min</backup-verification-task-period> 

Specifies the period for the task that periodically compares the local session with its backup copy to make sure the two are in sync. The session components that are out of sync (e.g., datasets, session attributes) are resent to the backup node at that time. The task's goal is to recover from failures that never got detected (e.g., a message was sent out, never made it to the destination, neither the sending nor the receiving side detected the failure, and no new messages were ever sent). Note that this type of failure is not very likely, and comparing the backup with the local session is rather expensive, so the period should be reasonably large. The valid values for this setting include any time interval as described in nexaweb.xml.

Configuring BackupRepositoryManager

The BackupRepositoryManager service manages the backup repository. The backup repository contains the backup sessions stored on this node. If failover is disabled on this node, this manager is inactive. The <backup-repository-manager/> element, a child of the <clustering/> element, contains the following settings:

<backup-store class="some.package.SomeClass"/> 

Specifies the backup repository implementation used to store backup sessions. Currently, Nexaweb supports only an in-memory backup store implementation.

<backup-cleanup-period>5 min</backup-cleanup-period> 

Specifies the period for the task that examines the backup repository and attempts to clean up backups that are no longer needed. The cleanup task uses the following criteria for determining what to clean up: the backup copy has not been modified for longer than the original maximum inactive interval of the ServerSession's HttpSession, and a ServerSession with the same ID as the backup no longer exists anywhere in the cluster.

Configuring BackupSynchronizationManager

The BackupSynchronizationManager service makes sure that the local sessions stay in sync with their backup copies. The manager schedules two tasks:

  • A task that resends session components that got out of sync to the backup nodes.
  • A task that requests component resends from the nodes hosting the local sessions.

Here is how the "out-of-sync" state is detected: all messages containing DOM changes for a specific DOM are stamped with monotonically increasing IDs. When the backup node detects a lapse in the message ID sequence for a particular DOM (each session attribute has its own sequence as well), the dataset name is added to the 'request' list of the BackupSynchronizationManager. When the request task (scheduled by the manager) runs, all of these dataset names and session attributes are 'requested' from the session's origin node. 
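
As a rough illustration of this gap detection, here is a generic sketch (the class and method names are made up for this example and are not the actual Nexaweb implementation):

// Generic sketch of sequence-gap detection; not the actual Nexaweb classes.
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class BackupSequenceTracker {
    private final Map<String, Long> lastSeenId = new HashMap<String, Long>();
    private final Set<String> resendRequests = new HashSet<String>();

    // Called for every backup message carrying changes for a named dataset/attribute.
    void onBackupMessage(String datasetName, long messageId) {
        Long last = lastSeenId.get(datasetName);
        if (last != null && messageId != last + 1) {
            // A change message was lost; the request task will later ask the
            // session's origin node to resend this dataset in full.
            resendRequests.add(datasetName);
        }
        lastSeenId.put(datasetName, messageId);
    }

    // Names handed to the periodic request task.
    Set<String> pendingRequests() {
        return Collections.unmodifiableSet(resendRequests);
    }
}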

On the origin node, the 'resend' task (scheduled by the BackupSynchronizationManager as well) resends all of the requested datasets and session attributes to their respective backup nodes. 
If failover is disabled on this node, this manager is inactive. 

The <backup-synchronization-manager/> element, a child of the <clustering/> element, contains the following settings:

<resend-task-period>30 sec</resend-task-period> 

The period for the task that resends session components (datasets, ServerSession attributes, and so forth) to the respective backup nodes. The components resent are the ones that got out of sync. The valid value is any time interval as described in nexaweb.xml.

<request-task-period>30 sec</request-task-period> 

The period for the task that requests session component resends from the nodes hosting the local sessions. The valid value is any time interval as described in nexaweb.xml.

Configuring Inter-Server Messaging

Nexaweb's inter-server messaging framework is independent of any specific messaging implementation; however, it requires an underlying messaging implementation that is able to send messages asynchronously to every member of the group. Nexaweb 4.1 provides three messaging factory implementations:

  • In-memory
  • JMS
  • Continuum 

Specifying a Message Factory Implementation

Specify all messaging factory settings for inter-server communications under the <messaging/> element, child of the root <server> element, inside nexaweb-server.xml.

Under the <messaging> element you can specify multiple <factory-implementation/> elements, one element per messaging factory implementation (Continuum, JMS and so on). 

Under each <factory-implementation/> element you can specify multiple <factory-instance/> elements. Multiple instances provide additional flexibility when trying to fine-tune your application's performance. You can compare them to having multiple communication pipes. When you configure multiple factory instances, you need to further specify the type of traffic with which to use each instance. You do this in the Nexaweb server using keys. You can associate a single instance with any number of keys; however, you can associate only one messaging factory instance with a specific key. When a Nexaweb Server component needs to use a messaging factory, it provides a key. If there are no factory instances associated with that key, the application creates and uses an instance of the default messaging factory. You can use the following element, a child of the <messaging/> element, to specify the default messaging factory: 

<default-messaging-factory class="MessagingFactoryClassName" /> 

 Nexaweb Server components use the following keys: 

AppNode 
DocumentEvents 
SessionEvents
SessionAttributeEvents

BackupSyncMessages 
RpcExecutor 
MessagingService 
SharedStore 


When you configure your application with clustering and default messaging, the application uses Continuum, Nexaweb's own UDP-based messaging implementation that can be configured to use either broadcast or multicast. Multicast is used by default. The application creates three instances of that factory, each using a separate multicast port. The keys above are spread over the three instances in the following way: 

Instance 1 keys (heartbeat messages): AppNode 

Instance 2 keys (backup events): DocumentEvents, SessionEvents, SessionAttributeEvents, BackupSyncMessages, RpcExecutor

Instance 3 keys: MessagingService, SharedStore 

You can use a single Continuum factory instance in your application by including the following in your nexaweb-server.xml: 

<messaging>
    <factory-instance>
        <keys>
            <key>DocumentEvents</key>
            <key>SessionAttributeEvents</key>
            <key>SessionEvents</key>
            <key>BackupSyncMessages</key>
            <key>RpcExecutor</key>
            <key>AppNode</key>
            <key>MessagingService</key>
            <key>SharedStore</key>
        </keys>
        <config>
            <!-- If set to true, old IO will be used in JVM 1.4+ environments; -->
            <!-- otherwise new IO (nio) will be used.                          -->
            <use-old-io>false</use-old-io>
            <!-- The priority for the thread that takes incoming messages off  -->
            <!-- the network and places them into subscriber queues.           -->
            <receive-thread-priority>6</receive-thread-priority>
            <!-- Multicast or broadcast port used to send and receive messages -->
            <!-- delivered through this factory instance. This setting is      -->
            <!-- required and has to be unique across factory instances.       -->
            <port>45001</port>
            <!-- Multicast address (group) to use when using multicast.        -->
            <!-- Important: if you specify a multicast address, multicast      -->
            <!-- will be used as the method for sending and receiving messages.-->
            <!-- This is the default configuration. If you want to use         -->
            <!-- broadcast instead, you have to comment out the                -->
            <!-- multicast-address setting completely and specify the bind     -->
            <!-- address and the subnet-mask.                                  -->
            <multicast-address>228.1.2.3</multicast-address>
            <!-- The buffer size for the DatagramSocket used to receive messages -->
            <receive-buffer-size>100 KB</receive-buffer-size>
            <!-- The buffer size for the DatagramSocket used to send messages    -->
            <send-buffer-size>100 KB</send-buffer-size>
        </config>
    </factory-instance>
</messaging>


We are now going to take a look at the three concrete implementations provided with the Nexaweb Server.

In-Memory Messaging Factory

When the Nexaweb Server is running in standalone mode (no clustering), the In-Memory messaging factory is in use. This factory cannot be used in a clustered environment since all messages are sent and received within the same JVM instance. The In-Memory messaging factory is also used for debugging. The fully qualified name for the In-Memory messaging factory is: com.nexaweb.server.messaging.inmemory.InMemoryMessagingFactory

Continuum Messaging Factory

Continuum is a UDP multicast-based messaging implementation that can be used for inter-server communication. All servers participating in the same cluster must reside on the same subnet. When a new Nexaweb Server node starts, it automatically discovers the other nodes in the same cluster and joins the cluster. If you are using Continuum, the network adapter that the application server is bound to must support IP multicast addressing. The fully qualified class name for the factory is: com.nexaweb.server.messaging.continuum.ContinuumMessagingFactory

JMS Messaging Factory

If you would like to use your application server's Java Message Service (JMS) for inter-server communication, configure the JMS messaging factory as the default factory. The fully qualified class name for the factory is: com.nexaweb.server.messaging.jms.JMSMessagingFactory

Configuring JMS Messaging Factory Instances

As mentioned earlier, each messaging factory implementation can have multiple factory instances, each with its own configuration settings. You have to specify at least one instance of the factory. In this section we describe how to configure JMS messaging factory instances. 

You specify a single JMS factory instance as follows: 

<messaging> 
    <factory-implementation class="com.nexaweb.server.messaging.jms.JMSMessagingFactory">  
         <factory-instance jmsConfigName="jms-config1" /> 
     </factory-implementation> 
</messaging> 



The 'jmsConfigName' attribute specifies the name of a JMS configuration block used to configure the particular instance of the JMS messaging factory. Since the JMS configuration is quite large, nexaweb-server.xml has a separate section for it, which allows multiple factory instances to reuse the same configuration if desired. You can configure multiple instances of the JMS messaging factory using the same JMS configuration block, or different ones. 

Each JMS configuration block is specified inside nexaweb-server.xml with a single <jms> element under the <server> element and is used to configure one or more instances of the JMS messaging factory. Every <jms> element has a 'name' attribute, which can be any string that uniquely identifies this JMS configuration section. For example, the following block would be used by the factory instance defined above: 
<jms name="jms-config1"/>

You can have multiple <jms> blocks in your nexaweb-server.xml (make sure to specify a unique name attribute for each of them). 

Following is the description of the children of a JMS configuration element.

<jndi> 

Nexaweb uses the Java Naming and Directory Interface (JNDI) to look up JMS specific objects; therefore, confirm that your application server supports JNDI. Following are the children of the <jndi> element. 

<connection-factory-jndi-name> 

The JNDI name for the JMS connection factory. This is the full name used to look up the connection factory object in the JNDI environment, for example, jms/NexawebConnectionFactory. Connection factories are usually configured through the administration console of your application server.

<destination-jndi-name> 

The JNDI name for the destination used by the message producer and consumer for sending and receiving messages. 

<initial-context-environment> 
<param name="paramName" value="paramValue" /> 
</initial-context-environment>
 

Initial context JNDI environment. The parameters specified in this block will be passed in when creating the initial JNDI context.

<connection>

The following are the JMS Connection specific settings. 

<client-id>SomeId</client-id> 

If you want to use a durable subscription, each JMS client has to be associated with a unique client ID. Usually this ID is configured through the administration console. If your JMS provider does not have a way to set it administratively, the JMS API provides a method to do that. You should be aware however that the J2EE specification prohibits using this method and any J2EE compliant container will throw an IllegalStateException. Nexaweb will attempt to set the provided value as the client ID. If the value is not specified and the ID is not set, Nexaweb will generate a unique ID and attempt to set it as the client ID.

<session> 

The following are the JMS Session specific settings.

<transacted>false</transacted> 

Transacted session mode. The valid values are true | false. For more information see the JMS specification.

<acknowledge-mode>javax.jms.Session.AUTO_ACKNOWLEDGE </acknowledge-mode> 

Session acknowledge mode. You can use the fully qualified way shown above or specify an integer value in the range allowed by the JMS API. For more information see the JMS specification.

<message-producer> 

The following are the settings for the message producer. 

<delivery-mode>javax.jms.DeliveryMode.NON_PERSISTENT</delivery-mode> 

Delivery mode for the message producer. You can use the fully qualified way shown above or specify an integer value in the range allowed by the JMS API. 

<priority>javax.jms.Message.DEFAULT_PRIORITY</priority> 

Priority for the message delivery. You can use the fully qualified way shown above or specify an integer value in the range allowed by the JMS API. 

<time-to-live>javax.jms.Message.DEFAULT_TIME_TO_LIVE</time-to-live>

Message time-to-live. You can use the fully qualified way shown above or specify a long value (number of milliseconds).

<disable-message-id>false</disable-message-id> 

Whether message IDs are disabled for the JMS messages. Allowed values are: true | false

<disable-message-timestamp>false</disable-message-timestamp> 

Whether message timestamps are disabled for the JMS messages. Allowed values are: true | false.

<subscriber> 

The following are the settings for the message subscriber.

<durable>false</durable> 

Whether this subscriber is durable or not. The allowed values are: true | false. The default value is false.

<subscription-name>MySubscription</subscription-name> 

The name for the durable subscription. This setting is only required if the subscriber is durable. Durable subscribers normally have to be configured through the JMS administration console. By default a non-durable subscriber is created.

Miscellaneous Settings

The request timeout is used by all inter-server synchronous message exchanges. The element below is a child of the <messaging> element.

<request-timeout>15 sec</request-timeout> 

The value for this parameter is any valid time interval as described in nexaweb.xml.

Sample nexaweb-server.xml with Clustering and Failover Enabled

The following nexaweb-server.xml configuration:

  • Enables clustering and failover
  • Uses JMS messaging factory for inter-server communication
  • Specifies only one instance of the factory
  • Specifies the JMS configuration of the created factory as a named <jms> block


<server>
    <clustering enabled="true" failover="true">
        <domain>MyDomain</domain>
        <cluster>MyCluster</cluster>
        <application>MyApp</application>
    </clustering>
    <messaging>
        <factory-implementation
            class="com.nexaweb.server.messaging.jms.JMSMessagingFactory">
            <factory-instance jmsConfigName="config1">
                <keys>
                    <key>DocumentEvents</key>
                    <key>SessionAttributeEvents</key>
                    <key>SessionEvents</key>
                    <key>BackupSyncMessages</key>
                    <key>RpcExecutor</key>
                    <key>AppNode</key>
                    <key>MessagingService</key>
                    <key>SharedStore</key>
                </keys>
            </factory-instance>
        </factory-implementation>
    </messaging>
    <jms name="config1">
        <jndi>
            <connection-factory-jndi-name>jms/NexawebConnectionFactory</connection-factory-jndi-name>
            <destination-jndi-name>jms/NexawebTopic</destination-jndi-name>
        </jndi>
        <session>
            <transacted>false</transacted>
            <acknowledge-mode>javax.jms.Session.AUTO_ACKNOWLEDGE</acknowledge-mode>
        </session>
        <message-producer>
            <delivery-mode>javax.jms.DeliveryMode.NON_PERSISTENT</delivery-mode>
            <priority>javax.jms.Message.DEFAULT_PRIORITY</priority>
            <time-to-live>javax.jms.Message.DEFAULT_TIME_TO_LIVE</time-to-live>
            <disable-message-id>false</disable-message-id>
            <disable-message-timestamp>false</disable-message-timestamp>
        </message-producer>
    </jms>
</server>


How to Verify and Test Nexaweb Clustering?

Nexaweb Services JSP page

Nexaweb provides a set of JSP pages that help monitor the health of your Nexaweb application by providing details about each service running within your application, the state of your local sessions, performance meter statistics and so on. The page that displays all Nexaweb Services details in use by your application is located under: 

<app-context>/Nexaweb/Services/index.jsp 

If clustering is enabled you should see AppNodeManager in the list of services. In addition, if failover is enabled you should also see the three backup/failover-related services: 

BackupAssignmentManager 
BackupRepositoryManager 
BackupSynchronizationManager 

If you do not see the above services, Nexaweb clustering and/or failover are not enabled in your application. Check the nexaweb-server.xml file located under the WEB-INF/ directory within your application to make sure clustering and failover are enabled as described above. If you do not have that file, you will need to create one and add the necessary settings. 

You should also check the LicensingService details (available in the same JSP page) to make sure the license in use allows clustering. 

Finally, if the license looks correct and you have nexaweb-server.xml with the right settings in the right place, make sure that the directory where you modified or added nexaweb-server.xml is in fact the one being used by your application server (for example, if you have multiple environments set up, it is easy to keep modifying files in one place while running a server that uses a different location).

AppNodeManager details will display the application address of the local node as well as all remote nodes that belong to the same application cluster.

BackupAssignmentManager details will show each local session and the application address of the backup node (the application node that is holding the backup copy of the local session) if one has been assigned. All sessions that have not been assigned a backup will appear in the "Unassigned" list.

BackupRepositoryManager details will show all backup sessions that this application node is storing. These are the backups of primary sessions that reside on other nodes in the same application cluster.

BackupSynchronizationManager details will show any work that needs to be done to synchronize a local session with its backup copy. The work can include ServerSession attributes and documents that need to be either resent to the backup or requested from the primary servers. Under normal conditions this service should not have much to do, since all of the primary session state is kept in sync in real time. However, if for some reason messages between the primary and backup servers get lost, this service is responsible for correcting the errors by resynchronizing the primary session with its backup.

Using Two Tomcat Servers to Simulate Failover

You can simulate a ServerSession failover with two Tomcat server instances. You do not need to set up Tomcat clustering if you do not care about HttpSession replication. 

  • Start two Tomcat servers (call them A and B).
  • Deploy your Nexaweb application on both of them, ensuring that clustering and failover are enabled. You can check the services JSP page on both of the servers. Look at the AppNodeManager details; it should display the address of its local server as well as one remote server.
  • Launch an application session on server A.
  • Check the BackupAssignmentManager details on A to make sure your session got backed up on server B.
  • When you see that the session is backed up, shut down A but keep your browser window open.
  • Restart server A.
  • Send a request to the server (if your application was configured to use a Push Connection or polling, the Nexaweb Client will automatically poll the server periodically trying to reestablish the connection which will cause a session failover. However, if you have not configured either one, do something with your application that will cause the Nexaweb Client to send a request to the server).

Your application should continue to operate normally. You can add server side code to check the contents of your ServerSession after the failover, making sure they are the same as before server A was brought down.
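
For example, here is a hedged server-side sketch of such a check (getAttribute/setAttribute on ServerSession are assumed by analogy with HttpSession; the attribute name and variable names are illustrative):

// Before taking server A down, stamp the session:
serverSession.setAttribute("failoverMarker", "set-before-failover");

// In a handler invoked after the failover, confirm the value survived the move to the new server:
Object marker = serverSession.getAttribute("failoverMarker");   // getAttribute assumed by analogy with HttpSession
System.out.println("After failover, failoverMarker = " + marker);   // expect "set-before-failover"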

Using a TCP Tunnel to Simulate Failover

You can simulate a ServerSession failover with any number of servers and a TCP tunnel.

  • Launch your servers with your Nexaweb application.
  • Launch a TCP tunnel pointing to one of the servers.
  • Start your application using the TCP tunnel's inbound port inside your browser.
  • Restart TCP tunnel pointing it to a different server.
  • Make your application send a request to the server.

The new server should retrieve your session from the backup and make it local. Your application should continue to operate normally. Note that if your application sends any messages during a failover (for example, as you are taking the primary server down or restarting the TCP tunnel), some messages will get lost, which is to be expected.

Troubleshooting

My servers do not see each other.

Make sure that the domain, cluster, and application elements have the same value on all instances of your application. Make sure you do not specify the server element (remove it; do not leave it empty). If you are using Continuum, make sure that all servers are on the same subnet.

My sessions do not get backed up

Go through the steps specified above in the "My servers do not see each other" problem.
Make sure you did not specify the same machine-id on all instances of your application.

My backup server went down and came back up but other servers in the cluster that had backups on the restarted server are not reassigning the backups for their sessions.

If a server goes down and comes back up in less time than it takes to miss max-missed-heartbeats heartbeats, the other servers will not detect the server's disappearance until the backup verification task runs on the respective servers. When that task runs, backups for the affected sessions will be reassigned.