爪哇咖啡屋 : [Repost] Setting Up a Tomcat 8 Clustering/Session Replication Environment

冷日 (冷日)
Webmaster
  • Registered: 2008/2/19
  • Posts: 15771
[Repost] Setting Up a Tomcat 8 Clustering/Session Replication Environment
Setting up a Tomcat 8 Clustering/Session Replication environment
2015-05-12 17:53:39

Contents

1. Environment overview

2. Time synchronization

3. Implementation

4. Testing

5. Discussion and summary

1. Environment overview

The test environment topology is as follows:

                       clients

                           |

                        httpd

                       /       \

                  tomcat1     tomcat2

Installing Tomcat and httpd is not covered again here, but the versions are noted below.

The httpd server (working as a load balancer):


[root@lb conf.d]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m
[root@lb conf.d]# uname -r
2.6.32-358.el6.x86_64
[root@lb conf.d]# ifconfig | grep Bcast:
          inet addr:192.168.0.200  Bcast:192.168.0.255  Mask:255.255.255.0
[root@lb conf.d]# rpm -q httpd
httpd-2.2.15-39.el6.centos.x86_64

The back-end Tomcat server environment; the OS version is the same as on the httpd server:


[root@master ~]# catalina.sh version
Using CATALINA_BASE:   /usr/local/tomcat
Using CATALINA_HOME:   /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME:        /usr/java/latest
Using CLASSPATH:       /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
Server version: Apache Tomcat/8.0.21
Server built:   Mar 23 2015 14:11:21 UTC
Server number:  8.0.21.0
OS Name:        Linux
OS Version:     2.6.32-358.el6.x86_64
Architecture:   amd64
JVM Version:    1.8.0_45-b14
JVM Vendor:     Oracle Corporation

2. Time synchronization

For configuration, see the relevant part of this post: http://zhaochj.blog.51cto.com/368705/1635982

3. Implementation

    Earlier posts showed Apache load-balancing the back-end Tomcats both with mod_jk and with mod_proxy, but both setups share a flaw. Without sticky sessions, a client request that reaches the front-end Apache is dispatched to some back-end Tomcat node, and whenever the serving node changes, the load balancer cannot keep the user's session information intact. Enabling sticky sessions largely avoids that loss, but if the node a user is bound to goes down, the session is still lost. Is there a way around this?

    Of course. Two approaches are common. The first is inter-node session replication: when a user is dispatched to a Tomcat node, the user's session is synchronized to the other nodes, usually over multicast, so every member of the multicast group holds a copy of the session. Whichever Tomcat node crashes, the others still hold a nearly complete copy (if a session changed and the node crashed before it could synchronize, the other copies are incomplete), and the failover is transparent to the user. The second is a dedicated session server, implemented with the MSM (Memcached Session Manager) project; that is out of scope here and will be covered in a later post.
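Either way, verifying the behavior needs a page that actually mutates the session. A minimal sketch of such a JSP, deployed identically on every node (the NODE_NAME value and the counter attribute are illustrative assumptions, not part of the original setup):

<%-- counter.jsp: keep a counter in the session so a node switch is visible --%>
<%@ page contentType="text/html; charset=UTF-8" %>
<%
    final String NODE_NAME = "tomcat1"; // assumption: change to "tomcat2" on the other node
    Integer count = (Integer) session.getAttribute("count");
    count = (count == null) ? 1 : count + 1;
    session.setAttribute("count", count); // this write is what gets replicated
%>
<html><body>
Served by: <%= NODE_NAME %><br/>
Session ID: <%= session.getId() %><br/>
Visit count: <%= count %>
</body></html>

If the counter keeps increasing while the serving node changes, session state is surviving the switch.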


Prerequisite: this environment builds on the previous post, "apache以mod_proxy實現負載均衡集群" (a load-balancing cluster with Apache and mod_proxy), and adds the cluster's session replication mechanism on top of it.

Cluster session replication differs slightly between Tomcat versions, so configure according to the official documentation. My environment runs Tomcat 8.0.21; the docs are here: http://tomcat.apache.org/tomcat-8.0-doc/cluster-howto.html#Bind_session_after_crash_to_failover_node


Configure Tomcat for cluster session replication:


[root@tomcat1 ~]# vim /usr/local/tomcat/conf/server.xml
# Add the following block inside the Engine container
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                 channelSendOptions="8">
                <Manager className="org.apache.catalina.ha.session.DeltaManager"
                        expireSessionsOnShutdown="false"
                        notifyListenersOnReplication="true"/>
                <Channel className="org.apache.catalina.tribes.group.GroupChannel">
                        <Membership className="org.apache.catalina.tribes.membership.McastService"
                                address="228.0.0.4"
                                port="45564"
                                frequency="500"
                                dropTime="3000"/>
                        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                                address="192.168.0.201"
                                port="4000"
                                autoBind="100"
                                selectorTimeout="5000"
                                maxThreads="6"/>
                        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
                                <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
                        </Sender>
                        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
                        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
                </Channel>
                <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                        filter=""/>
                <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
                <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                        tempDir="/tmp/war-temp/"
                        deployDir="/tmp/war-deploy/"
                        watchDir="/tmp/war-listen/"
                        watchEnabled="false"/>
                <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>


Note: defined inside the Engine container, the block above enables clustering for all hosts; defined inside a specific Host container, it enables clustering for that host only. Also note that the Receiver's address="auto" is best changed to the IP address of the network interface the cluster service uses on the current host.

Next, edit the web.xml file under the web application's WEB-INF directory; the <distributable/> element must be added. If the application has no such directory or file, copy the default site's web.xml.


[root@tomcat1 ~]# vim /tomcat/webapps/test/WEB-INF/web.xml
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                      http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
  version="3.1"
  metadata-complete="true">
  <display-name>Welcome to Tomcat</display-name>
  <description>
     Welcome to Tomcat
  </description>
<distributable/>
</web-app>
# Just add the <distributable/> element inside the web-app container.
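A consequence of <distributable/> worth keeping in mind: every object stored in the session must implement java.io.Serializable, because the cluster replicates sessions by serializing their attributes, and with a distributable manager setAttribute rejects non-serializable values with an IllegalArgumentException. A minimal sketch of a replication-safe session attribute (the class is illustrative, not from the original post):

import java.io.Serializable;

// Anything stored in a replicated session must be Serializable,
// including every object reachable from its fields.
public class CartItem implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String sku;
    private final int quantity;

    public CartItem(String sku, int quantity) {
        this.sku = sku;
        this.quantity = quantity;
    }

    public String getSku()      { return sku; }
    public int    getQuantity() { return quantity; }
}

It would be stored as session.setAttribute("item", new CartItem("A-100", 2)); objects holding sockets, JDBC connections, and the like cannot be replicated this way.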

At this point tomcat1 is configured. Make the same changes on tomcat2, then restart the Tomcat service. After the restart, watch the server's listening ports: the ports configured in server.xml will appear, as below:


[root@master ~]# netstat -tnlup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name  
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1100/sshd          
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      1184/master        
tcp        0      0 :::8080                     :::*                        LISTEN      3422/java          
tcp        0      0 :::22                       :::*                        LISTEN      1100/sshd          
tcp        0      0 ::1:25                      :::*                        LISTEN      1184/master        
tcp        0      0 ::ffff:192.168.0.201:4000   :::*                        LISTEN      3422/java          
tcp        0      0 ::ffff:127.0.0.1:8005       :::*                        LISTEN      3422/java          
tcp        0      0 :::8009                     :::*                        LISTEN      3422/java          
udp        0      0 192.168.0.201:123           0.0.0.0:*                               1108/ntpd          
udp        0      0 127.0.0.1:123               0.0.0.0:*                               1108/ntpd          
udp        0      0 0.0.0.0:123                 0.0.0.0:*                               1108/ntpd          
udp        0      0 fe80::20c:29ff:fec6:f77a:123 :::*                                    1108/ntpd          
udp        0      0 ::1:123                     :::*                                    1108/ntpd          
udp        0      0 :::123                      :::*                                    1108/ntpd          
udp        0      0 :::45564                    :::*                                    3422/java

4. Testing

Open a browser and visit the address the front-end httpd listens on; the results are shown below:

[screenshots: two responses through the front-end httpd, the first served by tomcat1 and the second by tomcat2, both showing the same session information]

Comparing the two responses: the first request hit the test page on tomcat1, the second hit tomcat2, yet the session information stayed the same across both. This shows session replication between the two nodes is working.
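The same check can be scripted instead of refreshed by hand. A minimal sketch in plain Java (the front-end URL and test path are assumptions taken from this topology): it sends two requests, replays the JSESSIONID cookie from the first response, and prints both bodies so the serving node can be seen changing while the session ID does not.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SessionStickTest {
    public static void main(String[] args) throws Exception {
        String base = "http://192.168.0.200/test/"; // assumption: the httpd front end
        String cookie = null;
        for (int i = 1; i <= 2; i++) {
            HttpURLConnection conn = (HttpURLConnection) new URL(base).openConnection();
            if (cookie != null) {
                conn.setRequestProperty("Cookie", cookie); // replay the session cookie
            }
            String setCookie = conn.getHeaderField("Set-Cookie");
            if (setCookie != null && setCookie.startsWith("JSESSIONID")) {
                cookie = setCookie.split(";", 2)[0]; // keep only "JSESSIONID=..."
            }
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                System.out.println("request " + i + ", cookie = " + cookie);
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println("  " + line);
                }
            }
        }
    }
}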

5. Discussion and summary


    The official documentation recommends this Clustering/Session Replication technique only for small cluster environments. Users who have tested it report that once there are more than five back-end Tomcat nodes, overall cluster performance stops improving.

    Another question worth discussing: with session replication between nodes in production, should session stickiness (binding) be enabled in the httpd configuration? Even without sticky sessions, mod_proxy or mod_jk dispatches client requests to the back-end Tomcat nodes according to the configured scheduling algorithm; with sticky sessions enabled, a client keeps being dispatched to the same Tomcat node for a period of time. When the load-balancing cluster is built with mod_proxy, remember to enable failover (nofailover=Off); otherwise, if the Tomcat node a client was using fails, that client cannot fail over. That said, this appears to be the default.

    In a load-balanced environment with cluster session replication, whether to enable session stickiness is, in my view, something to keep adjusting based on continuous monitoring of the load on the scheduler node and on each back-end Tomcat node.

This article comes from the blog 专注Linx,与Linux共舞; please keep this attribution: http://zhaochj.blog.51cto.com/368705/1650728


Original source: 实现Tomcat 8 Clustering/Session Replication环境 - 专注Linx,与Linux共舞 - 51CTO技术博客
冷日 (冷日)
Webmaster
  • Registered: 2008/2/19
  • Posts: 15771
Re: [Repost] Tomcat cluster configuration notes
Posting another cluster configuration write-up; this one feels more concrete.
For a clustered web application, the hardest implementation problem is keeping data consistent across the cluster's nodes, and session data is the most important piece of that data. Broadly there are two ways to do it. One puts all session data on a single server or in a database, and every node in the cluster fetches it from that session server. The other synchronizes session data across all nodes, so every node holds a full copy. Each has its strengths: the first is simple and easy to implement, but a session-server failure takes the whole system down; the second is more reliable, since no single node failure affects how the system responds to clients, but it is technically more complex. Mainstream platforms and middleware such as Microsoft ASP.NET and IBM WAS support both sharing modes; so does Tomcat, which usually takes the second approach.

Cluster notes:
1. Load balancing: when the same client issues a request, Apache always forwards it to the same node (sticky session); when another client, or the same client from a new browser window, issues a request, Apache dispatches it to another node, round-robin. Dispatch weights for the back-end Tomcats can of course be set on Apache. This achieves the load-balancing effect.
2. High availability: when one Tomcat server suddenly crashes, Apache dispatches the in-flight requests to the other Tomcat servers in the cluster. Because the cluster members already replicate sessions, the original session continues on another node; the request migrates seamlessly, and the client never notices that a failure occurred.
* Tomcat performs in-memory session replication through the SimpleTcpCluster class. A Tomcat cluster determines group membership via multicast heartbeat packets (data transfer and other exchanges use TCP). Every node sends multicast heartbeats regularly (every 500 ms by default) at startup and while running, and nodes in the same cluster listen on the same multicast address and port. A node that sends no multicast packet within the dropTime (3 s by default) is considered dead and removed from the cluster. Session replication requests and session updates travel over direct TCP connections between cluster members; that is, when replicating a session, a node opens a TCP connection straight to the other nodes.
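Since the heartbeats are ordinary UDP multicast datagrams, the membership traffic can be watched from any host on the same segment. A minimal sketch, assuming the default McastService settings used throughout this thread (group 228.0.0.4, port 45564):

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class HeartbeatWatcher {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("228.0.0.4"); // McastService address
        try (MulticastSocket socket = new MulticastSocket(45564)) { // McastService port
            socket.joinGroup(group);
            byte[] buf = new byte[1024];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // roughly one packet every 500 ms per live node
                System.out.printf("heartbeat from %s, %d bytes%n",
                        packet.getAddress().getHostAddress(), packet.getLength());
            }
        }
    }
}

A node that stops appearing here for longer than dropTime is exactly the node the cluster will drop.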

Configuration:

1. Apache configuration
Listen 8051
<VirtualHost *:8051>
   ServerAdmin root@99bill.com
   ServerName  localhost
   ServerAlias localhost
   ProxyPass /myweb  balancer://cluster/myweb stickysession=JSESSIONID|jsessionid lbmethod=byrequests timeout=5 maxattempts=3
   ProxyPassReverse  / balancer://cluster
   ProxyRequests Off
   ProxyPreserveHost On
   ErrorLog "logs/tctest_error.log"
   CustomLog  "logs/tctest_access.log" common
<proxy balancer://cluster>
BalancerMember ajp://192.168.55.229:8009 route=jvm_a
BalancerMember ajp://192.168.55.231:8009 route=jvm_b
</proxy>
</VirtualHost>

When creating a session, Tomcat appends the jvmRoute value (configured below) to the session ID, e.g. 167A7621C8ACEF496A0E3D7720F7C35E.jvm1. On later requests from an established session the route value is present, so Apache applies the sticky session and keeps dispatching the request to the Tomcat server visited last time; a first request is dispatched according to the configured rules.
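Since the route is simply the suffix after the last dot, it can be read back out of any session ID. A small illustrative helper (not part of Tomcat's API):

// Splits "167A7621C8ACEF496A0E3D7720F7C35E.jvm1" into id and route.
public class SessionRoute {
    public static String routeOf(String sessionId) {
        int dot = sessionId.lastIndexOf('.');
        return (dot < 0) ? null : sessionId.substring(dot + 1);
    }

    public static void main(String[] args) {
        System.out.println(routeOf("167A7621C8ACEF496A0E3D7720F7C35E.jvm1")); // jvm1
    }
}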

2. Tomcat configuration

2-1. Edit server.xml. On the two nodes, change the Engine element to, respectively:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm_a">
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm_b">

2-2. In server.xml, add the following between <Engine> and </Engine>:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
       channelSendOptions="8">
      <Manager className="org.apache.catalina.ha.session.DeltaManager"
       expireSessionsOnShutdown="false"
       notifyListenersOnReplication="true"/>

Manager copies sessions between nodes. The default is DeltaManager, which works all-to-all: each node copies session data to every other node, whether or not the other node has the current application deployed. When the cluster has many nodes running different applications, BackupManager can be used instead; it copies sessions only to nodes where the current application is deployed. So far, though, BackupManager has not been through large-scale testing and is less proven than DeltaManager.
       <Channel className="org.apache.catalina.tribes.group.GroupChannel">
       <Membership className="org.apache.catalina.tribes.membership.McastService"
                   address="228.0.0.4"
                   port="45564"
                   frequency="500"
                   dropTime="3000"/>

Membership discovers the other nodes in the cluster. The address here is a multicast address; nodes using the same multicast address and port belong to the same sub-cluster, so by customizing the multicast address and port a large Tomcat cluster can be divided into several sub-clusters.
       <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                 address="auto"
                 port="4000"
                 autoBind="100"
                  selectorTimeout="5000"
                 maxThreads="6"/>

The Receiver is how each node receives data sent by the other nodes. In the default configuration Tomcat picks the first available port in the 4000-4100 range. When configuring it by hand, note that multiple Tomcat nodes on the same physical server must use different ports.
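autoBind="100" is what implements that picking: the Receiver tries port 4000 and, if it is taken, walks forward through up to 100 ports. The same probing logic sketched in plain Java (illustrative only; Tomcat's NioReceiver does this internally):

import java.io.IOException;
import java.net.ServerSocket;

public class AutoBindDemo {
    // Mimics the Receiver's autoBind: try port, port+1, ..., port+range-1.
    static int firstFreePort(int port, int range) throws IOException {
        for (int p = port; p < port + range; p++) {
            try (ServerSocket probe = new ServerSocket(p)) {
                return p; // free; Tomcat would keep such a socket open and listen on it
            } catch (IOException alreadyInUse) {
                // occupied, e.g. by another node on the same host; try the next port
            }
        }
        throw new IOException("no free port in [" + port + ", " + (port + range) + ")");
    }

    public static void main(String[] args) throws IOException {
        System.out.println("would bind to port " + firstFreePort(4000, 100));
    }
}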
       <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
       <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
       </Sender>

Sender transmits data to the other nodes; the concrete implementation is configured through the Transport element.
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
      </Channel>

A Channel is an abstract port, similar to a socket; cluster members send and receive messages through it.
      <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                 filter=""/>

A Valve lets a node inspect or act on a response before it is returned to the client. ReplicationValve checks whether the current response involved a session-data update and, if so, triggers the session copy. filter filters requests: client requests for images, CSS, and JS never touch the session, so they need no check. By default nothing is filtered and every response is checked.
Use the following in production:
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>

That is, no session replication work is done for accesses to static pages, images, and the like.
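The filter patterns can be sanity-checked offline. A small illustrative snippet that applies this semicolon-separated filter to a few made-up request URIs:

import java.util.regex.Pattern;

public class FilterCheck {
    public static void main(String[] args) {
        // The semicolon-separated filter from the Valve above.
        String filter = ".*\\.gif;.*\\.js;.*\\.jpg;.*\\.htm;.*\\.html;.*\\.txt;";
        String[] uris = { "/myweb/logo.gif", "/myweb/app.js", "/myweb/index.jsp" };
        for (String uri : uris) {
            boolean skipped = false;
            for (String p : filter.split(";")) {
                if (!p.isEmpty() && Pattern.matches(p, uri)) {
                    skipped = true; // static resource: skip the replication check
                    break;
                }
            }
            System.out.println(uri + (skipped ? " -> filtered out" : " -> checked for session changes"));
        }
    }
}

Here /myweb/index.jsp is the only URI that would still be checked for session changes.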
      <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

With this option set, when a node crashes and the access jumps to another node, the session ID is re-bound: the new node's jvmRoute value is attached to the previous session ID.
      <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
      <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
      </Cluster>

2-3. Edit webapps/myweb/WEB-INF/web.xml
Add the <distributable/> tag; that is, simply add the following between <web-app> and </web-app>:
<distributable/>

2-4. Create a test application myweb under webapps (for this experiment you can simply cp -rf examples myweb).
Change index.jsp to the following:
Tomcat_a:
<%@ page contentType="text/html; charset=GBK" %>
<%@ page import="java.util.*" %>
<html><head><title>Cluster Test</title></head>
<body>
<%
  //HttpSession session = request.getSession(true);
  System.out.println(session.getCreationTime());
  out.println("<br> SESSION ID:" + session.getId()+"<br>";
  out.println("Session serviced by master"+"<br>";
  out.println("Session created time is :"+session.getCreationTime()+"<br>";
%>
</body>
</html>

(When accessed, the page displays the session ID, the server name, and the session creation time.)

Tomcat_b:
<%@ page contentType="text/html; charset=GBK" %>
<%@ page import="java.util.*" %>
<html><head><title>Cluster Test</title></head>
<body>
<%
  //HttpSession session = request.getSession(true);
  System.out.println(session.getCreationTime());
  out.println("<br> SESSION ID:" + session.getId()+"<br>";
  out.println("Session serviced by node2"+"<br>";
  out.println("Session created time is :"+session.getCreationTime()+"<br>";
%>
</body>
</html>

(When accessed, the page displays the session ID, the server name, and the session creation time.)
Note: 1. If two servers are used, their clocks must be kept completely synchronized; use an NTP server.
2. Set the hostname and edit /etc/hosts accordingly.
The Tomcat startup log then looks like this:
Jul 7, 2011 12:53:38 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance
in production environments was not found on the java.library.path:
/usr/local/jdk1.6.0_05/jre/lib/amd64/server:/usr/local/jdk1.6.0_05/jre/lib/amd64:/usr/local/jdk1.6.0_05/jre/../lib/amd64:/usr/java/packages/lib/amd64:/lib:/usr/lib
Jul 7, 2011 12:53:38 PM org.apache.tomcat.util.digester.Digester endElement
WARNING:   No rules found matching 'Server/Service/Engine/Cluster/Channel/MemberShip'.
Jul 7, 2011 12:53:38 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server/Service/Engine/Cluster/Channel/Receiver} Setting property 'seceltorTimeout' to '5000' did not find a matching property.
Jul 7, 2011 12:53:38 PM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Jul 7, 2011 12:53:38 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 778 ms
Jul 7, 2011 12:53:38 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Jul 7, 2011 12:53:38 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.30
Jul 7, 2011 12:53:38 PM org.apache.catalina.ha.tcp.SimpleTcpCluster start
INFO: Cluster is about to start
Jul 7, 2011 12:53:38 PM org.apache.catalina.tribes.transport.ReceiverBase bind
INFO: Receiver Server Socket bound to:/192.168.55.231:4000
Jul 7, 2011 12:53:38 PM org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
INFO: Setting cluster mcast soTimeout to 500
Jul 7, 2011 12:53:38 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:4
Jul 7, 2011 12:53:38 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added: org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 229}:4000,{192, 168, 55, 229},4000, alive=147616,id={-115 -53 23 90 -40 -79 74 -54 -90 115 -116 85 81 -106 51 73 }, payload={}, command={}, domain={}, ]
Jul 7, 2011 12:53:38 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added: org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 231}:4001,{192, 168, 55, 231},4001, alive=8077,id={90 53 3 75 21 83 64 89 -74 -72 34 -92 -19 -97 93 12 }, payload={}, command={}, domain={}, ]
Jul 7, 2011 12:53:39 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:4
Jul 7, 2011 12:53:39 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added: org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 231}:4000,{192, 168, 55, 231},4000, alive=1008,id={-15 24 -37 103 96 125 77 20 -79 -51 38 52 38 101 -128 -108 }, payload={}, command={}, domain={}, ]
Jul 7, 2011 12:53:39 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:8
Jul 7, 2011 12:53:39 PM org.apache.catalina.tribes.io.BufferPool getBufferPool
INFO: Created a buffer pool with max size:104857600 bytes of type: org.apache.catalina.tribes.io.BufferPool15Impl
Jul 7, 2011 12:53:40 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:8
Jul 7, 2011 12:53:40 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor host-manager.xml
Jul 7, 2011 12:53:40 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor manager.xml
Jul 7, 2011 12:53:41 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory moni2
Jul 7, 2011 12:53:41 PM org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(/usr/local/apache-tomcat-6.0.30/webapps/moni2/WEB-INF/lib/j2ee.jar) -
jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
Jul 7, 2011 12:53:41 PM org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(/usr/local/apache-tomcat-6.0.30/webapps/moni2/WEB-INF/lib/servlet-api-2.4.jar) -
jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
Jul 7, 2011 12:53:41 PM org.apache.catalina.ha.session.DeltaManager start
INFO: Register manager /moni2 to cluster element Engine with name Catalina
Jul 7, 2011 12:53:41 PM org.apache.catalina.ha.session.DeltaManager start
INFO: Starting clustering manager at /moni2
Jul 7, 2011 12:53:41 PM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
WARNING: Manager [localhost#/moni2], requesting session state from org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 229}:4000,{192, 168, 55, 229},4000, alive=150126,id={-115 -53 23 90 -40 -79 74 -54 -90 115 -116 85 81 -106 51 73 }, payload={}, command={}, domain={}, ].
This operation will timeout if no session state has been received within 60 seconds.
Jul 7, 2011 12:53:41 PM org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
INFO: Manager [localhost#/moni2]; session state send at 7/7/11 12:53 PM received in 113 ms.
Jul 7, 2011 12:55:24 PM org.apache.catalina.ha.tcp.SimpleTcpCluster start
INFO: Cluster is about to start
Jul 7, 2011 12:55:24 PM org.apache.catalina.tribes.transport.ReceiverBase bind
INFO: Receiver Server Socket bound to:/192.168.55.231:4000
Jul 7, 2011 12:55:24 PM org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
INFO: Setting cluster mcast soTimeout to 500
Jul 7, 2011 12:55:24 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:4
Jul 7, 2011 12:55:24 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added: org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 231}:4001,{192, 168, 55, 231},4001, alive=114038,id={90 53 3 75 21 83 64 89 -74 -72 34 -92 -19 -97 93 12 }, payload={}, command={}, domain={}, ]
Jul 7, 2011 12:55:25 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added: org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 229}:4000,{192, 168, 55, 229},4000, alive=254053,id={-115 -53 23 90 -40 -79 74 -54 -90 115 -116 85 81 -106 51 73 }, payload={}, command={}, domain={}, ]
Jul 7, 2011 12:55:25 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:4
Jul 7, 2011 12:55:25 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:8
Jul 7, 2011 12:55:25 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added: org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 231}:4000,{192, 168, 55, 231},4000, alive=1007,id={-105 -85 -108 -38 -90 97 71 126 -124 -104 86 -113 42 -65 -116 85 }, payload={}, command={}, domain={}, ]
Jul 7, 2011 12:55:25 PM org.apache.catalina.tribes.io.BufferPool getBufferPool
INFO: Created a buffer pool with max size:104857600 bytes of type: org.apache.catalina.tribes.io.BufferPool15Impl
Jul 7, 2011 12:55:26 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:8
Jul 7, 2011 12:55:26 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor host-manager.xml
Jul 7, 2011 12:55:26 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Deploying configuration descriptor manager.xml
Jul 7, 2011 12:55:27 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory moni2
Jul 7, 2011 12:55:27 PM org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(/usr/local/apache-tomcat-6.0.30/webapps/moni2/WEB-INF/lib/j2ee.jar) -
jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
Jul 7, 2011 12:55:27 PM org.apache.catalina.loader.WebappClassLoader validateJarFile
INFO: validateJarFile(/usr/local/apache-tomcat-6.0.30/webapps/moni2/WEB-INF/lib/servlet-api-2.4.jar) -
jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: javax/servlet/Servlet.class
Jul 7, 2011 12:55:27 PM org.apache.catalina.ha.session.DeltaManager start
INFO: Register manager /moni2 to cluster element Engine with name Catalina
Jul 7, 2011 12:55:27 PM org.apache.catalina.ha.session.DeltaManager start
INFO: Starting clustering manager at /moni2
Jul 7, 2011 12:55:27 PM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
WARNING: Manager [localhost#/moni2], requesting session state from org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 229}:4000,{192, 168, 55, 229},4000, alive=256061,id={-115 -53 23 90 -40 -79 74 -54 -90 115 -116 85 81 -106 51 73 }, payload={}, command={}, domain={}, ].
This operation will timeout if no session state has been received within 60 seconds.
Jul 7, 2011 12:55:27 PM org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
INFO: Manager [localhost#/moni2]; session state send at 7/7/11 12:55 PM received in 113 ms.
log4j:WARN No appenders could be found for logger (org.springframework.web.context.ContextLoader).
log4j:WARN Please initialize the log4j system properly.
Jul 7, 2011 12:55:28 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory examples
Jul 7, 2011 12:55:28 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory docs
Jul 7, 2011 12:55:28 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory myweb
Jul 7, 2011 12:55:28 PM org.apache.catalina.ha.session.DeltaManager start
INFO: Register manager /myweb to cluster element Engine with name Catalina
Jul 7, 2011 12:55:28 PM org.apache.catalina.ha.session.DeltaManager start
INFO: Starting clustering manager at /myweb
Jul 7, 2011 12:55:28 PM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
WARNING: Manager [localhost#/myweb], requesting session state from org.apache.catalina.tribes.membership.MemberImpl
[tcp://{192, 168, 55, 229}:4000,{192, 168, 55, 229},4000, alive=257568,id={-115 -53 23 90 -40 -79 74 -54 -90 115 -116 85 81 -106 51 73 }, payload={}, command={}, domain={}, ].
This operation will timeout if no session state has been received within 60 seconds.
Jul 7, 2011 12:55:28 PM org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
INFO: Manager [localhost#/myweb]; session state send at 7/7/11 12:55 PM received in 104 ms.
Jul 7, 2011 12:55:28 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory ROOT
Jul 7, 2011 12:55:28 PM org.apache.catalina.ha.session.JvmRouteBinderValve start
INFO: JvmRouteBinderValve started
Jul 7, 2011 12:55:28 PM org.apache.coyote.http11.Http11Protocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Jul 7, 2011 12:55:28 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
Jul 7, 2011 12:55:28 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/20  config=null
Jul 7, 2011 12:55:28 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 4309 ms

The log shows that node-2 has been added to the group membership.

Testing
1. Open the test address in a browser; the page shows that the request was served by tomcat_a. Then check the log:
cat /usr/local/apache-tomcat-6.0.30/logs/localhost.2011-07-07.log

The following can be seen:
Jul 7, 2011 12:55:30 PM org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: sessionDestroyed('1260F261596C49A76141FA3949F0E02B.jvm_a')

This indicates that session replication succeeded.

2. Simulating a failure
Stop the tomcat_a machine, then keep refreshing the browser. The session ID and session creation time stay unchanged while the jvmRoute and serving Tomcat server change: the experiment succeeds.

Original source: 笔记-tomcat的Host配置 - 分布式系统架构-炼数成金-Dataguru专业数据分析社区
冷日 (冷日)
Webmaster
  • Registered: 2008/2/19
  • Posts: 15771
[Repost] Tomcat Cluster Configuration Notes

Tomcat cluster configuration notes

kjfcpua 2012-03-11

I. Environment
1. Three servers: one runs the Apache service, two run Tomcat.

2. Components used and download locations:

apache_2.2.11-win32-x86-no_ssl.msi

http://httpd.apache.org/download.cgi

Tomcat 6 (zip distribution)
http://tomcat.apache.org/download-60.cgi
Apache's JK connector (Windows build), used as an Apache module; the site also documents how to use the configuration file:

mod_jk-1.2.28-httpd-2.2.3.so


http://www.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/win32/jk-1.2.28/

3. IP assignment

The Apache machine's IP is 192.168.1.50; the two Tomcat machines are 192.168.1.24 and 192.168.1.52 (assign according to your own environment).


II. Installation

1. Install JDK 6 (at least JDK 5) on both Tomcat machines.

2. Configure the JDK path: add the JDK's bin directory to the PATH environment variable, and create a JAVA_HOME variable pointing at the JDK install path.

3. Install Tomcat and verify that it starts normally:

http://192.168.1.24:8080

http://192.168.1.52:8080


III. Load-balancing configuration

1. Install the Apache service on the 192.168.1.50 machine; my install path is the default: D:/Program Files/Apache Software Foundation/Apache2.2

2. After installation, verify that Apache starts normally at http://192.168.1.50, as shown:

[screenshot: the default Apache test page]
3. Copy mod_jk-1.2.28-httpd-2.2.3.so into Apache's modules directory.

4. Edit Apache's configuration file httpd.conf and append the following:

# Load the JK connector
LoadModule jk_module modules/mod_jk-1.2.28-httpd-2.2.3.so
# The load-balancing configuration file defining the balancing rules
JkWorkersFile conf/workers.properties
# Log file
JkLogFile logs/mod_jk.log
# Log level
JkLogLevel debug
# Which requests Apache hands to JK for balancing
JkMount /*.jsp loadbalancer
JkMount /test/* loadbalancer



5. Edit each Tomcat's conf/server.xml.

Before:

<!-- An Engine represents the entry point (within Catalina) that processes
every request.  The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->

<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost">

After:

<!-- You should set jvmRoute to support load-balancing via AJP ie : -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">

Set jvmRoute="node1" in one Tomcat and jvmRoute="node2" in the other.



6. Create a workers.properties file in Apache's conf directory with the following content:

# Define the communication channel

# All worker names; they correspond to the jvmRoute values configured in
# the Tomcat server.xml files above. loadbalancer is a virtual worker that
# performs the balancing; think of it as Apache itself.
worker.list=node1,node2,loadbalancer

# node1 talks to JK over AJP.
# Worker port; no need to change it unless it is already taken.
worker.node1.port=8009
# Address of node1; if Tomcat runs on another server, use that server's address.
worker.node1.host=192.168.1.24
worker.node1.type=ajp13
# Load-balancing factor.
worker.node1.lbfactor=100

# Worker port; no need to change it unless it is already taken.
worker.node2.port=9009
# Address of the node2 server.
worker.node2.host=192.168.1.52
worker.node2.type=ajp13
# Load factor; the higher the value, the more load JK sends to this Tomcat.
worker.node2.lbfactor=100

# loadbalancer is of type "load balancer" (lb).
worker.loadbalancer.type=lb
# The workers that loadbalancer balances across.
worker.loadbalancer.balanced_workers=node1,node2
worker.loadbalancer.sticky_session=false
worker.loadbalancer.sticky_session_force=false
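With both lbfactor values equal, requests split roughly evenly; a larger factor draws proportionally more traffic. A small simulation of weighted selection (illustrative only, not mod_jk's actual scheduling code):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class LbFactorDemo {
    public static void main(String[] args) {
        // Weights mirror the lbfactor values in workers.properties above.
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("node1", 100);
        weights.put("node2", 100);

        int total = weights.values().stream().mapToInt(Integer::intValue).sum();
        Map<String, Integer> hits = new LinkedHashMap<>();
        Random rnd = new Random(42);

        for (int i = 0; i < 10_000; i++) {
            int pick = rnd.nextInt(total); // a point on the combined weight line
            for (Map.Entry<String, Integer> e : weights.entrySet()) {
                pick -= e.getValue();
                if (pick < 0) { // landed inside this worker's share
                    hits.merge(e.getKey(), 1, Integer::sum);
                    break;
                }
            }
        }
        System.out.println(hits); // roughly {node1=5000, node2=5000}
    }
}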




7. Create the same application under webapps in both Tomcat install directories; mine is named test. Give both copies the same WEB-INF directory and a page test.jsp with the following content:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="GBK" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>helloapp</title>
</head>
<body>
<%
    System.out.println("call test.jsp"); // print some trace output on the Tomcat console
%>
SessionID: <%= session.getId() %>
</body>
</html>

8. Restart the Apache server and both Tomcat servers. Load balancing is now configured; test it at http://192.168.1.50/test/test.jsp. If the page works, load balancing is established.


IV. Cluster configuration

1. Configure the Tomcat cluster on top of the load balancing above.
2. Edit each Tomcat's conf/server.xml as follows.

Before:

<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->

After:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                bind="192.168.1.100"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

Note: bind is the IP address of the machine this Tomcat server runs on.

3. Restart both Tomcats; the Tomcat cluster configuration is complete.


V. Application configuration

For every application under the Tomcat webapps directory that takes part in the load balancing and clustering, add the following line to its WEB-INF/web.xml:

<distributable/>

After configuration:

<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.4"
    xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

  <display-name>test</display-name>
  <distributable/>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>


VI. Testing the cluster

Restart the Apache server and both Tomcat servers, then open http://192.168.1.50/test/test.jsp. The result:

[screenshot: test.jsp output showing the session ID]

If the session ID stays the same within one browser, the cluster is configured successfully.


Original source: Tomcat集群配置笔记 - Lai18.com IT技术文章收藏夹
冷日 (冷日)
Webmaster
  • Registered: 2008/2/19
  • Posts: 15771
[Repost] Tomcat Load Balancing and Clustering Notes

Tomcat load balancing and clustering notes

Uploaded by liuyun2026, 2012-11-28
1. Installing Apache
On Windows there is nothing special to explain.
On Linux, first download the Apache source package, e.g. httpd-2.2.13.tar.bz2.
Run

bunzip2 httpd-2.2.13.tar.bz2

to extract the tar file, then run

tar -xvf httpd-2.2.13.tar

to unpack the directory. Change into the httpd-2.2.13 directory and run

./configure --prefix=<target path> --enable-modules='proxy proxy_ajp proxy_balancer proxy_connect proxy_http'

Run

make

then run

make install

and Apache is installed under the target path. Enter bin under the install path and run

./apachectl -k start

to start Apache.
2. Load balancing with mod_proxy
On Linux the modules have to be compiled in statically, so the steps above have already brought the proxy modules in. On Windows they can be loaded dynamically: go into <apache install path>/conf, open httpd.conf, and uncomment the following lines:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_http_module modules/mod_proxy_http.so

On both Windows and Linux, also uncomment:
Include conf/extra/httpd-vhosts.conf

Then find the httpd-vhosts.conf file under conf/extra, back it up, and add the following:
<VirtualHost *:8000>
ProxyRequests Off
<Proxy balancer://cluster>
BalancerMember ajp://10.8.10.115:8009 loadfactor=1 route=jvm1
BalancerMember ajp://10.8.10.115:9009 loadfactor=1 route=jvm2
</Proxy>
ProxyPass / balancer://cluster/ lbmethod=byrequests stickysession=JSESSIONID nofailover=off timeout=5 maxattempts=3
ProxyPassReverse / balancer://cluster/
ErrorLog "logs/study.com-error_log"
CustomLog "logs/study.com-access_log" common
</VirtualHost>

8000 above is the port the Apache server listens on, and two nodes are configured. Apache's configuration is now complete; next, Tomcat must be configured to match it.


3. Tomcat configuration
Uncomment the <Engine ... jvmRoute="jvm1"> element (setting the route values used above), and expand the commented-out <Cluster> element to configure the cluster for Tomcat:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
mcastBindAddress="127.0.0.1"
address="228.0.0.4"
port="45564"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

According to Tomcat's official documentation, uncommenting the cluster element should be enough; in practice a single-machine cluster did not work that way and sessions would not synchronize. The mcastBindAddress="127.0.0.1" attribute shown above had to be added.
After all that, one more important point: the web.xml of the deployed project must include:
<distributable/>

Good; clustering and load balancing are now in place.
The setup is session-sticky, yet when one Tomcat dies, requests are immediately forwarded to another Tomcat and the session is replicated there, which is a very pleasant thing. If the cluster is built across multiple machines, set both mcastBindAddress and address in the configuration above to the local machine's address. In a Linux environment, you also need to change the above …


4. Clustering and load balancing with mod_jk
Download mod_jk-1.2.28-httpd-2.2.3.so and put it into the modules directory of the Apache install. Open httpd.conf and add:
LoadModule jk_module modules/mod_jk-1.2.28-httpd-2.2.3.so
JkWorkersFile conf/workers.properties
JkMount /*.jsp controller

Create a workers.properties file under conf in the Apache install directory, containing:
worker.list = controller,tomcat1,tomcat2
#========tomcat1========
worker.tomcat1.port=8009
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor = 1
#========tomcat2========
worker.tomcat2.port=9009
worker.tomcat2.host=localhost
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor = 1
#========controller, the load balancer========
worker.controller.type=lb
worker.controller.balanced_workers=tomcat1,tomcat2
worker.controller.sticky_session=1

That's it. The Tomcat configuration is the same as described above.

Original source: Tomcat负载均衡和集群笔记_百度文库