SG Cluster
1
SG Cluster
  • Friendly admin user interface
  • Load balancing (based on various policies)
  • Fault tolerant (no single point of failure)
  • Client transparent
  • Robust against Denial-of-Service attacks
  • Makes an existing application into a scalable
    system with little or no modification
  • Minimizes the effort required to design new
    scalable services (various read/write models,
    clean API)

2
SG Cluster
  • Client Transparent
  • Scalable
  • Extensible
  • Manageable
  • Load Balancing
  • Fault Tolerant
  • High Availability

3
Physical Wiring
[Diagram: the primary and backup SG load balancers
connect Internet access (140.116.72.114) to the
server pool (192.168.1.1-192.168.1.5) through two
redundant high-speed switches.]
4
Logical View
[Diagram: user requests to the virtual server
addresses 140.116.72.114:23 and 140.116.72.115:23
arrive at the SG load balancer (140.116.72.219),
which maps them onto the real servers
192.168.1.1:23-192.168.1.5:23.]
5
SG process interaction
[Diagram: on the SG node (140.116.72.219), NATD and
mrouted handle the IP packet flow toward the real
servers (192.168.1.1-192.168.1.3); bidd exchanges
heartbeats with the bidds on other SG nodes for SG
failover; SGmon probes each server ("Alive?"); SGhb,
SGctrld and SGcmd update the Server Group Properties
through the feedback protocol.]
6
bidd - Symmetric IP failover mechanism for SG
  • uses only one private IP for all nodes
  • communicates through a multicast IP address
  • uses a bidding model for master election
  • fully symmetric: each node can have exactly the
    same configuration
  • independent of the service it supports
  • monitors the service process through the process
    table
  • Syntax
  • bidd group_ip port heartbeat_interval
    master_timeout bid_timeout start_script
    stop_script continue_script
    process_name_to_be_monitored
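For example, a hypothetical invocation following
this syntax (every value below is made up for
illustration, not taken from the slides):

  bidd 224.1.1.1 7000 2 10 5 /etc/sg/start.sh /etc/sg/stop.sh /etc/sg/continue.sh natd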

7
bidd startup scenario
[Diagram: three nodes A, B and C, each configured
with initial IP 10.0.0.1 and group IP 224.1.1.1.]
Node startup sequence: A -> B -> C. Each node sets
its initial IP, opens a socket on the interface with
the initial IP, and joins the group IP with that
socket. The initial IP of a node will be disabled by
the other nodes because of ARP; the group IP stays
active because group IP communication doesn't use
ARP. A sketch of this startup step follows.
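A minimal sketch of that startup step, assuming the
addresses from the diagram (the function name, UDP
transport and error handling are illustrative, not
from the slides):

  /* sketch: open a socket on the initial IP's interface and join
     the bidd group IP */
  #include <string.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  int bidd_open_group_socket(const char *initial_ip, const char *group_ip,
                             unsigned short port)
  {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      struct sockaddr_in addr;
      struct ip_mreq mreq;

      if (fd < 0)
          return -1;
      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port = htons(port);
      addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* receive group traffic */
      if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
          return -1;

      /* join the group on the interface that carries the initial IP */
      mreq.imr_multiaddr.s_addr = inet_addr(group_ip);   /* e.g. 224.1.1.1 */
      mreq.imr_interface.s_addr = inet_addr(initial_ip); /* e.g. 10.0.0.1  */
      if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                     &mreq, sizeof(mreq)) < 0)
          return -1;
      return fd;
  }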
8
Mapping multicast IP into Ethernet address
The high-order 4 bits of a class D IP address
(224.0.0.0 - 239.255.255.255) are 1110. The next 5
bits of the multicast group ID are not used to form
the Ethernet address; only the low-order 23 bits of
the group ID are copied into the Ethernet address,
giving the 48-bit range 01:00:5e:00:00:00 -
01:00:5e:7f:ff:ff. Since the mapping is not unique,
the device or IP module must perform filtering.
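The copy rule above is mechanical, so a small sketch
can make it concrete (the function name is
illustrative):

  /* sketch: map a class D IP address to its Ethernet multicast address */
  #include <stdint.h>
  #include <arpa/inet.h>

  void mcast_ip_to_ether(struct in_addr group, uint8_t mac[6])
  {
      uint32_t ip = ntohl(group.s_addr);
      mac[0] = 0x01; mac[1] = 0x00; mac[2] = 0x5e;  /* fixed prefix */
      mac[3] = (ip >> 16) & 0x7f;   /* low 23 bits of the group ID */
      mac[4] = (ip >> 8) & 0xff;
      mac[5] = ip & 0xff;
  }

For example, 224.1.1.1 maps to 01:00:5e:01:01:01;
224.129.1.1 maps to the same address, which is why
the filtering mentioned above is needed.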
9
State transition in bidd
[State diagram over four states - initial, Bid,
Master and Slave - driven by these events: master
heartbeat, master timeout, other higher/lower bid,
other higher/lower master, server process failure,
and signal HUP/INT. Winning a bid (no other higher
bid) runs the start script and makes the node
Master; seeing a higher master runs the stop script
and demotes it; remaining Master over lower bids and
lower masters runs the continue script; a Slave
re-enters bidding on master timeout; server process
failure or signal HUP/INT leads to exit.]
Message formats: Heartbeat = (0xff01, group, id,
price); Bid = (0xff02, group, price, id).
10
Stateful recovery of bidd (?)
Normal
[Diagram: the running service sends messages to the
master bidd, which multicasts its state and the
messages to the slave bidds; each bidd keeps a
message log. When the log is full, bidd signals the
service program to do a state transfer.]
11
Stateful recovery of bidd (?)
Recovering
[Diagram: a recovering bidd requests the state and
message log from the master bidd and replays them
into the starting service.]
12
NATD and the IP packet flow
[Diagram: packets travel between the network and
NATd through the kernel: interface -> routing table
-> divert socket -> NATd (read), then NATd (write)
-> divert socket -> routing table -> interface.
Inbound flow: 1-a-b-c-2; outbound flow: 3-a-b-c-4.]
On the divert socket, struct sockaddr.sin_addr.s_addr
is the IP of the interface for inbound packets and
INADDR_ANY for outbound ones. IPFW is used to
redirect packets from kernel space to user space
(divert socket). A sketch of the divert loop
follows.
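A minimal sketch of such a divert-socket loop on
FreeBSD, assuming an ipfw rule diverting traffic to
port 8668 (natd's conventional default); error
handling and the actual address rewriting are
trimmed:

  /* sketch: NATd-style read/rewrite/re-inject loop on a divert socket */
  #include <string.h>
  #include <unistd.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  #define DIVERT_PORT 8668

  int divert_loop(void)
  {
      int fd = socket(PF_INET, SOCK_RAW, IPPROTO_DIVERT);
      struct sockaddr_in bindaddr, from;
      socklen_t fromlen;
      char pkt[65535];
      ssize_t n;

      memset(&bindaddr, 0, sizeof(bindaddr));
      bindaddr.sin_family = AF_INET;
      bindaddr.sin_port = htons(DIVERT_PORT);
      bindaddr.sin_addr.s_addr = htonl(INADDR_ANY);
      if (fd < 0 || bind(fd, (struct sockaddr *)&bindaddr,
                         sizeof(bindaddr)) < 0)
          return -1;

      for (;;) {
          fromlen = sizeof(from);
          /* sin_addr is the interface IP for inbound packets,
             INADDR_ANY for outbound ones */
          n = recvfrom(fd, pkt, sizeof(pkt), 0,
                       (struct sockaddr *)&from, &fromlen);
          if (n < 0)
              break;
          /* ... rewrite addresses here (NAT) ... */
          /* re-inject the packet where it was diverted from */
          sendto(fd, pkt, n, 0, (struct sockaddr *)&from, fromlen);
      }
      return 0;
  }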
13
Read/Write Model
  • ReadAny
  • for TCP/UDP read-only services
  • ReadOne/WriteAll
  • for UDP read/write services
  • data is identical
  • ReadFirst/WriteAll
  • for UDP read/write services
  • data may be partitioned

14
ReadAny
[Diagram: a request to 140.116.72.114 is forwarded
by the SG load balancer to one real server
(192.168.1.1-192.168.1.3) of the virtual server.]
  • Operation
  • When a connection is created, SG selects a real
    server to serve the request
  • Benefit
  • TCP, UDP and ICMP are supported
  • No modification of the service program is
    required
  • Requirement/Limit
  • The data must be fully identical on all servers
  • If any data modification is required, it must be
    handled by a central database or file server on
    the backend

15
ReadOne/WriteAll
[Diagram: read requests to 140.116.72.114 go to a
single real server; write requests are multicast to
the whole group 234.116.72.114
(192.168.1.1-192.168.1.3).]
  • Operation
  • ReadOne
  • If no write is processing, read from any server
  • If a write is processing, change to
    read-preferred to guarantee a consistent view
    from the client
  • WriteAll
  • Multicast the write
  • Collect all replies
  • Any server replying failure is turned off
    immediately
  • Servers replying success are grouped by return
    value
  • Return the reply with the most copies
16
ReadOne/WriteAll (continued)
[Timing diagrams: under ReadOne, SG forwards the
read request to one server (S1) and relays its
reply; under WriteAll, SG multicasts the write
request to S1, S2 and S3, collects all write
replies, and then answers the client.]
17
ReadOne/WriteAll (continued)
  • Benefit
  • Supports both read and write operations
  • The service program doesn't have to care about
    the membership of the service group
  • Requirement/Limit
  • Data must be identical on all servers
  • Any session key generated by the service program
    must be deterministic
  • The service program needs little modification
  • Join the multicast group at startup
  • When making a write reply, the service program
    has to set an IP option representing the return
    status (SG uses this info to determine whether a
    write request succeeded or failed)
  • A new packet analyzer needs to be implemented to
    support protocols other than RPC

18
ReadFirst/WriteAll
[Diagram: read and write requests to 140.116.72.114
are multicast to the whole group 234.116.72.114
(192.168.1.1-192.168.1.3).]
  • Operation
  • ReadFirst
  • Multicast the read
  • Return the earliest reply to the client
  • WriteAll
  • Multicast the write
  • Collect all replies
  • Any server replying failure is turned off
    immediately
  • Servers replying success are grouped by return
    value
  • Return the reply with the most copies
19
ReadFirst/WriteAll (continued)
[Timing diagrams: under ReadFirst, SG multicasts the
read request to S1, S2 and S3 and returns the
earliest reply; under WriteAll, SG multicasts the
write request and collects all write replies before
answering the client.]
20
ReadFirst/WriteAll (continued)
  • Benefit
  • Supports both read and write operations
  • Data can be partitioned
  • Requirement/Limit
  • The service program has to know the membership
    for job assignment
  • Any session key generated by the service program
    must be deterministic
  • The service program needs little modification
  • Join the multicast group at startup
  • When making a read reply, use the membership
    info to determine which server is responsible
    for the request (servers not responsible for it
    just drop it)
  • When making a write reply, the service program
    has to set an IP option representing the return
    status (servers not responsible for the request
    just return OK)
  • A new packet analyzer needs to be implemented to
    support protocols other than RPC

21
Problem in using multicast
  • IGMP
  • IGMP uses a delay timer (max 10 sec) when
    reporting to a membership query, so an mrouter
    may take 10 seconds to learn the membership.
  • Members of the same group in the same subnet
    report only once.
  • mrouted
  • mrouted won't route multicast packets for the
    subnet of an alias IP by default.
  • mrouted generates IGMP membership queries at its
    initial stage but isn't guaranteed to catch the
    reports to those queries.

[Diagram: 10.0.0.1 is the initial IP on the SG
node's public interface, 140.116.72.114 is an alias
IP on the public interface, and 234.116.72.114
carries the membership for multicast writes; mrouted
on the SG node forwards the write request into the
server group (192.168.1.1-192.168.1.3).]
22
Problem in using multicast (continued)
  • Problem
  • SG uses alias IPs for its server groups
  • SG would not serve multicast requests before
    mrouted is ready
  • SG failover won't work for multicast services,
    since mrouted needs time to learn the
    membership; the transition time would be too
    long.
  • Solution
  • Dynamically generate the configuration file for
    mrouted based on the IP alias configuration of
    the interface
  • Add a signal handler to mrouted, enabling it to
    learn packet forwarding from the SG data
    structure in shared memory
  • Send the signal to mrouted whenever SG changes
    its multicast address.

23
Multicast service checklist
  • UDP based
  • Synchronized request-reply
  • Deterministic session handle/key
  • Unique ID for each pair of request and reply
  • (packet analyzer API)
  • Set IP option when making write reply
  • (multicast support routine)
  • Update properties on SG
  • (feedback protocol)

24
Multicast service checklist (continued)
packet analyzer routine
[Diagram: the same process-interaction picture as
slide 5, with the packet analyzer routine added
inside NATD and the mcast service support routines
added inside each server.]
25
Packet Analyzer API
A packet analyzer is used by NATD to parse the
request/reply packets of a multicast service, return
their unique IDs, and check whether a request is a
write or not. It should have the following API (a
hypothetical analyzer sketch follows the list):
  • int mcast_init_xxxx(void)
  • Initialize internal data structures
  • int mcast_check_port_xxxx(u_short port)
  • Return whether the service is located on a
    specific port
  • int mcast_check_request_xxxx(struct ip *pip,
    int *id, int *rwmode)
  • Validate the structure of a request packet
  • Get the unique ID and read/write mode of this
    request
  • int mcast_check_reply_xxxx(struct ip *pip,
    int *id)
  • Validate the structure of a reply packet
  • Get the unique ID of this reply
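As an illustration, a sketch of an analyzer for a
hypothetical UDP service "foo" whose payload begins
with a 32-bit request ID followed by a 32-bit
opcode; the port number, payload layout and write
opcode are assumptions for the example - only the
API shape comes from the slide:

  /* sketch: packet analyzer for a made-up UDP service "foo" */
  #include <string.h>
  #include <sys/types.h>
  #include <netinet/in.h>
  #include <netinet/ip.h>
  #include <netinet/udp.h>

  #define FOO_PORT  9999     /* hypothetical service port  */
  #define FOO_WRITE 2        /* hypothetical write opcode  */

  int mcast_init_foo(void) { return 0; }

  int mcast_check_port_foo(u_short port) { return port == FOO_PORT; }

  static const char *udp_payload(struct ip *pip)
  {
      /* skip the IP header (ip_hl is in 32-bit words), then UDP */
      struct udphdr *uh = (struct udphdr *)((char *)pip + (pip->ip_hl << 2));
      return (const char *)uh + sizeof(*uh);
  }

  int mcast_check_request_foo(struct ip *pip, int *id, int *rwmode)
  {
      const char *p = udp_payload(pip);
      u_int32_t reqid, op;
      memcpy(&reqid, p, 4);
      memcpy(&op, p + 4, 4);
      *id = ntohl(reqid);                   /* unique request id    */
      *rwmode = (ntohl(op) == FOO_WRITE);   /* 1 = write, 0 = read  */
      return 0;
  }

  int mcast_check_reply_foo(struct ip *pip, int *id)
  {
      u_int32_t reqid;
      memcpy(&reqid, udp_payload(pip), 4);
      *id = ntohl(reqid);                   /* same id as the request */
      return 0;
  }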

26
Mcast Service Support Routines
These support routines are used by a service program
to set the status and return value of an mcast
request into the IP option, which SG uses to
determine whether a reply is successful or not. A
usage sketch follows the list.
  • int sock_joingroup(int sockfd, struct in_addr
    groupaddr, int ttl)
  • join sockfd into the group groupaddr
  • used after the creation of the server socket
  • int prepare_ipopt_mcast(u_short type, int
    retval)
  • set the return type and return value into a
    global
  • used before returning from a write function of
    an mcast service
  • int sock_set_ipopt_mcast(int sockfd)
  • set the IP option with the values set in
    prepare_ipopt_mcast
  • used before sending the reply
  • int sock_clear_ipopt_mcast(int sockfd)
  • clear the IP option
  • used after sending the reply
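A minimal sketch of how a write handler might string
these calls together; do_write(), req and the reply
buffer are hypothetical placeholders, while the
support routines and MCAST_SUCCESS come from this
slide and the NFS example on slide 49:

  /* sketch: call order inside the write path of an mcast service */
  #include <sys/socket.h>

  int handle_write(int sockfd, void *req,
                   struct sockaddr *cli, socklen_t clilen)
  {
      char reply[1024];
      int len = do_write(req, reply, sizeof(reply));  /* service logic */

      /* record status and return value for the IP option */
      prepare_ipopt_mcast(MCAST_SUCCESS, len);
      sock_set_ipopt_mcast(sockfd);      /* before sending the reply */
      sendto(sockfd, reply, len, 0, cli, clilen);
      sock_clear_ipopt_mcast(sockfd);    /* after sending the reply  */
      return len;
  }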

27
Feedback Protocol
The feedback protocol is used to update group or
server properties on SG.
  • UDP based
  • Command message fields: id, handle, class, op,
    group, server, property, datalen, data
  • Result message fields: id, datalen, data, status
    (a struct sketch of these messages follows)
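The slide gives only the field names, so this C
layout is a hedged sketch - the field widths and the
data buffer size are assumptions:

  /* sketch: one possible layout of the feedback protocol messages */
  #include <stdint.h>
  #include <netinet/in.h>

  struct sg_cmd_msg {                /* command message */
      uint32_t id;                   /* echoed back in the result */
      uint32_t handle;
      uint32_t class;
      uint32_t op;                   /* e.g. OP_SET */
      struct in_addr group;
      struct in_addr server;
      uint32_t property;
      uint32_t datalen;              /* bytes of data that follow */
      char data[256];
  };

  struct sg_result_msg {             /* result message */
      uint32_t id;
      uint32_t datalen;
      char data[256];
      uint32_t status;
  };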
  • A library, libsgmsg.a, is available for scalable
    server developers; it eases the use of the
    feedback protocol.
  • An executable, sgcmd, is available for system
    administrators; it can be used from shell
    scripts, so existing applications can make use
    of the feedback protocol too.
  • A web interface for the feedback protocol is
    also available for interactive users.

28
Load Balancing
  • Balancing Type
  • whole server
  • a specific service port
  • Balancing Policy
  • by Round Robin
  • by Connection Count
  • by Packet Traffic
  • by External Counter
  • a service program can make its own load
    definition and update it into this external
    counter
  • weighted versions of the above counters

29
Load Balancing (continued)
Link creation
  • Load balancing is done by selecting the target
    server when a link is created.
  • A link becomes active when a response packet is
    seen on it.
  • Once a link is active, its mapping won't be
    changed until the link is closed or removed.
  • If the target server of a link dies before the
    link becomes active, SG remaps the link to
    another target server.

[Diagram: a telnet request from client
140.116.72.118:1029 to the virtual address
140.116.72.114:23 creates the link
(140.116.72.118:1029, 140.116.72.114:23,
192.168.1.1:23) toward real server 192.168.1.1.]
30
Load Balancing (continued)
Load calculation
[Diagram: requests req1, req2, req3 arriving along
the time axis; a Δt is measured for each request.]
  • When a newly started server joins a group under
    heavy network load, it may be bombed to death,
    since it has very low or even no load at the
    beginning.
  • To avoid this bombing problem, SG uses the
    variation of the count within a period of time,
    not the total count, for the load calculation.
Packet Load = Δpacket
Connection Load = Δconnection × 2 + connection
31
Load Balancing (continued)
Keep Same Server
  • The target server for a newly created link is
    chosen based on the balancing policy. But
    sometimes two different links are actually
    related and should be redirected to the same
    target server.
  • Example
  • Port mapper: an RPC client asks the port mapper
    which port a specific service is bound to, and
    then sends its request to that port.
  • Squid: a squid proxy uses ICP to query its
    neighbor and parent proxies for a specific
    object, and uses HTTP to get that object from
    them on a cache hit.
  • Keep Same Server redirects packets to the same
    target server if any link from the same client
    is still available in the SG internal link
    table.

32
Load Balancing (continued)
[Diagram 1: RPC server ZZ registers with the port
mapper (1. ZZ is port 2345); the RPC client asks the
port mapper (2. Which port is server ZZ?), receives
the answer (3. server ZZ is port 2345), and sends
its request to port 2345 (4). Packets to the port
mapper and packets to RPC server ZZ are related.]
[Diagram 2: one squid asks another (1. ICP request:
do you have xx.html?), gets (2. ICP reply: yes), and
fetches the object (3. HTTP get xx.html). Packets in
ICP and packets in HTTP are related.]
33
Fault Tolerant
  • Fault detection
  • Packet snoop
  • Port test
  • Heartbeat monitor
  • Multicast write result comparison
  • Fault Recovery
  • Recovery happens on the real server, so the SG
    system just waits for the recovery to complete
  • Recovery detection
  • Packet snoop
  • Port test
  • Heartbeat monitor
  • Triggered by the server through the feedback
    protocol

34
Server status transition
[State diagram: a server moves between Alive,
Pending and Dead. Alive -> Pending when keyport
packet delta_in > P with timeout > T, or sgmon
porttest errors > E, or heartbeat timeout > H, or
mcast errors > M. Pending -> Dead when the same
counters exceed the doubled thresholds (delta_in > P
with timeout > 2T, porttest errors > 2E, heartbeat
timeout > 2H, mcast errors > 2M). Pending -> Alive
when a keyport packet is responded to, an sgmon
porttest succeeds, or a heartbeat is received.
Dead -> Pending on user recovery.]
P: packet delta threshold; T: response timeout
threshold; E: porttest error threshold; H: heartbeat
timeout threshold; M: mcast error threshold.
35
Server status transition (continued)
  • A server is in one of three states: Alive,
    Pending or Dead.
  • Various fault/recovery detection mechanisms are
    used in the SG system. The server status is
    calculated by sorting all fault/recovery events
    by timestamp; the latest event decides the
    result.
  • Candidates for load balancing selection
  • Alive: the default candidates
  • Pending: candidates if no alive server is
    available
  • Dead: candidates if all servers are dead
  • Why a Pending state?
  • A server not responding to client requests or
    monitor tests may have crashed, or may just be
    busy serving others under heavy load. We put
    such a server into the pending state first,
    instead of the dead state, expecting it to come
    back later.

36
Packet Snoop
  • Keyport list
  • The ports that are critical to the availability
    of a server
  • 0 represents the ICMP echo test (ping)
  • packet_delta_in
  • count of packets targeted at a keyport on a
    server with no response
  • Fault detection
  • Port dead: packet_delta_in > threshold P and
    timeout > threshold T
  • Host dead: any one of the keyports is dead
  • Recovery detection
  • Host alive: all keyports are alive

37
Packet Snoop (continued)
  • TCP reset handling
  • Packets with TCP_RST set won't be counted as a
    response (connection refused)
  • Advantage
  • Very low overhead
  • Reacts quickly to server status under heavy
    traffic
  • No modification to client/server programs
  • Disadvantage
  • Unable to detect failure/recovery if no packets
    are transferred
  • SG has to sacrifice some packets to do the
    detection (it is packet loss from the client's
    point of view)
  • Since packets won't be delivered to a failed
    server, packet snooping can hardly be used for
    recovery detection

38
Port Test
  • How to determine whether a port is alive or
    dead? (a sketch of the TCP test follows)

UDP test: send a UDP packet
  UDP reply -> alive
  ICMP Port Unreachable -> dead
  No response (the UDP server accepted the message,
  or the host is down) -> follow up with a TCP
  connect: connect ok -> alive; connect refused ->
  alive; connect timeout -> dead

TCP test: TCP connect
  Connect ok -> alive
  Connect refused -> dead
  Connect timeout -> dead

  • If all keyports on a server are alive, the
    server is porttest-ok; otherwise it is
    porttest-error
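A minimal sketch of the TCP branch of this test; the
5-second connect timeout is an assumption, not from
the slides:

  /* sketch: non-blocking TCP connect test with a timeout */
  #include <errno.h>
  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/select.h>
  #include <sys/socket.h>

  enum port_state { PORT_ALIVE, PORT_DEAD };

  enum port_state tcp_port_test(struct in_addr server, unsigned short port)
  {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in sa;
      struct timeval tv = { 5, 0 };          /* assumed timeout */
      fd_set wset;
      int err;
      socklen_t len = sizeof(err);

      memset(&sa, 0, sizeof(sa));
      sa.sin_family = AF_INET;
      sa.sin_port = htons(port);
      sa.sin_addr = server;

      fcntl(fd, F_SETFL, O_NONBLOCK);
      if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
          close(fd);
          return PORT_ALIVE;                 /* connect ok -> alive */
      }
      if (errno != EINPROGRESS) {
          close(fd);
          return PORT_DEAD;                  /* immediate refusal -> dead */
      }
      FD_ZERO(&wset);
      FD_SET(fd, &wset);
      if (select(fd + 1, NULL, &wset, NULL, &tv) <= 0) {
          close(fd);
          return PORT_DEAD;                  /* connect timeout -> dead */
      }
      getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
      close(fd);
      return err == 0 ? PORT_ALIVE : PORT_DEAD;  /* refused -> dead */
  }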

39
Port Test (continued)
[Diagram: the same process-interaction picture as
slide 5; SGmon issues the port tests ("Alive?")
against each real server.]
40
Port Test (continued)
  • Failure Detection
  • porttest errors > threshold E
  • Recovery Detection
  • porttest ok
  • Advantage
  • Can detect failure/recovery even with no packet
    traffic
  • Failures can be detected without the price of
    client packet loss
  • No modification to the server program
  • Disadvantage
  • Extra traffic is introduced by the ICMP/UDP/TCP
    connect tests
  • The test packets may disturb the operation of
    the server program

41
Heartbeat monitor
  • A server uses the feedback protocol to generate
    heartbeats (a sketch follows)
  • int cmd_server_heartbeat_count (group_ip,
    group_port, server_ip, server_port,
    heartbeat_count, OP_SET)
  • SGhb
  • Used to monitor the alive status of a process
    with a specific process name

[Diagram: SGhb on the server (192.168.1.2) checks
that the monitored process exists in the process
table and responds to signals, and sends heartbeats
to the SG load balancer (140.116.72.219).]
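A minimal sketch of a server emitting heartbeats
with the libsgmsg call listed above; the parameter
types, loop structure and 5-second interval are
assumptions:

  /* sketch: heartbeat loop built on cmd_server_heartbeat_count() */
  #include <unistd.h>
  #include <netinet/in.h>

  void heartbeat_loop(struct in_addr group_ip, unsigned short group_port,
                      struct in_addr server_ip, unsigned short server_port)
  {
      int count = 0;
      for (;;) {
          /* OP_SET updates this server's heartbeat count on SG */
          cmd_server_heartbeat_count(group_ip, group_port,
                                     server_ip, server_port,
                                     ++count, OP_SET);
          sleep(5);              /* assumed heartbeat interval */
      }
  }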
42
Heartbeat monitor (continued)
  • Failure Detection
  • heartbeat timeout > threshold H
  • Recovery Detection
  • any heartbeat received
  • Advantage
  • Can be used with a quiet server (one that makes
    no replies to incoming packets)
  • Can detect failure/recovery even with no packet
    traffic
  • Failures can be detected without the price of
    client packet loss
  • Disadvantage
  • SGhb must be executed to generate heartbeats, or
    the server program must be modified

43
Multicast write result comparison
  • Failure detection
  • Compare the write reply status embedded in the
    IP option (multicast write only)
  • Recovery detection
  • There is no way for SG to know whether the
    recovery process on an mcast server is complete
    or not.
  • Advantage
  • Can detect server state inconsistency
  • Disadvantage
  • Modification to the server program is needed

44
SUMMARY of Detection

Detection               Packet snoop  Port test    Heartbeat monitor  Result comparison
Fault type              Failed-stop   Failed-stop  Failed-stop        State inconsistency
Failure detection       Y             Y            Y                  Y
Recovery detection      Almost N      Y            Y                  N
Modification to source  N             N            depends            Y
Detection precision     High          Normal       Normal             High
45
Fault Recovery
  • SG doesn't do recovery
  • Recovery happens on the real server; the SG
    system just waits for it to complete
  • The recovering server should not respond to
    requests before the recovery is done, since the
    detection in SG targets failed-stop faults
  • The group should be in read-only mode when doing
    a state transfer
  • For a read-only service, the dead server can do
    the state transfer from an alive server
    directly.
  • For a readone/writeall service, the dead server
    should turn the server group read-only before
    the state transfer and turn it back to
    read-write mode when the transfer is complete.

46
Denial-of-Service attack
DoS attacks come in two flavors
  • Process saturation
  • Some TCP servers have a limit on the connections
    they can handle and will stop responding to
    clients when this limit is reached.
  • Servers using fork() to handle new connections
    consume system resources (e.g. process table
    entries)
  • Mbuf exhaustion
  • A connection's mbufs won't be released while the
    connection stays in the FIN_WAIT_1 state
  • a BSD machine has only 1536 mbufs when
    maxusers = 64
  • a Linux machine doesn't have such a limit, but
    since mbufs and mbuf clusters are non-pageable,
    an evil client can lock a lot of physical memory
    away from others

47
Protection against DOS attack
  • Per-client limits
  • Max connections
  • Max connection rate
  • Max TCP connections in the FIN_WAIT_1 state
  • Any client breaking the above limits will be
    denied new connections. The deny interval can be
    specified by the SG admin.
  • Per-server ACL
  • Allow/deny client requests based on their
    IP/subnet
  • Servers in the same group can have different
    ACLs to provide differentiated service for
    different clients.
  • E.g. reserving the best computer in a group for
    internal use in a computing cluster

48
The connection rate calculation
  • SG uses a low-overhead but quite accurate method
    to maintain the last-minute connection rate for
    each client (a sketch follows)

[Diagram: time axis with checkpoints Tearly and
Tlate, the current time Tcurrent, and Torigin =
Tcurrent - 60 sec; C is the cumulative connection
count.]
Recording - whenever a new connection comes:
  if (Tcurrent - Tlate > 60) { Tearly = Tlate;
  Tlate = Tcurrent; Cearly = Clate; Clate =
  Ccurrent; }
Calculating:
  Torigin = Tcurrent - 60
  compute Corigin by interpolating between
  (Tearly, Cearly) and (Tlate, Clate)
  Rate(min) = Ccurrent - Corigin
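A sketch of this bookkeeping in C; the struct and
function names are illustrative:

  /* sketch: per-client last-minute connection rate */
  #include <time.h>

  struct conn_rate {
      time_t t_early, t_late;   /* checkpoint times                 */
      long   c_early, c_late;   /* cumulative counts at checkpoints */
      long   c_current;         /* cumulative connection count      */
  };

  /* Recording: called whenever a new connection comes */
  void rate_record(struct conn_rate *r, time_t now)
  {
      r->c_current++;
      if (now - r->t_late > 60) {      /* roll the checkpoints forward */
          r->t_early = r->t_late;
          r->c_early = r->c_late;
          r->t_late  = now;
          r->c_late  = r->c_current;
      }
  }

  /* Calculating: connections during the last 60 seconds */
  long rate_last_minute(const struct conn_rate *r, time_t now)
  {
      time_t t_origin = now - 60;
      double c_origin;

      if (t_origin <= r->t_early)
          c_origin = r->c_early;
      else if (t_origin >= r->t_late || r->t_late == r->t_early)
          c_origin = r->c_late;
      else  /* interpolate between (Tearly,Cearly) and (Tlate,Clate) */
          c_origin = r->c_early + (double)(r->c_late - r->c_early) *
                     (double)(t_origin - r->t_early) /
                     (double)(r->t_late - r->t_early);
      return r->c_current - (long)c_origin;
  }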
49
Example a distributed nfs server
  • UDP service
  • Based on synchronized RPC
  • ReadOne/WriteAll
  • Modification
  • make the file handle from the pathname; this
    guarantees the same handle maps to the same file
    on different servers
  • After server socket creation
  • sock_joingroup(int sockfd, struct in_addr
    groupaddr, int ttl)
  • At the end of each write function
  • prepare_ipopt_mcast(MCAST_SUCCESS, return_value)
  • Before sending the reply
  • sock_set_ipopt_mcast(int sockfd)
  • After sending the reply
  • sock_clear_ipopt_mcast(int sockfd)

50
NFS filehandle mapping
[Diagram: kernel-space NFS server - the NFS client's
file handle carries (dev, inode); the server
accesses the local Unix FS directly, and the 32-byte
NFS file handle is enough to contain this
information.]
[Diagram: user-space NFS server - the server has to
access the file system through the path name, and
the 32-byte NFS file handle is not enough to contain
the whole path, so we encode the pathname....]
51
NFS filehandle mapping
[Diagram: the path /a/b/c/file is encoded into the
32-byte file handle as a sequence of (32-bit number,
8-bit number) pairs, one pair per path component,
zero-padded at the end.]
Using the file name, not the inode, to generate the
file handle means the same file has the same handle
on every server. An encoding sketch follows.
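A sketch of this encoding; the slides show only the
(32-bit, 8-bit) pair layout, so the hash function
and the use of the 8-bit field for the component
length are assumptions:

  /* sketch: encode a pathname into a 32-byte NFS file handle */
  #include <stdint.h>
  #include <string.h>

  #define FH_SIZE 32

  static uint32_t hash32(const char *s, size_t n)
  {
      uint32_t h = 5381;                 /* djb2, an arbitrary choice */
      while (n--)
          h = h * 33 + (unsigned char)*s++;
      return h;
  }

  int encode_handle(const char *path, unsigned char fh[FH_SIZE])
  {
      size_t off = 0;
      const char *p = path, *q;

      memset(fh, 0, FH_SIZE);           /* trailing bytes stay 0...0 */
      while (*p == '/')
          p++;
      while (*p) {
          q = strchr(p, '/');
          size_t len = q ? (size_t)(q - p) : strlen(p);
          if (off + 5 > FH_SIZE)
              return -1;                /* path too deep for 32 bytes */
          uint32_t h = hash32(p, len);
          memcpy(fh + off, &h, 4);      /* 32-bit number per component */
          fh[off + 4] = (unsigned char)len;  /* assumed: component length */
          off += 5;
          p = q ? q + 1 : p + len;
          while (*p == '/')
              p++;
      }
      return 0;
  }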
52
NFS filehandle mapping
Rebuild Pathname
Given a file handle: first, get all children of the
/ directory (a1, a2, a3, ...) and find that encoded
a2 equals the second element of the file handle.
Next, get all children of a2 (b1, b2, ...) and find
that encoded b2 equals the third element. Repeat
this comparison until all elements of the file
handle are matched; then we have the path name.
[Diagram: directory tree with /'s children a1, a2,
a3; a2's children b1, b2; b2's children c1, c2; the
matched path a2/b2/c1 is highlighted.]
53
Performance Test
[Diagram: NFS write test from client 140.116.72.128
on a 100 Mb/s LAN through the SG load balancer
(virtual address 140.116.72.114, multicast group
234.116.72.114); measured rates: 581.44 K/s and
559.77 K/s inside the server pool, 421.13 K/s
direct, 373.40 K/s through SG.]
NFS write efficiency: 373.40 / 421.13 = 88.66%
54
Performance Test
[Diagram: ping echo test on the 100 Mb/s LAN
140.116.72.128; RTTs of 0.293 ms and 0.489 ms on the
two legs (client 140.116.72.115 to the SG load
balancer, SG to 192.168.1.1), 0.891 ms end to end.]
Ping echo efficiency: (0.293 + 0.489) / 0.891 = 87%
55
Performance Test
[Diagram: FTP download test on the 100 Mb/s LAN
140.116.72.128; 4.33 MB/s and 2.67 MB/s on the two
legs (client 140.116.72.115, SG load balancer,
192.168.1.1), 2.24 MB/s end to end.]
FTP download efficiency: 2.24 / 2.67 = 83.89%
56
[Diagram: the SG load balancer sits between the
Internet and the private subnet. Public interface:
public IP 140.116.72.211, group IPs 140.116.72.114
and 140.116.72.115. Private interface: private IP
192.168.1.253, in front of the server pool
192.168.1.1-192.168.1.4.]
57
Proxy Server Cluster
[Diagram: requests to 140.116.72.114 are distributed
by the SG load balancer across proxy servers
192.168.1.1-192.168.1.3, each with its own cache.]
  • Configuration
  • Each proxy server has its own disk for the cache
    pool
  • Each proxy server sets the others as its
    siblings.
  • Data Consistency
  • Each proxy server uses the ICP protocol to query
    for objects in the others' cache pools and
    fetches an object from them if needed

58
Web Server Cluster
[Diagram: requests to 140.116.72.114 are distributed
across web servers 192.168.1.1-192.168.1.3, each
with its own disk; a common DB server and NFS server
sit on the backend.]
  • Configuration
  • Each web server has its own disk to store static
    data (web pages, images)
  • A common DB server and NFS server on the backend
    store dynamic data (customer input, sessions)
  • Data Consistency
  • Multiple copies of the static data are
    maintained by the administrator
  • There is only one copy of the dynamic data on
    the central DB/NFS server, so no maintenance is
    required
59
Telnet Server Cluster
[Diagram: requests to 140.116.72.114 are distributed
across telnet servers 192.168.1.1-192.168.1.3; an
NIS server (accounts) and an NFS server sit on the
backend.]
  • Configuration
  • An NIS server is used to store user accounts
  • An NFS server is used to store the user home
    directories and the mail spool (/var/mail)
  • Data Consistency
  • There is only one copy of the user data/mail, so
    no maintenance is required

60
Mail Server Cluster
[Diagram: requests to 140.116.72.114 are distributed
across mail servers 192.168.1.1-192.168.1.3, each
running sendmail; an NIS server and an NFS server
sit on the backend.]
  • Configuration
  • An NIS server is used to store user accounts
  • An NFS server is used to store the user home
    directories and the mail spool (/var/mail)
  • The sendmail daemon on each server must accept
    mail addressed to the virtual mail server
  • The sendmail daemon on each server masquerades
    each outgoing mail as if it were sent from the
    virtual mail server

61
Mail Server Cluster
  • Data Consistency
  • There is only one copy of the user data/mail, so
    no maintenance is required
  • Sendmail Setup
  • Accept mail targeted at the virtual mail server
  • Search sendmail.cf for a line like Fw-o
    /etc/mail/sendmail.cw
  • Add the hostname of the virtual mail server to
    sendmail.cw
  • Masquerade outgoing mail as if it were sent from
    the virtual mail server
  • Search sendmail.cf for the line containing only
    DM
  • Change the line to DMyou.virtual.mail.server