1
Mirror File System: A Multiple Server File System
John Wong, CTO
John.Wong@TwinPeakSoft.com
Twin Peaks Software Inc.
2
Multiple Server File System
  • Conventional file systems (UFS, EXT3, NFS)
  • Manage and store files on a single server and its storage devices
  • Multiple server file system (MFS)
  • Manages and stores files on multiple servers and their storage devices

3
Problems
  • A single resource is vulnerable
  • Redundancy provides a safety net
  • Disk level -> RAID
  • Storage level -> storage replication
  • TCP/IP level -> SNDR
  • File system level -> CFS, MFS
  • System level -> clustering systems
  • Application level -> databases

4
Why MFS?
  • Many advantages over existing technologies

5
Unix/Linux File System
[Diagram: applications in user space call down to UFS/EXT3 in kernel space, which drives the disk to reach the data]
6
Network File System
[Diagram: client applications go through an NFS client mount and across the network to NFSD on the server, which uses UFS/EXT3 to reach the data]
7
UFS + NFS
[Diagram: applications reach local data through UFS/EXT3 and remote data through an NFS client mount to NFSD and UFS/EXT3 on the remote server]
8
UFS + NFS
  • UFS manages data on the local server's storage devices
  • NFS manages data on a remote server's storage devices
  • Combining the two file systems manages data on both local and remote servers' storage devices

9
MFS = UFS + NFS
[Diagram: on the active MFS server, applications call MFS, which stacks on UFS/EXT3 for the local copy and on NFS for the remote copy; the passive MFS server stores that remote copy on its own UFS/EXT3]
10
Building Block Approach
  • MFS is a kernel loadable module
  • MFS is loaded on top of UFS and NFS
  • Standard VFS interface
  • No change to UFS and NFS (see the sketch below)

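Since the deck shows no code, here is a minimal user-space model of the building-block idea, assuming nothing beyond standard POSIX: an MFS file is just a pair of handles onto the two underlying file systems, and MFS adds no on-disk format of its own. The mfs_file and mfs_open names and the path layout are hypothetical illustrations, not the actual MFS source.

#include <fcntl.h>

/* Hypothetical model: one MFS file = a local copy plus a remote copy. */
typedef struct {
    int local_fd;   /* local copy, e.g. on UFS/EXT3       */
    int remote_fd;  /* remote copy, e.g. via an NFS mount */
} mfs_file;

/* Open both underlying copies; the stacked module stores nothing itself. */
static int mfs_open(mfs_file *f, const char *local_path, const char *remote_path) {
    f->local_fd  = open(local_path,  O_RDWR);
    f->remote_fd = open(remote_path, O_RDWR);
    return (f->local_fd < 0 || f->remote_fd < 0) ? -1 : 0;
}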
11
File System Framework
[Diagram: file operation system calls (open, creat, read, write, lseek, close, mkdir, rmdir, link, mount, umount, statfs, sync, ioctl) enter the kernel through the VFS and vnode interfaces, which dispatch to the underlying file systems (UFS, NFS, PCFS, HSFS, VxFS, QFS), each reaching its data on disk, optical drive, or the network]
Source: Solaris Internals: Core Kernel Architecture, Jim Mauro and Richard McDougall, Prentice Hall
12
MFS Framework
[Diagram: the same framework with MFS inserted at the vnode/VFS layer; system calls reach MFS first, and MFS forwards them through the vnode/VFS interface to the underlying file systems (UFS, NFS, PCFS, HSFS, VxFS, QFS)]
13
Transparency
  • Transparent to users and applications
  • - No recompilation or relinking needed
  • Transparent to existing file structures
  • - Same pathname access
  • Transparent to underlying file systems
  • - UFS, NFS

14
Mount Mechanism
  • Conventional Mount
  • - One directory, one file system
  • MFS Mount
  • - One directory, two or more file systems

15
Mount Mechanism
  • mount -F mfs host:/ndir1/ndir2 /udir1/udir2
  • First mount the NFS file system on a UFS directory
  • Then mount MFS on top of UFS and NFS (see the example below)
  • The existing UFS tree /udir1/udir2 becomes the local copy of MFS
  • The newly mounted host:/ndir1/ndir2 becomes the remote copy of MFS
  • Same mount options as NFS, except no -o hard option

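As a worked example, one plausible reading of the two-step description is the sequence below; host, ndir1/ndir2, and udir1/udir2 are the deck's placeholder names, and the NFS step's exact form is an assumption, since the deck only says NFS is mounted first:

    mount -F nfs host:/ndir1/ndir2 /udir1/udir2
    mount -F mfs host:/ndir1/ndir2 /udir1/udir2

After the second mount, one pathname under /udir1/udir2 reaches both the local and the remote copy through MFS.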
16
MFS mfsck Command
  • /usr/lib/fs/mfs/mfsck mfs_dir
  • After an MFS mount succeeds, the local copy may not be identical to the remote copy
  • Use mfsck (the MFS fsck) to synchronize them (see the example below)
  • mfs_dir can be any directory under the MFS mount point
  • Multiple mfsck commands can be invoked at the same time

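Continuing the mount above, either invocation below is valid, and the two could run in parallel because multiple mfsck instances are allowed; the subdirectory name is a placeholder:

    /usr/lib/fs/mfs/mfsck /udir1/udir2
    /usr/lib/fs/mfs/mfsck /udir1/udir2/subdir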
17
READ/WRITE Vnode Operation
  • All VFS/vnode operations are received by MFS
  • READ-related operations (read, getattr, ...) go only to the local copy (UFS)
  • WRITE-related operations (write, setattr, ...) go to both the local copy (UFS) and the remote copy (NFS) simultaneously, using threads (see the sketch below)

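The deck states this policy but shows no code, so here is a small user-space sketch of the read/write split, reusing the hypothetical mfs_file handle from the earlier sketch (redeclared so this stands alone); a real kernel module would do this at the vnode layer rather than with pread/pwrite. Compile with -lpthread.

#include <pthread.h>
#include <sys/types.h>
#include <unistd.h>

typedef struct {
    int local_fd;   /* local copy  (UFS) */
    int remote_fd;  /* remote copy (NFS) */
} mfs_file;

struct write_arg { int fd; const void *buf; size_t len; off_t off; ssize_t ret; };

static void *do_pwrite(void *p) {
    struct write_arg *a = p;
    a->ret = pwrite(a->fd, a->buf, a->len, a->off);
    return NULL;
}

/* READ-related calls are served from the local copy only. */
ssize_t mfs_pread(mfs_file *f, void *buf, size_t len, off_t off) {
    return pread(f->local_fd, buf, len, off);
}

/* WRITE-related calls go to both copies at the same time, one thread each. */
ssize_t mfs_pwrite(mfs_file *f, const void *buf, size_t len, off_t off) {
    struct write_arg l = { f->local_fd,  buf, len, off, -1 };
    struct write_arg r = { f->remote_fd, buf, len, off, -1 };
    pthread_t t;

    pthread_create(&t, NULL, do_pwrite, &r);  /* remote copy in parallel   */
    do_pwrite(&l);                            /* local copy on this thread */
    pthread_join(t, NULL);
    return (l.ret < 0 || r.ret < 0) ? -1 : l.ret;
}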
18
Mirroring Granularity
  • Directory level
  • Mirror any UFS directory instead of the entire UFS file system
  • Directory A mirrored to server A, directory B mirrored to server B
  • Block-level update
  • Only the changed blocks are mirrored (see the sketch below)

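To make "only the changed block is mirrored" concrete, the sketch below computes which blocks a write touches; the 8 KB block size is an arbitrary assumption for illustration, not a figure from the deck.

#include <stdio.h>

#define BLOCK_SIZE 8192L  /* assumed block size, for illustration only */

/* Inclusive range of blocks touched by a write of len bytes at off. */
static void touched_blocks(long off, long len, long *first, long *last) {
    *first = off / BLOCK_SIZE;
    *last  = (off + len - 1) / BLOCK_SIZE;
}

int main(void) {
    long first, last;
    touched_blocks(12000, 6000, &first, &last);
    /* A 6000-byte write at offset 12000 spans blocks 1..2, so only
     * those two blocks, not the whole file, are mirrored. */
    printf("mirror blocks %ld..%ld\n", first, last);
    return 0;
}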
19
MFS msync Command
  • /usr/lib/fs/mfs/msync mfs_root_dir
  • A daemon that resynchronizes the MFS pair after the remote MFS partner fails
  • Upon a write failure, MFS
  • - logs the name of the file whose write failed
  • - starts a heartbeat thread to detect when the remote MFS server is back online
  • Once the remote server is back online, msync uses the log to copy the missing files to it (see the sketch below)

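The deck describes msync's recovery loop only in prose; the user-space sketch below shows its shape. The log location, remote_is_up, and copy_file_to_remote are hypothetical stand-ins, not the real MFS interfaces.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical probes: a real daemon would ping the remote NFS server
 * and re-copy files across the remote mount. */
static int  remote_is_up(void)                 { return 1; }
static void copy_file_to_remote(const char *p) { (void)p;  }

/* Once the heartbeat sees the partner again, replay the failure log. */
static void msync_replay(const char *logpath) {
    char path[4096];
    FILE *log;

    while (!remote_is_up())              /* heartbeat thread's job */
        sleep(5);

    if ((log = fopen(logpath, "r")) == NULL)
        return;
    while (fgets(path, sizeof path, log)) {
        path[strcspn(path, "\n")] = '\0';   /* one failed file per line */
        copy_file_to_remote(path);          /* bring remote up to date  */
    }
    fclose(log);
}

int main(void) {
    msync_replay("/var/tmp/mfs_write.log");  /* placeholder log location */
    return 0;
}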
20
Active/Active Configuration
[Diagram: two active MFS servers; on each, applications call MFS stacked over UFS and NFS, and each server's data set (Data A, Data B) is mirrored to the other]
21
MFS Locking Mechanism
MFS uses the UFS and NFS file record locks. Locking is required for the active-active configuration: it makes write-related vnode operations atomic. Locking is enabled by default; it is not necessary in an active-passive configuration. A plain record-lock example follows.
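Because the record locks are the standard UFS/NFS ones, ordinary POSIX fcntl locking illustrates the mechanism; the file path and byte range below are placeholders.

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    int fd = open("/udir1/udir2/shared.dat", O_RDWR);  /* placeholder path */
    struct flock lk = { 0 };

    if (fd < 0)
        return 1;
    lk.l_type   = F_WRLCK;    /* exclusive record lock */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;          /* lock bytes 0..4095    */
    lk.l_len    = 4096;

    if (fcntl(fd, F_SETLKW, &lk) == 0) {   /* block until granted */
        /* ...the mirrored write happens here, atomic with respect
         * to the other active MFS server... */
        lk.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &lk);           /* release the lock    */
    }
    close(fd);
    return 0;
}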
22
Real-Time and Scheduled
  • Real-time
  • -- Replicates files in real time
  • Scheduled
  • -- Logs file path, offset, and size
  • -- Replicates only the changed portion of a file (see the sketch below)

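A scheduled pass only needs the logged (path, offset, size) triples. The sketch below replays one record; the record layout, and the idea of addressing the remote copy as an ordinary path (it would sit behind an NFS/CIFS mount), are assumptions for illustration.

#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical change-log record: which bytes of which file changed. */
typedef struct {
    char   path[1024];
    off_t  offset;
    size_t size;
} change_rec;

/* Copy only the changed region from the local copy to the remote copy. */
static int replicate_region(const change_rec *c, const char *remote_path) {
    char *buf = malloc(c->size);
    int   src = open(c->path, O_RDONLY);
    int   dst = open(remote_path, O_WRONLY);
    int   ok  = buf && src >= 0 && dst >= 0 &&
                pread(src, buf, c->size, c->offset)  == (ssize_t)c->size &&
                pwrite(dst, buf, c->size, c->offset) == (ssize_t)c->size;

    if (src >= 0) close(src);
    if (dst >= 0) close(dst);
    free(buf);
    return ok ? 0 : -1;
}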
23
Applications
  • Online file backup
  • Server file backup, active -> passive
  • Server/NAS clustering, active <-> active

24
MFS = NTFS + CIFS
[Diagram: on a Windows desktop/laptop, applications call MFS, which stacks on NTFS for the local copy and on CIFS for the remote copy; the remote server stores the mirrored copy on NTFS]
25
Online File Backup: Real-Time or Scheduled
[Diagram: folders on a user desktop/laptop are mirrored by MFS over a LAN or WAN to folders on an ISP server]
26
Server Replication
[Diagram: a primary and a secondary server, linked by a heartbeat, each run the Mirror File System; application and email data are mirrored along the paths /home and /var/spool/mail]
27
Enterprise Clusters
[Diagram: a cluster of application servers, each running the Mirror File System and mirroring to the others]
28
Advantages
  • Building-block approach
  • -- Builds on existing UFS, EXT3, NFS, and CIFS infrastructures
  • No metadata is replicated
  • -- Superblocks, cylinder groups, and file allocation maps are not replicated
  • Every file write operation is checked by the file system
  • -- File consistency and integrity
  • Live-file replication, not raw-data replication
  • -- The primary and backup copies are both live files

29
Advantages
  • Interoperability
  • -- The two nodes can be different systems
  • -- The storage systems can be different
  • Small granularity
  • -- Directory level, not the entire file system
  • One-to-many or many-to-one replication

30
Advantages
  • Fast replication
  • -- Replication runs in the kernel file system module
  • Immediate failover
  • -- No fsck or mount operation needed
  • Geographically dispersed clustering
  • -- The two nodes can be separated by hundreds of miles
  • Easy to deploy and manage
  • -- Only one copy of MFS, running on the primary server, is needed for replication

31
Why MFS?
  • Better Data Protection
  • Better Disaster Recovery
  • Better RAS
  • Better Scalability
  • Better Performance
  • Better Resource Utilization

32
Q & A
[Diagram: two MFS servers, each running applications, mirroring Data A and Data B to each other]