SERVER

PRIMERGY CLUSTER WITH MSCS

USER GUIDE


Is there any technical problem or other question you need clarified?

Please contact:
• one of our service partners
• your sales partner
• your sales outlet

The addresses of your service partners are contained in the guarantee booklet or in the service address booklet.

The latest information on our products, tips, updates, etc., can be found on the Internet at: http://www.fujitsu-siemens.com


This manual has been printed on recycled paper.

Published by Fujitsu Siemens Computers GmbH

Order No.: S26361-F1790-Z101-1-7419
Printed in the Federal Republic of Germany
AG 0500 05/00

PRIMERGY CLUSTER WITH MSCS

CONVERSION GUIDE


PRIMERGY Cluster with MSCS

User Guide

German

English

May 2000 edition


Microsoft, MS, MS-DOS and Windows are registered trademarks of Microsoft Corporation.

Intel and Pentium are registered trademarks and OverDrive is a trademark of Intel Corporation, USA.

All other trademarks referenced are trademarks or registered trademarks of their respective owners, whose protected rights are acknowledged.

All rights, including rights of translation, reproduction by printing, copying or similar methods, even of parts, are reserved.

Offenders will be liable for damages.

All rights, including rights created by patent grant or registration of a utility model or design, are reserved.

Delivery subject to availability. Right of technical modification reserved.

Copyright © Fujitsu Siemens Computers 2000



Contents

Introduction
  Notational conventions
  Important notes

PRIMERGY MSCS hardware
  General requirements
  Cluster server
  Shared SCSI
  Fiber channel
  Networks
  PRIMERGY MSCS configurations
    PRIMERGY 560/760 with PXRC RAID subsystem
    PRIMERGY 560/760 with PRIMERGY 502DS/702DS RAID subsystem
    PRIMERGY 460 with PRIMERGY 502DS/702DS RAID subsystem
    PRIMERGY 460/560/760 with RAID subsystem PRIMERGY 502DF/702DF

Operating system installation
  Requirements
  Example configuration
  ServerStart
  Configuration of local system drives
  Server Configuration Utility
  Installation instructions
  Setting up additional partitions

Shared SCSI configuration
  Preparations on Cluster Server 1
    CAN bus settings
    Configuration of Symbios Logic 22802 SCSI controller
    Configuration of Mylex DAC960SX RAID controller
    CD-ROM drive
    RemoteView hard disk
  Instructions on setting up shared hard disks on Cluster Server 1
    RAIDFX
    Global Array Manager (GAM)
  Preparations on Cluster Server 2
  Instructions on setting up shared hard disks on Cluster Server 2

MSCS software
  Installation conditions
  Cluster Administrator
  Standard resources

MSCS error situations

ServerView integration
  Installation of ServerView agents
    Overview
    Setup type
    Selecting agents
    Additional information
  MSCS in ServerView Manager
    Inserting Cluster Server in computer list
    Cluster View
    Traps
    Alarm Manager

MSCS applications
  Requirements for application
  Failover of applications
  Virtual server
  Example applications
    Oracle Failsafe
    Microsoft SQL Server Enterprise Edition
    SAP R/3
    Microsoft Exchange

Index


Introduction

This manual describes the installation and characteristics of the Microsoft Cluster Server (MSCS) on PRIMERGY. It explains all MSCS-specific aspects which go beyond the description of the hardware and the operating system. In addition, it summarizes the results of release tests for various cluster configurations.

The structure of the manual is modeled on an MSCS installation. The installation is described step by step. In addition, the manual contains references to other sources of information.

The chapter "PRIMERGY MSCS hardware" describes the basic hardware requirements of MSCS. In particular, it also covers the certified PRIMERGY cluster configurations.

The chapter "Operating system installation" explains the MSCS-specific aspects of the operating system installation.

The setting up of the shared hard disks of the cluster servers is described in the chapter "Shared SCSI configuration".

The chapter "MSCS software" provides an overview of the cluster-specific software.

Typical fault situations are discussed in the chapter "MSCS error situations".

The chapter "ServerView integration" explains the connection of MSCS to the PRIMERGY server management and covers the installation of the required SNMP agents.

The most important aspects of the software installation in the cluster are summarized in the chapter "MSCS applications". Finally, several MSCS applications are considered in greater detail.


Notational conventions

The meanings of the symbols and fonts used in this manual are as follows:

! Pay particular attention to texts marked with this symbol. Failure to observe this warning endangers your life, destroys the system, or may lead to loss of data.

i This symbol is followed by supplementary information, remarks and tips.

Ê Texts which follow this symbol describe activities that must be performed in the order shown.

Ë This symbol means that you must enter a blank space at this point.

Ú This symbol means that you must press the Enter key.

Texts in this typeface are screen outputs from the PC.

Texts in this bold typeface are the entries you make via the keyboard.

Texts in italics indicate commands or menu items.

"Quotation marks" indicate names of chapters and terms that are being emphasized.

Important notes

! Observe the safety precautions in the chapter "Important notes" in the system operating manual.


PRIMERGY MSCS hardware

A cluster presents special requirements for the servers and components used. High availability and the related redundancy of certain components play a major role. In addition, a cluster requires technologies which were not necessary in single-server mode up until now, for example shared-SCSI.

In the following, the hardware requirements of MSCS and the resulting PRIMERGY cluster configurations are presented.

General requirements

The following illustration shows the schematic structure of an MSCS configuration: two cluster servers are connected to shared hard disks via shared-SCSI or fiber channel. The two cluster servers and the client PCs are linked by two separate networks.

[Figure: Schematic structure of an MSCS configuration. Cluster Server 1 and Cluster Server 2 are connected to the disk subsystem via shared-SCSI or fiber channel, to each other via the interconnect, and to the client PCs via the LAN (client network).]

Cluster servers, shared-SCSI, fiber channel and networks play a major role with MSCS and are therefore described in detail here.

Additional information on the hardware requirements of MSCS is contained in the MSCS Administrator's Guide and on the Microsoft Internet pages in the category MSCS Frequently Asked Questions.


Cluster server

Each server in the cluster is designated as a node. With MSCS 1.0 the cluster consists of two nodes. Each node in the cluster has its own operating system, which is located on an internal hard disk. Each cluster server should be equipped with its own RAID controller for the internal hard disks. In this way a server failure due to a hard disk error can be prevented.

MSCS places no special requirements on the system BIOS on the cluster server.

The use of RemoteView is also independent of MSCS.

A server started with RemoteView does not act as a cluster server, but can still access the shared hard disks. As a result, there is a danger of cluster data being falsified or deleted.

! Before you work with RemoteView, interrupt the SCSI connection of the cluster server to the shared hard disks by pulling the SCSI plug on the disk subsystem. In this way accidental manipulation of the cluster data is prevented.

The CAN bus for the transfer of server management information must be correctly configured. The two cluster servers and all disk subsystems are connected to the same CAN bus and must therefore have different device IDs. On the PRIMERGY server, you set the device ID via the system BIOS. On the PRIMERGY 502/702 you set the device ID with the rotary switch on the controller.

For additional information on the CAN bus, see the manual "CAN-Bus/CAN-MMF".

Shared SCSI

Both cluster servers share a hard disk area for shared cluster data. In the PRIMERGY MSCS configurations, these hard disks can be connected to the cluster servers via shared-SCSI (Small Computer System Interface).

i In its original form SCSI is a parallel I/O bus for the connection of mass storage drives and other peripheral devices. In this manual SCSI or shared-SCSI always means this classical form of SCSI. In addition, SCSI can also be used as a command protocol on the serial fiber channel bus (see the section "Fiber channel").

Shared-SCSI differs from SCSI in that several host controllers are connected to one shared-SCSI bus. One controller in each cluster server and one controller in the hard disk subsystem are connected to the PRIMERGY MSCS shared-SCSI bus. Although both servers are continuously connected to the shared hard disks, MSCS Version 1.0 does not allow simultaneous access by both cluster servers (shared nothing principle).

! Always ensure the correct termination of the shared-SCSI bus and the assignment of different SCSI IDs for the SCSI devices. Switch off the BIOS of the shared-SCSI controllers in the cluster servers to exclude "Device Not Ready" errors during the SCSI scan.


MSCS places no special requirements on the type of SCSI signal transfer. Due to the low maximum bus length of an SCSI single-ended (SE) bus, shared-SCSI differential ended (DE) is used for the PRIMERGY MSCS configurations. The SCSI wiring of the PRIMERGY cluster is described in the following sections.

The internal hard disks in the cluster servers are each connected via a separate SCSI bus.

Fiber channel

The shared hard disks can also be connected to the cluster servers via a serial fiber channel bus instead of via the parallel shared-SCSI bus.

Fiber channel is divided into different layers like a network. The interface level defines the physical transfer medium. Next comes the protocol level with the Fiber Channel Protocol (FCP). The protocol level is followed by the command level. Communication with the I/O devices is carried out via serial SCSI.

Compared to the classic parallel SCSI bus, fiber channel offers a higher data throughput, bridges greater distances, enables the connection of more drives and increases the error tolerance. However, the cost of fiber channel is currently higher than that of SCSI.

Just as with shared-SCSI, several host bus adapters are located on a shared bus in MSCS configurations with fiber channel. In the PRIMERGY MSCS configurations one fiber channel controller is used per cluster server and one fiber bridge RAID controller in the subsystem. A differentiation is made between the direct wiring of the cluster servers and subsystem (point-to-point) and the connection via a fiber channel hub.

Depending on the requirement, copper cables or fiber-optic cables are used. Fiber-optic cables enable the bridging of several kilometers, whereas the length of copper cables for fiber channel is limited to 30 meters. Media Interface Adapters (MIA) allow the transition between these two transfer media. The cables are connected to the hub and to the fiber bridge RAID controller with Gigabit Interface Converters (GBIC) for copper or fiber-optic cable.

The internal hard disks in the cluster servers are likewise connected via an SCSI bus in the MSCS configurations with fiber channel.

The applications for fiber channel are not limited to the I/O area. Fiber channel also supports network technologies such as Internet and ATM on the command level and in particular can also be used as a cluster interconnect (see the section "Networks").


Networks

In an MSCS configuration the two cluster servers are connected to each other and to the client PCs with two different networks.

Communication between the clients and the servers takes place via the client (or public) network. In addition, the two cluster servers communicate via another dedicated (or private) network connection. This connection is also called the interconnect and is used for mutual server monitoring (heartbeat) and the exchange of cluster management information. MSCS requires this interconnect in addition to the client network for reasons of high availability. This means that at least two network cards must be installed in each cluster server. However, the cluster can also be run with only one network: following a failure of an interconnect network card in a cluster server, for example, the server monitoring is automatically handled via the client network card.

MSCS places no special requirements on the client and interconnect networks. Ethernet is used in the PRIMERGY MSCS configurations described here. The interconnect can be established very easily via a crossover Ethernet cable.

Ethernet only represents one possible realization of a cluster interconnect.

Other interconnect technologies are also possible. Future MSCS versions will permit simultaneous access by the cluster servers to the shared hard disks. As the synchronization of the parallel disk accesses generates a high level of data traffic via the interconnect, a high-speed connection such as ServerNet or fiber channel will then be used as the interconnect. Today ServerNet is already released by Fujitsu Siemens as an interconnect connection for MSCS.


PRIMERGY MSCS configurations

Fujitsu Siemens has conducted the Microsoft Hardware Compatibility Tests for various PRIMERGY MSCS configurations. These configurations were certified by Microsoft and published in the Cluster Hardware Compatibility List (HCL). The cluster HCL is available on the Microsoft Internet pages.

From the scalable line of PRIMERGY server models, disk subsystems and other hardware components, Fujitsu Siemens is gradually certifying various configurations. Please see the Microsoft HCL for the complete list of certified components and MSCS configurations. The following sections present several examples of PRIMERGY MSCS configurations certified by Microsoft:

PRIMERGY MSCS configurations with SCSI:

• PRIMERGY 560/760 with PXRC RAID subsystem

• PRIMERGY 560/760 with PRIMERGY 502/702 RAID subsystem

• PRIMERGY 460 with PRIMERGY 502/702 RAID subsystem

PRIMERGY MSCS configurations with fiber channel:

• PRIMERGY 460/560/760 with RAID subsystem PRIMERGY 502DF/702DF

The configurations differ in equipment level: the number of CPUs, the main memory and the capacity of the external storage. Here the PRIMERGY 560/760, as a 4-way SMP server, can be scaled further than the dual-CPU server PRIMERGY 460. Likewise, the PXRC disk subsystem makes it possible to configure a higher storage capacity than the PRIMERGY subsystem 502/702 and offers the option of two RAID controllers in the PXRC. In addition to higher performance, this option also provides controller redundancy: if one of the two boards fails, the subsystem operates with the second board without interruption. Compared to the SCSI configurations, the PRIMERGY MSCS configuration with fiber channel offers a higher data throughput and a greater upgrade capability.


PRIMERGY 560/760 with PXRC RAID subsystem

The following illustration shows the schematic structure of the MSCS configuration PRIMERGY 560/760 with RAID subsystem PXRC:

[Figure: PRIMERGY 560/760 with PXRC RAID subsystem. Each PRIMERGY 560/760 contains RemoteView, mirrored internal hard disks on a Mylex DAC controller, an Adaptec 2944UW shared-SCSI controller and two NICs for the LAN (client network) and the LAN (interconnect); both servers are connected to the PXRC via shared-SCSI.]

The MSCS configuration PRIMERGY 560/760 with RAID subsystem PXRC was certified by Microsoft and released by Fujitsu Siemens for multiprocessor operation with 2-4 processors in each of the cluster servers.

This configuration was certified with network cards of the type Intel Pro 100B for the client network and interconnect. However, other network cards released for PRIMERGY can also be used.

The cluster servers are connected to the shared-SCSI bus via 1-channel DE SCSI controllers of the type Adaptec 2944UW.

Make sure that jumpers J2 and J4 are mounted on the SCSI controllers so that the shared-SCSI bus is also terminated when the server is switched off.

Start the SCSI setup on both cluster servers with the key combination Ctrl+A during the system start-up. Carry out the following configuration steps in the SCSI setup:

• Deactivate the BIOS of the SCSI controller

• Set the parameter Reset SCSI Bus at IC Initialization to the setting Disabled

• Assign different SCSI IDs for the SCSI controllers on the shared-SCSI bus


For additional information on the SCSI setup, see the manuals for the SCSI controller.

i The SCSI bus must always be correctly terminated. If you wish to separate a cluster server from the SCSI bus, then disconnect the SCSI cable at the PXRC, not at the server. The PXRC SCSI controller will then handle the termination automatically.

For additional information on the connection and operation of the PXRC, see the operating manual "Memory Extension PXRC on PRIMERGY 560/760 & MS Windows NT".

To connect two PXRC to the cluster, two Adaptec 2944UW SCSI controllers must be operated in each cluster server. The six PCI slots in the PRIMERGY 560/760 can then be assigned as follows:

PCI slot Device

1 (Slot 4) SCSI controller for internal hard disks (e.g., Mylex DAC960PD)

2 (Slot 5) Adaptec 2944UW (Shared SCSI bus 1)

3 (Slot 6) Adaptec 2944UW (Shared SCSI bus 2)

4 (Slot 7) SCSI controller for CD-ROM, tape drive, etc. (e.g., Adaptec 2940UW)

5 (Slot 8) Network interface card for client network (e.g., Intel Pro100B)

6 (Slot 9) Network interface card for interconnect (e.g., Intel Pro100B)

i MSCS allows only one partition in the shared hard disk area. All RAID disks are therefore available to the cluster as a single physical hard disk. Therefore, configure only one partition within a RAID system in the PXRC.


PRIMERGY 560/760 with PRIMERGY 502DS/702DS RAID subsystem

The following illustration shows the schematic design of the MSCS configuration PRIMERGY 560/760 with RAID subsystem PRIMERGY 502DS/702DS:

[Figure: PRIMERGY 560/760 with PRIMERGY 502DS/702DS RAID subsystem. Each PRIMERGY 560/760 contains RemoteView, mirrored internal hard disks on a Mylex DAC controller, a Symbios shared-SCSI controller, a CAN connection and two NICs for the LAN (client network) and the LAN (interconnect); both servers are connected to the PRIMERGY 702DS via shared-SCSI, and servers and subsystem are linked by the CAN bus.]

With this configuration the subsystem PRIMERGY 502DS/702DS is used for the shared hard disks.

The cluster servers are connected to the shared-SCSI bus via 2-channel DE SCSI controllers of the type Symbios Logic SL22802. The SCSI bus is actively terminated here via the signal wire TERM-Power. As a result, no jumpers are required for the SCSI termination.

The hard disks in the subsystem are connected to the cluster servers via the 2-channel RAID controller Mylex DAC960SX.

The two SCSI channels are used for connecting one server each. They represent two physically separate SCSI buses, from which the firmware of the RAID controller forms one logical shared-SCSI bus. In this case the second channel of the SCSI controller in the servers is not used.

i Note that both physical SCSI buses of the RAID controller must be terminated with a terminator. Each bus is brought out twice on the rear housing panel of the subsystem, and one connector of each bus must be fitted with a terminator.

For a firmware flash of the controller in the PRIMERGY 502DS/702DS the subsystem must be connected to the SCSI bus.


If one of the two physical SCSI buses fails, for example due to a cable defect or a controller failure, the second bus still remains correctly terminated.

As the SCSI controllers in the cluster servers provide two channels, a second subsystem can also be integrated in the cluster.

The requirements of MSCS for the cluster servers and the network cards are the same as with the configuration PRIMERGY 560/760 with RAID subsystem PXRC.

The PCI slots in the PRIMERGY 560/760 can be assigned as follows:

PCI slot Device

1 (Slot 4) SCSI controller for internal hard disks (e.g., Mylex DAC960PD)

2 (Slot 5) SL22802 (shared SCSI bus 1 and 2)

3 (Slot 6) SCSI controller for CD-ROM, tape drive, etc. (e.g., Adaptec 2940UW)

4 (Slot 7) (not assigned)

5 (Slot 8) Network interface card for client network (e.g., Intel Pro100B)

6 (Slot 9) Network interface card for interconnect (e.g., Intel Pro100B)

To integrate a PRIMERGY 502DS/702DS into RemoteView or ServerView, the device must be connected to the CAN bus. For detailed explanations of the CAN bus, see the operating manuals "CAN-Bus/CAN-MMF" and "PRIMERGY 502/702".

PRIMERGY 460 with PRIMERGY 502DS/702DS RAID subsystem

With this configuration the PRIMERGY 460 is used as a cluster server. The configuration instructions with regard to networks, shared-SCSI bus, CAN bus and subsystem are contained in the section "PRIMERGY 560/760 with PRIMERGY 502DS/702DS RAID subsystem".

In contrast to the previously described configurations, with the PRIMERGY 460 an onboard SCSI controller can be used to connect the accessible drives. The PCI slots in the PRIMERGY 460 can be assigned as follows:

PCI slot Device

1 (Slot 2 short) (not assigned)

2 (Slot 3) SCSI controller for internal hard disks (e.g., Mylex DAC960PD)

3 (Slot 4) SL22802 (shared SCSI bus 1 and 2)

4 (Slot 5) (not assigned)

5 (Slot 6) Network interface card for client network (e.g., Intel Pro100B)

6 (Slot 7) Network interface card for interconnect (e.g., Intel Pro100B)


PRIMERGY 460/560/760 with RAID subsystem PRIMERGY 502DF/702DF

The large number of certified fiber channel components results in many possible combinations for the design of an MSCS configuration. The following two illustrations show examples of the schematic design of an MSCS configuration with point-to-point copper wiring and an MSCS configuration with fiber-optic wiring via a hub.

[Figure: PRIMERGY 460/560/760 with RAID subsystem PRIMERGY 502DF/702DF, point-to-point copper wiring. Each server contains RemoteView, mirrored internal hard disks on a Mylex DAC controller, a QLogic QLA2100 fiber channel controller, a CAN connection and two NICs for the LAN (client network) and the LAN (interconnect); both servers are wired directly to the PRIMERGY 702DF via fiber channel (point-to-point, copper), and servers and subsystem are linked by the CAN bus.]


[Figure: PRIMERGY 460/560/760 with RAID subsystem PRIMERGY 502DF/702DF, fiber-optic wiring via a hub. As above, but each server contains a QLogic QLA2100F controller connected by fiber-optic cable (fiber GBICs) to a Vixel Rapport 1000 hub; the hub is connected to the PRIMERGY 702DF via a copper fiber channel link.]

With these configurations the subsystem PRIMERGY 502DF/702DF is used for the shared hard disks.

With copper wiring, fiber channel controllers of the type QLogic QLA 2100 are used in the cluster servers. With fiber-optic wiring, the controller QLogic QLA 2100F is used. Both controllers use the driver ql2100.sys. This driver is contained on the ServerStart CD.

! Switch off the BIOS of the QLogic fiber channel controller. This prevents any difficulties when booting the cluster servers. MSCS does not require the controller BIOS.

The hard disks in the subsystem are connected to the cluster servers with the fiber bridge RAID controller Mylex DAC960SF.

The wiring via a fiber channel hub enables the connection of several subsystems. A hub of the type Vixel Rapport 1000 is used.

Depending on the configuration, Media Interface Adapters (MIA) are also used as adapters between different transfer media. The cables are connected to the hub and to the fiber bridge RAID controller with Gigabit Interface Converters (GBIC) for copper or fiber-optic cable.


The requirements of MSCS for the cluster servers and the network cards are the same as with the configuration PRIMERGY 560/760 with RAID subsystem PXRC.

The PCI slots in the PRIMERGY 560/760 can be assigned as follows:

PCI slot Device

1 (Slot 4) SCSI controller for internal hard disks (e.g., Mylex DAC960PD, DAC960PG or DAC960PJ)

2 (Slot 5) Fiber channel controller QLogic QLA 2100/2100F

3 (Slot 6) SCSI controller for CD-ROM, tape drive, etc. (e.g., Adaptec 2940UW)

4 (Slot 7) (not assigned)

5 (Slot 8) Network interface card for client network (e.g., Intel Pro100B)

6 (Slot 9) Network interface card for interconnect (e.g., Intel Pro100B)

With the PRIMERGY 460 the slots can be assigned as follows:

PCI slot Device

1 (Slot 2 short) (not assigned)

2 (Slot 3) SCSI controller for internal hard disks (e.g., Mylex DAC960PD, DAC960PG or DAC960PJ)

3 (Slot 4) Fiber channel controller QLogic QLA 2100/2100F

4 (Slot 5) (not assigned)

5 (Slot 6) Network interface card for client network (e.g., Intel Pro100B)

6 (Slot 7) Network interface card for interconnect (e.g., Intel Pro100B)

To integrate a PRIMERGY 502DF/702DF into RemoteView or ServerView, the device must be connected to the CAN bus. For detailed explanations of the CAN bus, see the operating manuals "CAN-Bus/CAN-MMF" and "PRIMERGY 502/702".

For additional information, see the manuals for the individual hardware components.


Operating system installation

Requirements

The PRIMERGY MSCS configurations require the operating system Windows NT Server Enterprise Edition 4.0.

Before installing the operating system, check whether the following requirements are met:

• The cluster servers have the latest BIOS installed. The current BIOS is available from your technical contact.

• The SCSI controllers in the cluster servers contain the respective current firmware.

• Both cluster servers are connected to the client network.

• Both servers are linked via an interconnect.

• Cluster servers and subsystem(s) are linked via shared SCSI bus.

• The SCSI controllers for the shared-SCSI bus are properly configured.

• Each SCSI bus is properly terminated.

Each node of an MSCS cluster contains its own operating system on an internal hard disk. Always use ServerStart for the configuration of the cluster servers before installing the operating system. As a result, you have access to all required drivers, tools and utilities. ServerStart is supplied with each PRIMERGY server.

i The SCSI subsystem is not configured until after the successful installation of the operating system on both cluster servers and must remain switched off until then.

Example configuration

The following describes the operating system installation for the MSCS configuration PRIMERGY 760 with RAID subsystem PRIMERGY 702.

The cluster servers of this example installation are both equipped as follows:

• 2 PentiumPro 200 MHz processors

• 256 MB main memory

• 2 Intel Pro100B network cards

• Mylex DAC960PD RAID controller, for 2 x 4GB internal hard disks

• Symbios Logic 22802 shared-SCSI controller

• Adaptec 2940UW SCSI controller for CD-ROM and 4 mm DAT

• RemoteView hard disk


ServerStart

Boot the two cluster servers with the ServerStart start-up floppy disk.

Installing the first cluster server changes the content of the start-up floppy disk; it can, however, be reset to its original state with the DEFAULT.BAT program. Alternatively, you can generate a new start-up floppy disk with ServerStart and use it to install the second cluster server.

Configuration of local system drives

Set up one RAID system on the local hard disks of each of the two cluster servers. Use the "Mylex Configuration Utility" to do this. A detailed description of the Mylex Configuration Utility is contained in the manual for the Mylex controller.

Start the Mylex Configuration Utility with the ServerStart menu entry Tools.

Combine the two local hard disks into a pack and set up two system drives with the following parameters:

System Drive 1: RAID 1 1024 MB

System Drive 2: RAID 1 Remaining storage space

Initialize the two system drives. Then exit the Mylex Configuration Utility.
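As a worked example of this split: with the two mirrored 4 GB disks of the example configuration (RAID 1), the pack provides roughly 4 GB of net capacity, so System Drive 1 receives 1024 MB for the operating system and System Drive 2 the remaining approximately 3 GB for data.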

Server Configuration Utility

Start the Server Configuration Utility (SCU) with the ServerStart menu entry Tools. Add the adaptation for ServerView and RemoteView to the system BIOS with the SCU. The required steps are described in the "ServerView" user manual.


Installation instructions

Start the installation of the operating system Windows NT from the ServerStart interface.

When installing the operating system on the two cluster servers, observe the following instructions:

• Install the operating system in an NTFS partition on system drive 0.

• Use current drivers for the SCSI controllers. These are contained on the ServerStart CD or on the floppy disks supplied with the SCSI controllers. The SCSI drivers on the Windows NT CD are no longer current.

• Install the SNMP service if you would like to use ServerView.

• Install the WINS service on both servers if no other WINS server is available in your network.

• Install the TCP/IP protocol.

• Use IP addresses from different subnetworks for the client network and the interconnect (see the example after this list).

• Configure the role of the two cluster servers in your Windows NT domain. Here you have the following three possibilities:
  - One cluster server is the Primary Domain Controller (PDC) of its own domain, and the other server is the Backup Domain Controller (BDC) of this domain. In this case you cannot integrate the two cluster servers into an already existing domain later.
  - Both servers are BDCs in an existing domain.
  - Both servers are member servers in an existing domain.

• Install the current Service Pack. This is assumed during the MSCS installation.
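As an illustration of the subnetwork separation, an address scheme like the following could be used (the addresses are hypothetical examples, not values required by MSCS):

  Cluster Server 1: client network NIC 192.168.1.1, interconnect NIC 10.1.1.1
  Cluster Server 2: client network NIC 192.168.1.2, interconnect NIC 10.1.1.2
  Subnet mask 255.255.255.0 on all four adapters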

Setting up additional partitions

Set up additional partitions on both servers with the "Windows NT Disk Administrator". Assign the same drive letters for the system partition on the two servers.


Shared SCSI configuration

In the following, the configuration of the shared-SCSI bus for the PRIMERGY 760 with RAID subsystem PRIMERGY 702 is described. The same cluster hardware is assumed as in the section "PRIMERGY 560/760 with PRIMERGY 502DS/702DS RAID subsystem". The subsystem used here contains a Mylex DAC960SX 2-channel RAID controller and five 4GB hard disks.

The shared-SCSI system is first configured from one of the two cluster servers (Cluster Server 1). Then the access of the second server (Cluster Server 2) to the shared hard disks is set up. The following steps are required:

• Preparations on Cluster Server 1

• Setting up of the shared hard disks on Cluster Server 1

• Preparations on Cluster Server 2

• Setting up of the shared hard disks on Cluster Server 2

Preparations on Cluster Server 1

i Switch off Cluster Server 2 and switch on the SCSI subsystem.

CAN bus settings

If you want to integrate the PRIMERGY 702 into ServerView, the CAN bus settings must be stored. To do this, start the utility SEMAN.EXE either via RemoteView or via ServerStart. This utility cannot be started under Windows NT.

First determine the CAN bus ID of the PRIMERGY 702 with the menu item Automatic Search For Storage Extensions In Current Cluster. Then enter this ID in the menu item Select Target Cluster and Cabinet Number.

If the firmware on the controller in the PRIMERGY 702 does not correspond to the current version, update it with the menu item Perform Firmware Flash From Flash-File. For additional information on the CAN bus, see the manual "CAN-Bus/CAN-MMF".


Configuration of Symbios Logic 22802 SCSI controller

If the firmware of the SCSI controller does not correspond to the current version, update it with the utility FLASH8X5.EXE. To do this, you must start the system from a DOS boot diskette.

For cluster mode the BIOS of the Symbios Logic controller must be switched off. To do this, open the SCSI configuration utility during the system start-up with the key combination Ctrl+C and change the controller settings accordingly with Change Adapter Status.

Configuration of Mylex DAC960SX RAID controller

If the firmware of the RAID controller does not correspond to the current version, update it with the Windows NT utility SPIF.EXE or the MS-DOS utility FWLOADSX.EXE. However, you can only use SPIF.EXE if a hard disk with a drive letter is already configured in the SCSI subsystem under Windows NT.

CD-ROM drive

Start the Windows NT Disk Administrator and assign the drive letter Z to the CD-ROM drive.

RemoteView hard disk

If you would like to access the RemoteView hard disk from Windows NT, first install the driver IDE CD-ROM Atapi via the Windows NT control panel under the menu item SCSI Adapters. Then carry out a reboot and activate the IDE adapter in the system BIOS. Finally, assign the RemoteView hard disk a drive letter with the Windows NT Disk Administrator, for example X.


Instructions on setting up shared hard disks on Cluster Server 1

Now the configuration of the shared hard disks in the SCSI subsystem can be started. To do this, use either the utility RAIDFX or the Global Array Manager (GAM) software.

Both configuration tools are basically suited for configuring the shared hard disks, but differ in their range of functions. RAIDFX can only be used for the initial configuration prior to the operating system installation. GAM, on the other hand, is a Windows NT application which is also suitable for carrying out later adjustments and can also be used for monitoring purposes.

In this section you will find several special instructions on the use of RAIDFX and GAM.

For detailed information, see the manuals included with the controllers.

! MSCS does not recognize any partitions on the shared disks, only whole system drives (physical disks). If smaller units are required, the pack must already have been divided into several system drives with the Mylex Configuration Utility.

Do not write any data to shared disks before the MSCS installation.

RAIDFX

With RAIDFX you can configure hard disks connected to a Mylex RAID controller when access takes place via another SCSI controller. This utility program is supplied together with the RAID controller and is contained on the ServerStart CD.

The following illustration shows the user interface of RAIDFX.

[Figure: User interface of RAIDFX]

Configure the hard disks in the subsystem in accordance with your requirements and initialize the created system drives.

Create and format partitions on the shared hard disks with the Windows NT Disk Administrator.

Page 30: SERVER - Fujitsumanuals.ts.fujitsu.com/file/4225/ps-cluster-mscs-en.pdf · Cluster server Each server in the cluster is designated as a node. With MSCS 1.0 the cluster consists of

Shared SCSI configuration

22 - English S26361-F1790-Z101-2-7419

For additional instructions, see the "RAIDfx Manager User Guide", which is also supplied with the controller.

Global Array Manager (GAM)

The Global Array Manager (GAM) is an extensive program for the configuration and administration of drives connected to a Mylex controller. GAM consists of a client and a server part. This section describes the installation and configuration of the GAM server software.

For additional information on GAM, see the following publications, which are available on the Mylex Internet pages:

GAM™: Global Array Manager

DAC960 Software Kit Installation Guide and User Manual

Global Array Manager Client Software Installation Guide and User Manual

GAM client program

The GAM client program is used for monitoring and administration. The GAM client program uses the GAMROOT user account. This account requires no special permissions as long as it is a domain user.

GAM server program

The GAM server program collects information on the RAID controller and the connected hard disks on the server and supplies this information to the GAM client program. In addition, the GAM server program executes instructions initiated by the GAM client program.

GAM requires the Mylex driver GAMDRV.SYS. This driver is contained on the GAM server floppy disk in the directory A:\NT. Log on to the system as a user with administrator authorization and install GAMDRV.SYS via Control Panel → SCSI Adapters. Then start the GAM server setup program SETUP.EXE from the GAM server floppy disk.

During installation you will be asked to adapt the configuration file GAMSCM.CNF. When doing so, observe the following instructions:

• The GAMSCM.CNF file is located in the directory %SYSTEMROOT%\SYSTEM32\GAMSERV\

• The line with the program call #@gamconfg.exe must not remain commented out, so that SCSI-to-SCSI connections are supported. It must look like this: @gamconfg.exe -c 1 -t 16 -T 1111111101111111

• The line with the program call #gamevent.exe must not remain commented out, so that both event logging and administration are activated. It must look like this: gamevent.exe -s 1 -p 158 -h <IP-Address GAM-Client>

• Reboot the server after installing GAM.
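For illustration: assuming the GAM client runs on a (hypothetical) machine with the address 192.168.1.50, the two program call lines in GAMSCM.CNF would read as follows after editing:

  @gamconfg.exe -c 1 -t 16 -T 1111111101111111
  gamevent.exe -s 1 -p 158 -h 192.168.1.50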


Preparations on Cluster Server 2

i Switch on Cluster Server 2.

Configure the Symbios Logic SCSI controller, the CD-ROM drive and the RemoteView hard disk exactly as for Server 1. When doing so, observe the instructions in the section "Preparations on Cluster Server 1".

Instructions on setting up shared hard disks on Cluster Server 2

As the shared hard disks were already set up from Server 1 with RAIDFX or GAM, you only need to assign the same drive letters to the shared hard disks on Server 2 as on Server 1 with the Windows NT Disk Administrator. As a result, Cluster Server 2 can also access the shared hard disks.


MSCS software

Installation conditions

The following conditions apply for the installation of the Microsoft Cluster Server:

• WINS servers are entered on both cluster servers.

• The two cluster servers can reach each other with the command ping, both via the interconnect and via the client network (see the check below).

• SNMP is installed on both servers so that ServerView can be integrated.

• The Primary Domain Controller (PDC) can be reached from both servers.

• Both servers use the same drive letters for the internal system drives.

• A user for the cluster service is set up as a domain account. This user is automatically assigned the necessary rights during the installation of MSCS.

• There is still no data on the shared disks.

• Both servers must be located in the same domain.
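The ping condition can be verified from each node against both addresses of the other node. With the hypothetical addresses from the installation example above, the check on Cluster Server 1 would look like this:

  ping 10.1.1.2       (Cluster Server 2 via the interconnect)
  ping 192.168.1.2    (Cluster Server 2 via the client network)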

The installation of MSCS is independent of the hardware used, and is therefore the same for all MSCS configurations.

A detailed description of the installation is contained in the MSCS Administrator's Guide.


Cluster Administrator

The Cluster Administrator is a program for administering MSCS.

The Cluster Administrator runs on any computer under the operating system Windows NT Workstation or Server. This means that you need not administer MSCS from one of the two cluster servers, but can also use remote administration via the network from another Windows NT computer.

The installation and the functionality of the Cluster Administrator are described in the MSCS Administrator's Guide.

Be sure to observe the following instructions for the administration of MSCS. Otherwise MSCS may have to be reinstalled.

! Only users who belong to the group of local administrators on the cluster servers can remotely administer MSCS. Specify the user rights accordingly via the menu item Properties in the Cluster Administrator.

Note that Service Pack 3 must be reinstalled after the installation of new software or hardware components. For details, see the section Installing Service Packs on Cluster Nodes in the MSCS Administrator's Guide.

After making changes to the physical disk configuration of the shared disks, both nodes must be restarted.

Before deleting partitions on the shared SCSI bus, the related disk resources must be deleted.

IP addresses used by the resource Network Name must not be changed.

The drive names assigned to the system disks must not be changed.


Standard resources

Any application, hard disk, service, etc. on the cluster is called a resource. A resource can only be assigned to one cluster server at a time. This cluster server then "owns" the resource. A resource can have the status online or offline, i.e. it may be available or not available in the cluster. Resources are implemented as Dynamic Link Libraries (DLLs).

If one of the two cluster servers fails, the other server can take over the resources of the failed server, for example the applications. This procedure is called Failover.

The Microsoft Cluster Server includes the following resources:

• Time Service
• Print Spooler
• Physical Disk
• Network Name
• Microsoft Message Queue Server
• IP Address
• IIS Virtual Root
• Generic Service
• Generic Application
• File Share
• Distributed Transaction Coordinator

The meaning of these resources is explained in the MSCS Administrator's Guide.

The administration of the resources contained in MSCS is integrated in the Cluster Administrator. The following illustration shows the user interface of the resource administration.

[Figure: User interface of the resource administration]

Applications and Windows NT services can be integrated in MSCS with resources. Applications written especially for MSCS (cluster-aware applications, for example Microsoft SQL Server 6.5/EE or Oracle Fail Safe) are supplied together with additional resource Dynamic Link Libraries by the software manufacturer. The installation of the application then automatically integrates these libraries. Cluster-aware applications configure their resource groups automatically with a cluster wizard.


MSCS error situations

The following list explains several typical MSCS error situations. For additional information on this topic, see the section "Troubleshooting" in the "MSCS Administrator's Guide".

Network error: Interconnect fails (e.g. the interconnect cable on a cluster node is pulled off).
Response: No reaction. The cluster communication continues via the Clientconnect.

Network error: Clientconnect fails (e.g. the Clientconnect cable on a cluster node is pulled off while clients are accessing a failsafe cluster resource File Share on this node).
Response: MSCS does not check the client connections, and therefore initiates no Failover when this connection is lost.

Network error: Clientconnect and interconnect fail (e.g. the interconnect and Clientconnect cables are pulled off a cluster node).
Response: The node which possesses the quorum disk after the interruption of the cluster communication remains in the cluster. The quorum disk is used to save cluster information; only one server can possess it at a time. If the node which possesses the quorum disk also possesses the network connections from the cluster's standpoint, but in fact no longer has a connection to the clients, the following situation results: although the node remains in the cluster, it can no longer be reached by the clients, while the other node can be reached by the clients but is no longer able to offer cluster resources.

Shared SCSI error: SCSI error at a node (e.g. the SCSI cable on a cluster node is pulled off while an application on the other node is working with the SCSI subsystem).
Response: If there is only one physical shared SCSI bus (cluster with PXRC), it is no longer terminated, resulting in SCSI errors. If the nodes are connected to the subsystem via two physical shared SCSI buses (cluster with PRIMERGY 502/702), one node can continue to work with the subsystem, and a Failover of cluster resources follows after several minutes.

Shared SCSI error: SCSI error on the subsystem (e.g. the SCSI cable on the subsystem is pulled off while an application on a cluster node is working with the SCSI subsystem).
Response: The subsystem SCSI controller terminates the shared SCSI bus, and the affected node that is no longer linked to the shared SCSI bus is automatically removed from the cluster. Its resources are transferred to the remaining node after several minutes.

Miscellaneous error: Cluster node in system-hung status (e.g. a cluster node was started in "debug mode" with a connected "debug terminal" and the key combination CTRL+PRINTSCREEN is pressed).
Response: All cluster resources are taken over by the remaining Node 2. When Node 1 is ready for operation again, it can no longer access the shared hard disks (access is blocked on the SCSI level), and no errors occur in the running applications on Node 2.

Miscellaneous error: Node shutdown while applications are running.
Response: All cluster resources are taken over by the remaining node.

Miscellaneous error: Powering off a node while applications are running.
Response: All cluster resources are taken over by the remaining node.

Miscellaneous error: Failure of a node during a rebuild in the shared SCSI subsystem.
Response: All cluster resources are taken over by the remaining node; the rebuild continues without interruption.


ServerView integration

MSCS is supported by ServerView Version 2.0 and higher. The current ServerView version is part of ServerStart, which is included with every PRIMERGY server.

The purpose of the MSCS integration in ServerView is to automatically pass on changes in the MSCS resources to the ServerView Manager.

Installation of ServerView agents

Overview

ServerView is based on software agents running in the background, each of which is responsible for the system management of individual server components. For example, there are agents for the RAID controller, network and operating system. MSCS is integrated in ServerView with a special NT cluster agent.

The installation of ServerView agents is described in the "ServerView" user manual. This section describes the MSCS-specific aspects of the agent installation; the decisive factor here is the support of MSCS and Mylex RAID controllers.

Install the ServerView agents on both cluster servers as described in the following.

Setup type

Start the installation of the ServerView agents with the ServerStart CD.

Depending on whether you wish to reinstall ServerView, update an existing installation or remove an existing installation, select the option New installation, Update existing installation or Remove installation.

Selecting agents

The ServerView setup offers a number of agents for selection in the Select the agents you want to install window. The following illustration shows the agents required for MSCS.


[Figure: Selecting agents]

Additional information

For additional information on installation, see the "ServerView" manual.

Follow the installation instructions in accordance with your requirements, exit the setup and reboot the system.

Install the ServerView agents on the second cluster server exactly as on the first server.

MSCS in ServerView Manager

The program ServerView Manager is the graphic server management user interface of ServerView. This program communicates with the agents you have installed on both cluster servers. The following section describes the integration of MSCS in the ServerView Manager.


Inserting the cluster server in the computer list

Start the Server Browser with the menu item File→New Server. Select the Server Address tab and enter the virtual name of the cluster that you specified with the MSCS cluster administration. Enter the IP address of the cluster and mark the option Cluster. The individual cluster servers are then integrated in the ServerView computer overview.

Cluster View

The ServerView window Cluster View shows a list of all configured MSCS resources (see illustration). The view in the Cluster View corresponds to the view in the MSCS Cluster Administrator.

[Figure: Cluster View]


All elements defined in MSCS are displayed:

• Nodes
• Groups
• Resources
• Resource types
• Networks
• Network interfaces

Cluster View contains the information received by the ServerView Manager from the NT cluster agents. The view is automatically updated when the NT cluster agent has reported a change. The following changes within the cluster are displayed:

• Status change of a group
• Status change of a resource
• Status change of a network and/or a node
• Adding or deleting of groups
• Adding or deleting of resources
• Adding or deleting of networks
• Adding or deleting of nodes

For additional information on the information shown in the Cluster View window, see the "ServerView" manual and the "MSCS Administrator's Guide".
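The automatic updating of the Cluster View rests on change notifications exposed by the cluster software, which the NT cluster agent forwards as traps. As an illustration of the underlying mechanism only (not of the agent's actual implementation), the following sketch receives such changes through a Cluster API notification port:

```c
/* Sketch: receive cluster change notifications through a Cluster API
   notification port, the kind of event stream described above.
   Link against ClusAPI.lib. */
#include <windows.h>
#include <clusapi.h>
#include <stdio.h>

int main(void)
{
    HCLUSTER hCluster = OpenCluster(NULL);
    HCHANGE hChange;
    DWORD_PTR key;
    DWORD filter, cchName;
    WCHAR name[256];
    int i;

    if (hCluster == NULL) return 1;

    /* INVALID_HANDLE_VALUE creates a new notification port for
       node, group and resource state changes. */
    hChange = CreateClusterNotifyPort(INVALID_HANDLE_VALUE, hCluster,
                                      CLUSTER_CHANGE_NODE_STATE |
                                      CLUSTER_CHANGE_GROUP_STATE |
                                      CLUSTER_CHANGE_RESOURCE_STATE, 0);
    if (hChange == NULL) { CloseCluster(hCluster); return 1; }

    for (i = 0; i < 10; i++) {           /* report the next few events */
        cchName = 256;
        if (GetClusterNotify(hChange, &key, &filter, name,
                             &cchName, 30000) == ERROR_SUCCESS)
            wprintf(L"change 0x%lx on %s\n", filter, name);
    }
    CloseClusterNotifyPort(hChange);
    CloseCluster(hCluster);
    return 0;
}
```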


Traps

Changes in the cluster are reported to the ServerView Manager by the NT cluster agents on the cluster nodes. These messages are designated as Traps from the standpoint of the transmitter, i.e. the NT cluster agent. From the standpoint of the receiver, i.e. the ServerView Manager, they are referred to as Alarms. ServerView uses the following messages:

• abnormal cluster status: SNMP cannot access the cluster software. Trap sent from server %s.
• normal cluster status: SNMP gained access to the cluster software. Trap sent from server %s.
• abnormal cluster status: SNMP lost access to the cluster software. Trap sent from server %s.
• node deleted: The node %s has been deleted.
• node added: The node %s has been added.
• node state change: The node %s has changed its state.
• resource type deleted: Resource type %s has been deleted.
• resource type created: Resource type %s has been created.
• group deleted: Group %s has been deleted.
• group created: Group %s has been created.
• group state change: Group %s has changed its state.
• group properties change: The properties of group %s have changed.
• resource deleted: Resource %s has been deleted.
• resource added: Resource %s has been added.
• resource state change: Resource %s has changed its state.
• resource properties change: The properties of resource %s have changed.
• cluster attributes change: The attributes of registry key %s have been changed.

%s is a placeholder for the name of the corresponding element (e.g. resource name, group name or node name).


Alarm Manager

The ServerView window Alarm Manager shows alarm messages on the cluster status which the ServerView Manager receives from the NT cluster agents in the form of traps.

The following illustration shows the Alarm Manager window.

[Figure: Alarm Manager]

With the menu item Alarms→Alarm group settings you can set which alarms are to be displayed. Possible alarms include:

• node state change
• normal cluster status
• abnormal cluster status
• resource deleted

With the ServerView Alarm Manager you can set how the management console is to react to traps. For example, service personnel can be informed via a pager in the case of certain alarms.

For additional information on this subject, see the documentation for ServerView.


MSCS applications

Requirements for applications

Applications are integrated in MSCS with special Dynamic Link Libraries (DLLs). The resource DLL for Generic Applications included in the delivery scope of MSCS is suitable for the integration of simple applications, such as Microsoft Clock or Notepad. The condition for this is that the entire application can be started via the command line.
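As an illustration, such a Generic Application resource can be created not only in the Cluster Administrator but also via the Cluster API. The following sketch registers Notepad in an existing group; the group name "MyGroup" is a placeholder, and setting the resource's command line property afterwards is omitted:

```c
/* Sketch: register Notepad as a Generic Application resource in an
   existing group. "MyGroup" is a placeholder; the CommandLine private
   property must still be set (e.g. in the Cluster Administrator).
   Link against ClusAPI.lib. */
#include <windows.h>
#include <clusapi.h>

int main(void)
{
    HCLUSTER hCluster = OpenCluster(NULL);
    HGROUP hGroup;
    HRESOURCE hRes;

    if (hCluster == NULL) return 1;
    hGroup = OpenClusterGroup(hCluster, L"MyGroup");
    if (hGroup == NULL) { CloseCluster(hCluster); return 1; }

    /* The resource type name selects the Generic Application DLL. */
    hRes = CreateClusterResource(hGroup, L"Notepad", L"Generic Application", 0);
    if (hRes != NULL)
        CloseClusterResource(hRes);

    CloseClusterGroup(hGroup);
    CloseCluster(hCluster);
    return hRes != NULL ? 0 : 1;
}
```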

However, optimal support by MSCS requires a special resource DLL tailored to the application. Various software manufacturers have already adapted their products to MSCS, for example Microsoft, Oracle, Baan and SAP.

The interface between MSCS and the application can be programmed as a DLL with the Cluster Application Programming Interface (Cluster API).
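A resource DLL exports a fixed set of entry points which the resource monitor calls. The following is a heavily abridged sketch of these callbacks, assuming the Resource API declarations in resapi.h; the registration of the function table through the DLL's Startup routine and all real start and stop logic are omitted:

```c
/* Hedged sketch of the entry points a resource DLL exports; the
   "Sample" names are placeholders. Requires the Resource API
   declarations from resapi.h. */
#include <windows.h>
#include <resapi.h>

RESID WINAPI SampleOpen(LPCWSTR ResourceName, HKEY ResourceKey,
                        RESOURCE_HANDLE ResourceHandle)
{
    /* Allocate and return a private context for this resource instance. */
    return (RESID)LocalAlloc(LPTR, sizeof(DWORD));
}

DWORD WINAPI SampleOnline(RESID ResourceId, PHANDLE EventHandle)
{
    /* Start the application or service this resource represents. */
    return ERROR_SUCCESS;
}

DWORD WINAPI SampleOffline(RESID ResourceId)
{
    /* Stop it cleanly before a failover or administrator request. */
    return ERROR_SUCCESS;
}

BOOL WINAPI SampleLooksAlive(RESID ResourceId)
{
    /* Cheap health check, called frequently by the resource monitor. */
    return TRUE;
}

BOOL WINAPI SampleIsAlive(RESID ResourceId)
{
    /* Thorough health check, called less often. */
    return TRUE;
}

VOID WINAPI SampleTerminate(RESID ResourceId)
{
    /* Forced, immediate stop. */
}

VOID WINAPI SampleClose(RESID ResourceId)
{
    LocalFree((HLOCAL)ResourceId);
}
```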

MSCS also enables the replication of registry entries (under HKEY_LOCAL_MACHINE). For information on this, see the "MSCS Administrator's Guide".

The "MSCS Developers Kit" and cluster API descriptions can be obtained via an MSDN subscriptionor via the Microsoft Internet pages.

Failover of applications

Microsoft Cluster Server Version 1 supports an "Active/Passive Failover" of applications. This means that an application in the cluster runs exclusively on one cluster server at a time, i.e. it is active on one cluster node and passive on the other. It can, however, be automatically restarted on the other cluster server.
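A takeover can also be triggered deliberately, for example for maintenance. The following sketch performs a manual failover of a group via the Cluster API; the group name "MyGroup" is again a placeholder:

```c
/* Sketch: move a resource group to the other cluster node, i.e. a
   manually triggered failover. "MyGroup" is a placeholder.
   Link against ClusAPI.lib. */
#include <windows.h>
#include <clusapi.h>

int main(void)
{
    HCLUSTER hCluster = OpenCluster(NULL);
    HGROUP hGroup;
    DWORD status = ERROR_INVALID_HANDLE;

    if (hCluster == NULL) return 1;
    hGroup = OpenClusterGroup(hCluster, L"MyGroup");
    if (hGroup != NULL) {
        /* A NULL destination lets MSCS pick another suitable node.
           ERROR_IO_PENDING means the move was started asynchronously. */
        status = MoveClusterGroup(hGroup, NULL);
        CloseClusterGroup(hGroup);
    }
    CloseCluster(hCluster);
    return (status == ERROR_SUCCESS || status == ERROR_IO_PENDING) ? 0 : 1;
}
```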

The ability of an application to run in parallel on several cluster nodes will not be supported until a later MSCS version. The simultaneous use and optimal loading of both cluster nodes ("load balancing") will then increase the performance of the application.

Virtual server

The software to be installed on the cluster servers is limited to the server part of an application. Only when the server is also to be used as a client must client software also be installed on the server. MSCS communicates with the server part of the application, and the server part communicates with the client software. This means the client software has no direct interface to MSCS.

The application becomes a member of a virtual server that is defined in the cluster. A virtual server combines all cluster resources which an application requires in the cluster, for example hard disks, NT services, applications, a virtual IP address, a virtual NETBIOS name and a fileshare. Virtual servers are configured with the MSCS Cluster Administrator.

Page 46: SERVER - Fujitsumanuals.ts.fujitsu.com/file/4225/ps-cluster-mscs-en.pdf · Cluster server Each server in the cluster is designated as a node. With MSCS 1.0 the cluster consists of

MSCS applications

38 - English S26361-F1790-Z101-2-7419

The following illustration shows the function of the virtual server.

[Figure: Virtual server. Resources combined in a virtual server: hard disk 1, hard disk 2, NT service, application, fileshare, virtual IP address, virtual NETBIOS name]

MSCS clients connect to the application via the virtual NETBIOS name defined in MSCS. Here it is irrelevant for the clients which cluster server the virtual server is located on at any given time. The resources of a virtual server cannot be distributed over both cluster servers, however. If applications are to be distributed, the components of the applications must be divided into two virtual servers. This is possible only if the application supports such a distribution.

The "MSCS Administrator's Guide" contains a detailed description of the virtual server function.

Since applications in the cluster are always addressed via a single virtual server while there are actually two servers with different network interface cards, the clients must be able to handle different MAC addresses. The client maintains the MAC address to IP address assignments in its ARP cache. After a failover of an application, the new MAC address belonging to the IP address is announced in the LAN via TCP/IP broadcasts when the virtual IP address is restarted. The client then corrects the MAC address in its ARP cache. If a client is linked to the cluster via a router, it will not receive the broadcast. In this case it is not the MAC address of the virtual server that is saved in the ARP cache of the client, but the MAC address of the router, and the router corrects its own ARP cache.
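From the client's standpoint this transparency means that ordinary name-based networking code suffices; nothing cluster-specific is required. The following Winsock sketch connects to a service on the virtual server; the name "virtsrv" and port 1433 are placeholders:

```c
/* Sketch: a client connects to the virtual server by name, unaware of
   which physical node currently owns it. "virtsrv" and port 1433 are
   placeholders. Build with: cl client.c ws2_32.lib */
#include <winsock2.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;
    struct hostent *he;
    struct sockaddr_in addr;
    SOCKET s;

    if (WSAStartup(MAKEWORD(1, 1), &wsa) != 0) return 1;

    /* Resolution of the virtual name goes through WINS/DNS as usual. */
    he = gethostbyname("virtsrv");
    if (he == NULL) { WSACleanup(); return 1; }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(1433);
    memcpy(&addr.sin_addr, he->h_addr_list[0], he->h_length);

    s = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        printf("connected to the virtual server\n");
    closesocket(s);
    WSACleanup();
    return 0;
}
```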


Example applications

The following examples describe applications which can already be integrated in MSCS today with a special resource DLL.

Oracle Failsafe

With the software product Oracle Failsafe, an Oracle database as of Version 7.3.3 can be integrated in MSCS.

To do this, first install Oracle and Oracle Failsafe on the two cluster servers on an internal hard disk. Then create the database from one server. The data and log files must be located on the shared hard disks.

With the Failsafe Manager you can then make the local database available in the cluster. In the process a virtual server is created for the database which, in addition to the MSCS resource for Oracle, also contains the shared hard disks and the Oracle NT services.

The delivery scope of Oracle Failsafe contains the manual "Concepts and Administration Guide", which describes both the installation of the software and the operation of the Failsafe Manager. In addition, you will also find information on Oracle Failsafe on the Oracle Internet pages.

Microsoft SQL Server Enterprise Edition

An SQL Server database can be integrated in MSCS with the Microsoft SQL Server 6.5 Enterprise Edition. In addition to the database software, this version also contains a "Cluster Wizard" for the integration of local databases in MSCS. You can integrate several databases in the cluster; each SQL Server database requires its own virtual server.

For detailed information on the SQL Server 6.5 Enterprise Edition, see the Microsoft Internet pages.


SAP R/3

The SAP cluster software can be used for test systems as of R/3 Version 3.1H. Use for productive systems is not released until Versions 3.1I and 4.0B.

Currently, a typical R/3 solution consists of a central system, application servers and a test system. The central system in turn consists of a database and an R/3 central instance. If the central system fails, the test system can take over the hard disks of the production system and, after a restart, the R/3 production system. The high-availability solution ServerShield from Fujitsu Siemens automates the transition from the central system to the test system in a manner that is reliable and simple for the administrator. In the process, one or more SCSI switches are used. Without ServerShield the SCSI cables must be reconnected manually. For additional information, see the manual for ServerShield.

With a cluster on the basis of MSCS the situation is simpler: since both server nodes are connected via the shared SCSI bus, a SCSI channel changeover switch with more complex wiring is not necessary. As a result, no switching over is required after a failover, and the SCSI bus allows immediate access to the drives. In addition, both servers act as primary systems in an MSCS cluster, and a secondary server as a monitoring system is not required.

With the MSCS version of R/3, the central system, i.e. the database and the R/3 central instance, runs on both cluster servers. The database and central instance can be distributed to the two servers, or both can run on the same cluster server. By distributing the R/3 central instance and the database to the two cluster servers, load balancing is achieved. As a result, the performance of the central system can be increased by approximately 10 %. If a server fails, the other dynamically takes over, i.e. without a reboot, not only the applications already running, but also the hard disks and applications of the failed server.

As MSCS provides its own virtual server for each application, the MSCS version of R/3 operates with two virtual servers: one for the database and one for the central instance.

For this reason, a cluster with SAP R/3 requires a total of the following five IP addresses in the client network:

• IP address of Cluster Server 1
• IP address of Cluster Server 2
• Virtual IP address of the cluster
• Virtual IP address of R/3
• Virtual IP address of the database

Clients communicate with the R/3 central instance via the virtual R/3 NETBIOS name. Here it is irrelevant which cluster server the virtual R/3 server for the central instance is located on. If a failover of the R/3 central instance occurs, the clients must reconnect to the central instance.


The R/3 central instance communicates with the database via the virtual DB NETBIOS name. Here it is irrelevant whether R/3 and the database are located on the same server or on separate servers.

Note that the test system and additional application servers may not run on the cluster servers. Otherwise a second R/3 system would be started on the other server if one cluster server were to fail. However, R/3 excludes the parallel running of several systems on one server.

The R/3 cluster installation integrates a local R/3 system in MSCS via a cluster utility. For information on the installation, see the SAP manual "R/3 Installation on Windows NT (MSCS)".

The R/3 system files (directory \usr\sap) are stored on a separate system drive on the SCSI subsystem that may not be used for any other applications. The quorum disk may not be used for R/3 or the database. With Oracle databases, the Oracle software (directory: ORANT) is stored on the local hard disks in both cluster servers. The log files and table spaces for Oracle databases are distributed to at least two system drives in the SCSI subsystem.

With SQL Server databases, all database files are stored in the SCSI subsystem.

The configuration of hard disks and the database is also described in the SAP manual "R/3 Installation on Windows NT (MSCS)".

The cluster concept of SAP is described in the document "High Availability of R/3", which is available on the SAP Internet pages.


Microsoft Exchange

Microsoft Exchange can be integrated in MSCS with Exchange Server 5.5 Enterprise Edition.

First configure a virtual server with an IP address, network name and physical disk for the Exchange files with the MSCS Cluster Administrator.

Install Exchange on both cluster servers. Before installing Version 5.5 in the cluster, an error correction must be read in. This "Rollup Hot Fix" can be obtained via the Microsoft FTP server.

Then start the Exchange setup program on one of the two nodes. The setup program recognizes the cluster automatically and creates the necessary resources. The Exchange services are integrated with Generic Service resources.

Not all Exchange services can be integrated in the cluster. For example, while the mailbox services can run in the cluster, additional connector servers must be provided for the gateway services for MS Mail and Lotus Notes.

The Exchange client software must also be restarted in the event of a failover.

For additional information on the integration of Exchange in MSCS, see the Microsoft Internet pages and the installation manual "Clustering with Microsoft Exchange Server" on the Exchange CD.


Index

A
Adaptec 2944UW 8
Agents, ServerView 31
Alarm 35
Alarm Manager 36
Application Programming Interface 37
Applications 37

B
BIOS, SCSI controller 8
BIOS, system 4, 15

C
CAN bus 4, 19
CD-ROM drive 20
Client network 6
Cluster Administrator 26
Cluster server 4
Cluster API 37
Cluster View 33
Configurations 7

D
Domain 17
Dynamic Link Libraries 27, 37

E
Error 29
Exchange 42

F
Failover 27
Failover, applications 37
Fibre Channel 5, 12

G
GAM 22
GBIC 5, 12
Gigabit Interface Convertor 5, 12
Global Array Manager 22

H
Hardware 3
Hardware Compatibility List 7
HCL 7
Heartbeat 6


I
Installation, MSCS software 25
Installation, operating system 15, 17
Interconnect 6
Introduction 1

L
Load balancing 37

M
Media Interface Adapter 5, 12
MIA 5, 12
Microsoft Exchange 42
Microsoft SQL Server 39
MSCS applications 37
MSCS configurations 7
MSCS error situations 29
MSCS hardware 3
MSCS hardware, example configuration 15
MSCS software 25
Mylex Configuration Utility 16, 21
Mylex DAC960 20
Mylex DAC960SX 10, 12

N
Networks 6
Node 4
Notational conventions 2
Notes, Cluster Administrator 26

O
Oracle Failsafe 39

P
Partitions 17
Point-to-point 5, 12
PRIMERGY 3, 11
PRIMERGY 502DF/702DF 12
PRIMERGY 502DS/702DS 10, 11
PRIMERGY 560/760 8, 10
Private network 6
Public network 6
PXRC 8

Q
QLogic QLA2100 12
Quorum disk 29

R
R/3 40
RAID 16
RAIDFx 21


RemoteView 4, 20
Requirements, hardware 3
Requirements, operating system installation 15
Resources 27, 33, 37

S
SAP R/3 40
SCSI 4
Server Configuration Utility 16
ServerShield 40
ServerStart 15, 16
ServerView 31
ServerView Manager 32
Shared SCSI 4, 19
SQL Server 39
Symbios Logic 22802 10, 20
Symbols, explanation of 2
System drive 16, 21

T
Termination, SCSI 4, 9, 10, 15
Trap 35

V
Virtual server 37
Vixel Rapport 1000 12

W
Windows NT 15


Information on this document

On April 1, 2009, Fujitsu became the sole owner of Fujitsu Siemens Computers. This new subsidiary of Fujitsu has been renamed Fujitsu Technology Solutions.

This document from the document archive refers to a product version which was released a considerable time ago or which is no longer marketed.

Please note that all company references and copyrights in this document have been legally transferred to Fujitsu Technology Solutions.

Contact and support addresses will now be offered by Fujitsu Technology Solutions and have the format …@ts.fujitsu.com.

The Internet pages of Fujitsu Technology Solutions are available at http://ts.fujitsu.com/... and the user documentation at http://manuals.ts.fujitsu.com.

Copyright Fujitsu Technology Solutions, 2009
