Well, I had the same issue; there's no Offline option in Server 2003. But when I right-click the disk and choose Properties, under Device usage I can switch it from "Use this device (enable)" to "Do not use this device (disable)".
MPIO installed on Windows Server 2008 supports iSCSI, but I have not been able to find MPIO for Server 2003 R2. I am currently using HP MPIO to manage the Fibre Channel connections to my EVA, but I also need to be able to support multiple connections to my iSCSI storage.
Go to the Microsoft Download Center and download the current version of the iSCSI Software Initiator (a free download; search for "iscsi initiator"). Download the correct version for your server: there are separate packages for 32-bit and 64-bit machines.
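Once installed, the initiator can also be driven from the command line with the iscsicli tool that ships with it. A minimal sketch of adding a portal and logging in (the portal address and target IQN below are made-up examples; substitute your own):

C:\> iscsicli AddTargetPortal 192.168.1.50 3260
C:\> iscsicli ListTargets
C:\> iscsicli QLoginTarget iqn.1992-08.com.example:storage.target01

QLoginTarget performs a quick, non-persistent login; use PersistentLoginTarget instead if the session should survive a reboot.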
The 3.2 version of the company's open-source-based software adds additional support for Windows Server 2003 and XP, along with iSCSI support.

XenSource, the company started by the original developers of the Xen open-source virtualization project, is gearing up for the release of the latest version of its XenEnterprise product. After several months of beta testing, the Palo Alto, Calif., company will release the full version of XenEnterprise 3.2 on April 2.

Some of the most noticeable improvements in the 3.2 release revolve around the company's partnership with Microsoft. On July 18, 2006, the two companies announced a partnership that would provide interoperability between Xen-based Linux and Windows Server virtualization. XenSource went on to offer its first Windows product in December.

Although one of the more prominent of the virtualization providers, XenSource is not alone in developing open-source virtualization. On March 28, the OpenVZ project, based in Herndon, Va., announced its latest operating-system server virtualization software for the next stable Linux kernel, 2.6.20.

The latest version of XenEnterprise offers additional multiprocessor support for Windows 2003 and Windows XP guests, as well as new support for Windows 2000 virtual servers, said Gordon Mangione, senior vice president of products for XenSource. That additional Windows guest support, including support for Exchange, SQL Server and other applications, includes enhanced network performance, the ability for users to suspend or resume virtual machines within the environment, a maximum of 8GB of RAM for each Windows guest, and drivers with WHQL (Windows Hardware Quality Laboratories) certification. The software also supports up to four CPUs per Windows guest, Mangione said.

In addition, the virtualization product provides for iSCSI storage, an Internet protocol-based storage networking standard for linking data storage facilities. On March 5, Virtual Iron, which uses the open-source Xen hypervisor in its own virtualization product, announced that its latest technology would also support iSCSI. "We really see iSCSI as an emerging technology and we can see it replacing a lot of SAN," Mangione said. "Right now, we're seeing a lot of use of iSCSI at the department level."

The other updates to the software include new manageability and serviceability features that will make the software and its deployment in the data center easier, Mangione said. As with Virtual Iron, XenSource is also looking to deliver its product as a low-cost alternative to VMware, which owns the lion's share of the growing virtualization market. In addition to a free 30-day trial of XenEnterprise 3.2, the company is offering an annual subscription license for $488 per dual-socket server and a $750 perpetual license per dual-socket server.
This is a set of best practices to follow when installing a Windows 2003 guest on a Proxmox VE server. Right now it's a work in progress. Please feel free to add to it, even if just to pose a potential best practice.
vikas-node1> lun show -v /vol/iscsi_vol1/test_vol1/lassen_iscsi.lun
        /vol/iscsi_vol1/test_vol1/lassen_iscsi.lun   70.6m (74027520)   (r/w, online, mapped)
                Serial#: nBLskoNdXEWZ
                Share: none
                Space Reservation: enabled
                Multiprotocol Type: windows_gpt
                Maps: viaRPC.iqn.1991-05.com.microsoft:lassen.engineering.vicom.com=0
vikas-node1>
More App logs:
_____________________________
7/17/2012 1:50:42 PM
Failed to retrieve information required for connecting the LUN.
LUN Name = TEST01_LOGS
Storage Path = /vol/LUNA04_SQLPPRD_02/
Protocol Type = HTTPS
Storage System Name = LUNA04
Error code: The specified LUN '/vol/LUNA04_SQLPPRD_02/TEST01_LOGS' is already mapped to at least one initiator 'iqn.1991-05.com.microsoft:lusqlpprd01.xxxxxx.xxxxx.edu'. Try disconnect the corresponding LUN using SnapDrive on the host it is connected to, or from storage system console use 'LUN unmap' command, or unmap the LUN using Storage System View.
______________________
7/17/2012 9:35:04 AM
Network Adapter Event. Portal information has been updated after receiving network adapter arrival/removal event.

LUNA04> lun stats /vol/LUNA04_SQLPPRD_02/TEST01_LOGS
/vol/LUNA04_SQLPPRD_02/TEST01_LOGS (158 days, 13 hours, 59 minutes, 8 seconds)
    Read (kbytes)  Write (kbytes)  Read Ops   Write Ops
    274986917      2325019007      2228476    46249778
LUNA04> lun stats /vol/LUNA04_SQLPPRD_02/TEST01_DATA
/vol/LUNA04_SQLPPRD_02/TEST01_DATA (158 days, 13 hours, 59 minutes, 20 seconds)
    Read (kbytes)  Write (kbytes)  Read Ops   Write Ops
    4731869533     6060622399      130183024  71152885

Whatever connects the LUN to the server, whether the Microsoft iSCSI initiator or SnapDrive for Windows (SDW), seems to be at fault.
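If the stale mapping is the culprit, the error message's own suggestion can be followed from the storage system console. A minimal sketch, assuming Data ONTAP 7-Mode and SnapDrive's usual viaRPC igroup naming convention (verify the actual igroup name with lun show -m before unmapping):

LUNA04> lun show -m
(lists each LUN path together with the igroup it is mapped to and the LUN ID)
LUNA04> lun unmap /vol/LUNA04_SQLPPRD_02/TEST01_LOGS viaRPC.iqn.1991-05.com.microsoft:lusqlpprd01.xxxxxx.xxxxx.edu

After the unmap, retry the SnapDrive connect from the intended host.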
Abstract

Present day servers must support a variety of legacy I/O devices and protocols that are rarely used in day-to-day server operation, at a significant cost in board layout complexity, reliability, power consumption, heat dissipation, and ease of management. We present a design of an IP Only Server, which has a single, unified I/O interface: the IP network. All of the server's I/O is emulated and redirected over IP/Ethernet to a remote management station, except for the hard disks, which are accessed via iSCSI. The emulation is done in hardware, and is available from power-on to shutdown, including the pre-OS and post-OS (crash) stages, unlike alternative solutions such as VNC that can only function when the OS is operational. The server's software stack -- the BIOS, the OS, and applications -- will run without any modifications.

We have developed a prototype IP Only Server, based on a COTS FPGA running our embedded I/O emulation firmware. The remote station is a commodity PC running a VNC client for video, keyboard and mouse. Initial performance evaluations with unmodified BIOS and Windows and Linux operating systems indicate negligible network overhead and acceptable user experience. This prototype is the first attempt to create a diskless and headless x86 server that runs unmodified industry-standard software (BIOS, OS, and applications).

1 Introduction

Present day server systems support the same set of I/O devices, controllers, and protocols as desktop computers, including keyboard, mouse, video, IDE and/or SCSI hard disks, floppy, CD-ROM, USB, serial and parallel ports, and quite a few others. Most of these devices are not utilized during normal server operation. The data disks are frequently remote, and both Fibre Channel and iSCSI now support booting off remote disk devices, so directly attached hard disks are not necessary for operating system (OS) boot either. Removable media devices, such as floppies and CD-ROMs, are only used for installation of the OS and applications, and that can also be avoided with modern remote storage management systems.

Moreover, there are no users who work directly on the server, using keyboard, mouse, and display -- normal administrative tasks are usually performed over remote connections, at least while the server is operational. Remote management is done via protocols such as Secure Shell (SSH) and X for Linux/UNIX systems, Microsoft's Windows Terminal Services, and cross-platform protocols such as the Remote Framebuffer (RFB, see [6]), used by the popular Virtual Network Computing (VNC) remote display scheme. However, local console access is still required for some operations, including low-level BIOS configuration (the pre-OS environment) and dealing with failures, such as the Windows ``blue screen of death'' and Linux kernel panics (post-OS environments). The local console is usually provided either via the regular KVM (keyboard-video-mouse) interface or a serial line connection.

The legacy protocols and the associated hardware have non-negligible costs. The board must contain and support the multitude of controllers and the associated auxiliary electrical components each of them requires. This occupies a significant portion of the board real estate that could otherwise contain, say, an additional CPU or memory. The multitude and complexity of the legacy components also reduce the mean time between failures (MTBF).

We propose that future servers will only need CPUs, memory, a northbridge, and network interface cards (NICs).
All the legacy I/O that is done today, e.g., over PCI, will be done over a single, universal I/O link -- the ubiquitous IP network. All communication with storage devices, including boot, will be done over iSCSI, and console access will also be performed over the network. Protocols such as USB can also be emulated over IP [5], providing a variety of remote peripherals such as CD-ROM, printers, or floppy if they are needed.

Figure 1: Components of an IBM HS20 blade server.

To illustrate this point, Figure 1 identifies the various components of an x86 server (an IBM HS20 blade server, chosen for its unobstructed layout). The CPUs, the DRAM, the northbridge, the BIOS flash, and the network hardware are necessary, while the southbridge, the SCSI and IDE controllers, the graphics adapter, and the integrated legacy I/O chip (implementing the keyboard and mouse controllers, various timers, etc.) can be removed and their functionality emulated over the network.

This ``remoting'' of I/O can be achieved without any modifications at all to the applications, the OS, or the BIOS of the server if the protocol emulation is done in hardware. Substituting a single hardware component for all the legacy controllers, capturing all the bus transactions involving the legacy devices, remoting the transactions over the IP network, and performing the actual I/O at remote systems will be completely transparent to the BIOS and the OS, and thus to the applications. While many software-based alternatives exist for remoting I/O when the OS is up and running (see Section 2), doing the protocol emulation in hardware is essential for supporting the pre-OS (e.g., BIOS or bootloader) and post-OS environments.

We developed a research prototype of such an ``IP Only Server'' that uses IP for all of its I/O needs. We designed and implemented a legacy I/O emulator based on a COTS FPGA. The FPGA, connected to the host via the PCI bus, serves as the local keyboard, mouse, and VGA controller. All keyboard, mouse, and VGA traffic reaches the FPGA and is sent to a remote station over IP. The user is able to perform all the management operations -- throughout the lifetime of the server, i.e., during boot, BIOS, OS initialization, normal operation, and post-OS stages -- from the remote station. No changes were needed to the software running on the host. In particular, neither the BIOS nor the OS were modified.

The performance of the prototype is acceptable for the usual server management tasks. The only significant network load may come from the remote iSCSI storage; the network utilization due to remote management is small, and the user experience is acceptable.

2 Related Work

The concept of allowing the user to interact with a remote computer over a network has a long history. Today there exists a wide variety of thin clients, e.g., SSH/X11, VNC, SunRay, Citrix Metaframe, and Windows Terminal Server. Our approach differs from all these solutions in two major ways: we allow remoting of legacy I/O over the network (a) without any host software modifications, and (b) from the moment the computer has powered on until it has powered off, including when no operating system is present (the BIOS stage as well as OS crashes).

The Super Dense Server research prototype [7] presented servers with no legacy I/O support. It used Console Over Ethernet, which is OS dependent, supported text mode only, ran Linux, and used LinuxBIOS [4] rather than a conventional BIOS. In contrast, the IP Only Server runs unmodified OSes and BIOSes and supports graphical VGA modes as well as text-based modes.
Attempting sweeping changes in the BIOS (e.g., switching to LinuxBIOS) while being required to support a wide variety of boards and many different software stacks would adversely affect reliability, availability, serviceability, and system testing.

``USB over IP'' [5] is used as a peripheral bus extension over an IP network. USB/IP requires a special OS-specific driver and thus is only available when the OS is operational, while the IP Only Server does not require OS modifications -- it listens on the PCI bus for transactions using a hardware component.

Baratto et al. presented THINC [3], a remote display architecture that intercepts application display commands at the OS's device driver interface. This approach looks promising for remoting the display while the OS is running, but it handles neither pre-OS nor exception (post-OS) conditions. THINC could be used together with modifications to the system's BIOS to build a pure software IP Only Server. However, by using specialized hardware the system can be remoted at all times.

IBM's JS20 PowerPC-based blades do not contain any video/mouse/keyboard components. Instead, Serial-over-LAN (tunneling text-only serial console messages over UDP datagrams) is used. KVM-over-IP products (e.g., Cyclades AlterPath KVM/net) provide an easy way to remote all operating environments. However, such products carry a non-negligible price tag, and servers using KVM over IP still require a full complement of hardware components.

3 Design

The IP Only Server was designed with several guidelines in mind:

1. The server must run unmodified software, including the OS and BIOS.
2. Remote access is needed at all times, from the BIOS boot stage through the OS's lifetime, and even in post-OS environments such as the ``blue screen of death'' or a Linux kernel oops. This does not preclude a more effective remote access method when the OS is operational, such as X Windows or Windows Terminal Server.
3. The server should have the minimal amount of local state required for disconnected operation. The hard drives should be remoted over IP, including boot.
4. The IP Only Server must be able to work even when no remote management station is connected, or when one has been connected and then disconnected. Obviously, the remote storage that is necessary for boot and normal operation of the server must be available at all times.
5. Text (console) and graphical mode support must be provided. There is no requirement to provide more than plain VGA mode support -- the IP Only Server is not aimed at users who need accelerated graphics.
6. The remote management station should not require a custom or proprietary client; e.g., the KVM-over-IP (keyboard/video/mouse over IP) protocol should be based on open standards.
7. A single remote station should be able to control multiple IP Only Servers concurrently.
The IP Only Server can be based on any standard architecture (such as x86). The CPU, memory, northbridge, and BIOS flash are not modified. The server will include at least one network interface. Other peripheral components will not be needed: their functionality can be emulated by dedicated logic that presents the legacy I/O interfaces to the host (via a PCI bus) and remotes them over an IP-based protocol to the remote station. The logic may be implemented as an ASIC or an FPGA, depending on the cost/programmability trade-off.

The IP Only Server will not include any local disks. Instead, it boots from a remote boot device, such as an iSCSI disk via iBOOT [1] or PXE. Alternatively, disk access can be remoted like the other legacy I/O protocols. A mixture of the two approaches is possible in principle: the emulation hardware can include an implementation of a boot-capable iSCSI initiator. This would leave the BIOS flash as the only local state.

For the prototype described below we designed an FPGA that presented itself as a VGA/keyboard/mouse device. The server's BIOS and OS accessed the FPGA using their standard drivers. The FPGA received all host accesses as PCI transactions and handled them appropriately. We experimented with two different approaches to remoting these PCI transactions to a remote station. The first approach, the Internet PCI Remote Protocol (iPRP), was essentially ``PCI over IP'': PCI transactions were wrapped in IP packets, sent to the remote station, and processed there. Responses were sent back as IP packets as well and passed to the local PCI bus. Clearly, iPRP does not satisfy design guideline 4 above, and was used mainly as an intermediate debugging tool. It is described in Section 4.1.

The second approach, using the RFB protocol, is described in Section 4.2. In this scheme the emulation FPGA translates keyboard, mouse, and video PCI transactions into the high-level RFB protocol, which allows using any VNC client (and any OS) on the remote station. PCI transactions are processed locally by the FPGA, while the display and user inputs are handled by the remote station.

Figure 2: Comparison of iPRP and RFB implementations of an IP Only Server.

The difference between the two schemes is highlighted by Figure 2: with iPRP the FPGA is essentially transparent, and the emulation is done in the remote station; with RFB the remote station exists for the user interface only, the emulation is done in the FPGA, and some PCI transactions (reads) are handled locally inside the FPGA. Thus, in the RFB design, if the user is not interested in interacting with the server, the server can operate without a remote station. On the other hand, a user can open VNC sessions against multiple IP Only Servers simultaneously. Clearly, this approach supports design guidelines 4 through 7.

4 Implementation

For the prototype we used a Dini Group DN3000k10s FPGA evaluation board with a Metanetworks Inc. MP1000TX Ethernet prototyping plug-in board. The FPGA is a Xilinx Virtex-II xc2v8000. The design ran at 50 MHz, used 378 KB of local memory, and 8400 logic slices (equivalent to about 1M gates).

The FPGA firmware was divided into four main modules: a PCI Interface Module, a BIOS expansion ROM, a Network Interface Module, and a Transaction Processing Module. The PCI Interface Module responds to the keyboard controller addresses, the VGA controller registers, and VGA memory, and implements an expansion ROM base address register. The device identifies itself as a PCI-based VGA adapter.
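To make the self-identification concrete, the following sketch (ours, not the paper's; the vendor and device IDs are placeholders) shows the kind of PCI configuration-space identity the PCI Interface Module would present. The only parts essential to the scheme are the VGA class code and the expansion ROM base address register mentioned above.

#include <stdint.h>

/* Hypothetical configuration-space identity for the emulated device.
 * Only the class code and the expansion ROM BAR matter to the scheme;
 * the numeric IDs are placeholders, not the prototype's real values. */
struct pci_identity {
    uint16_t vendor_id;     /* placeholder                        */
    uint16_t device_id;     /* placeholder                        */
    uint8_t  class_code;    /* 0x03: display controller           */
    uint8_t  subclass;      /* 0x00: VGA-compatible controller    */
    uint8_t  prog_if;       /* 0x00: VGA registers/memory layout  */
    uint32_t rom_bar;       /* 32 KB expansion ROM (VGA BIOS)     */
};

static const struct pci_identity vga_dev = {
    .vendor_id  = 0x1234,   /* placeholder */
    .device_id  = 0x0001,   /* placeholder */
    .class_code = 0x03,
    .subclass   = 0x00,
    .prog_if    = 0x00,
    .rom_bar    = 0,        /* assigned by the BIOS at enumeration */
};

/* A standard BIOS that finds a class 0x03/0x00 device adopts it as the
 * default VGA adapter and routes legacy VGA accesses (I/O ports
 * 0x3B0-0x3DF, memory 0xA0000-0xBFFFF) to it. */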
Upon discovery of the VGA add-in card, standard BIOSes consider it the default VGA adapter and configure the chipset in such a way that all VGA I/O and memory transactions are routed to the device. Having the keyboard controller I/O addresses routed to the device is trickier, and requires additional northbridge and I/O bridge configuration done by the BIOS. The expansion ROM is 32 KB in size and contains the VGA BIOS routines. It was based on the GPL VGA BIOS included in the Bochs x86 emulator.

The network interface consists of a Fast Ethernet MAC and a DMA engine. The amount of management traffic is small, but it may be beneficial to use a separate port for other reasons -- for instance, if the main interface is down or saturated (e.g., due to heavy load or to a denial-of-service attack). The iSCSI storage interface may also be separate, for performance and/or security reasons. In our prototype implementation we used the FPGA's Ethernet interface for KVM emulation, and the server's regular Ethernet interface for the other traffic, including iSCSI.

The Transaction Processing firmware consists of a single loop that gets PCI transactions from the PCI interface and either handles them locally or wraps them in the appropriate network protocol (iPRP or RFB -- see Sections 4.1 and 4.2 below). To improve network utilization, subsequent write transactions are coalesced into bigger network packets, while read transactions are either handled locally (RFB version) or sent to the network immediately (iPRP version). In the iPRP version, read responses are forwarded to the PCI Interface Module, which handles the PCI read response. The iPRP version also supports remotely generated keyboard interrupt requests (IRQ1). Since this interrupt is reserved for the local keyboard controller, we used the chipset's OpenHCI USB legacy keyboard emulation feature to emulate it.

4.1 iPRP -- Internet PCI Remote Protocol

The iPRP protocol uses UDP to emulate memory- and I/O-space PCI transactions over an IP network. The protocol was designed for ease of implementation, initial debugging, and bring-up of the hardware and firmware. It is neither very efficient nor robust (a network problem or a remote-side failure will cause the host to crash or just lock up).

A command is defined as a single I/O command, such as a memory read, an I/O-space write, or an acknowledgment. Multiple commands may be packed into a single Ethernet frame. A message is one or more commands, mapped into a single Ethernet PDU. In a multi-command message only the last command may be a read-type command, since the PCI-based system cannot proceed until the read data arrives back. Each command has an attached sequence number (SN). The protocol uses the `go-back-N' scheme, i.e., there may be up to N unacknowledged commands in flight. An ACK message acknowledges all commands with sequence numbers less than the ACK's SN; in the case of a read command the ACK also contains the returned data. A message transmission is triggered by a read, by a timeout, or when the maximum message size is reached. A sketch of this framing appears below.

The remote station software was based on the Bochs open-source x86 emulator. We extracted the relevant PCI device emulation code and fed it the PCI transactions received in the iPRP payload as input. For host PCI reads, a return packet with the response is sent back to the FPGA.
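The sketch below illustrates the framing rules just described. The field names and widths are our guesses (the paper does not give the exact layout); the essential points are the per-command sequence number, the pack-multiple-commands-per-frame rule, and the constraint that only the last command in a message may be a read.

#include <stdint.h>

/* Hypothetical iPRP wire format; field names and widths are guesses. */
enum iprp_cmd_type {
    IPRP_MEM_READ,   /* PCI memory-space read                 */
    IPRP_MEM_WRITE,  /* PCI memory-space write                */
    IPRP_IO_READ,    /* PCI I/O-space read                    */
    IPRP_IO_WRITE,   /* PCI I/O-space write                   */
    IPRP_ACK         /* acknowledges all SNs below the ACK SN */
};

struct iprp_cmd {
    uint8_t  type;   /* enum iprp_cmd_type                        */
    uint32_t sn;     /* per-command sequence number (go-back-N)   */
    uint32_t addr;   /* PCI address                               */
    uint32_t data;   /* write data, or read data carried in ACKs  */
};

#define IPRP_MAX_CMDS 32   /* go-back-N window: at most N unacked commands */

struct iprp_msg {                 /* one message = one Ethernet/UDP PDU */
    uint16_t ncmds;
    struct iprp_cmd cmd[IPRP_MAX_CMDS];
};

/* Append a command; flush (transmit) when we hit a read, because the
 * PCI bus stalls until the read data returns, or when the message is
 * full. A timeout-driven flush (not shown) also bounds latency. */
static void iprp_append(struct iprp_msg *m, const struct iprp_cmd *c,
                        void (*flush)(struct iprp_msg *))
{
    m->cmd[m->ncmds++] = *c;
    if (c->type == IPRP_MEM_READ || c->type == IPRP_IO_READ ||
        m->ncmds == IPRP_MAX_CMDS) {
        flush(m);        /* reads must be last in a message */
        m->ncmds = 0;
    }
}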
4.2 RFB -- Remote Framebuffer

To overcome the shortcomings of the iPRP version, especially the fact that the server cannot operate without the remote station, we have to emulate the device controllers in the FPGA firmware rather than in the remote station software. As in the iPRP version, we based the implementation on code from the Bochs emulator.

To transfer the keyboard, mouse, and video events between the FPGA and the remote station we chose the Remote Framebuffer (RFB) protocol [6]. To this end, we implemented a VNC server in the FPGA firmware. We chose RFB because it is a well-known and widely used open protocol with numerous open-source clients.

Our FPGA platform was limited in both space (350 KB of memory total) and speed (50 MHz CPU, no memory caches). Accordingly, we started with a straightforward software emulation of a hardware VGA controller, based on the Bochs VGA controller, and optimized it in both space and time to fit our FPGA environment. For instance, by removing support for VGA modes that were not used by either Linux or Windows, we managed to reduce the VGA framebuffer to 128 KB.

Since the RFB protocol is TCP/IP based, we also added a TCP/IP stack to the FPGA firmware. Due to the limited memory and processing resources of the FPGA, we had to implement a custom embedded TCP/IP stack. The stack is minimalistic and is specifically designed to fit our firmware environment, but it implements a complete TCP state machine and is streamlined: it has a very low memory footprint and avoids copying data as much as possible.

Like the iPRP version, the RFB-based FPGA firmware is based on a single looping execution thread. The firmware receives the host's PCI transactions from the PCI interface. Host PCI reads are answered immediately, while PCI writes update the local device state machines. Every so often, the framebuffer updates are sent to the remote station to be displayed. The decision of when to update the remote station is crucial for establishing reasonable performance with our constrained FPGA; we developed heuristics that performed fairly well (cf. Section 5).
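The shape of such an update heuristic can be illustrated with a short sketch. This is not the paper's firmware; it is a guess at the ``every so often'' decision: PCI writes mark dirty regions of the framebuffer instead of triggering immediate sends, and a periodic pass emits coalesced RFB update rectangles once enough has changed or enough time has passed. The tile size, thresholds, and one-byte-per-pixel layout are invented for illustration (the real controller emulates planar VGA modes).

#include <stdint.h>

#define FB_W        640    /* plain VGA resolution, for illustration  */
#define FB_H        480
#define TILE        16     /* invented tile granularity               */
#define TILES_X     (FB_W / TILE)
#define TILES_Y     (FB_H / TILE)
#define MIN_DIRTY   8      /* invented: send once 8 tiles are dirty   */
#define MAX_AGE_MS  100    /* invented: or once 100 ms have elapsed   */

static uint8_t  dirty[TILES_Y][TILES_X];
static uint32_t last_update_ms;

/* Called from the PCI write path: a host write to framebuffer offset
 * `off` marks the covering tile dirty instead of sending immediately. */
void fb_write(uint32_t off, uint8_t val, uint8_t *fb)
{
    fb[off] = val;
    uint32_t x = (off % FB_W) / TILE, y = (off / FB_W) / TILE;
    dirty[y][x] = 1;
}

/* Called from the main firmware loop: decide whether to push an RFB
 * framebuffer update, coalescing dirty tiles into rectangles. */
void fb_maybe_update(uint32_t now_ms,
                     void (*send_rect)(int x, int y, int w, int h))
{
    int ndirty = 0;
    for (int y = 0; y < TILES_Y; y++)
        for (int x = 0; x < TILES_X; x++)
            ndirty += dirty[y][x];
    if (ndirty == 0)
        return;
    if (ndirty < MIN_DIRTY && now_ms - last_update_ms < MAX_AGE_MS)
        return;               /* too little changed, too recently */

    /* Simplest coalescing: one rectangle per horizontal run of dirty tiles. */
    for (int y = 0; y < TILES_Y; y++) {
        for (int x = 0; x < TILES_X; x++) {
            if (!dirty[y][x])
                continue;
            int x0 = x;
            while (x < TILES_X && dirty[y][x])
                dirty[y][x++] = 0;
            send_rect(x0 * TILE, y * TILE, (x - x0) * TILE, TILE);
        }
    }
    last_update_ms = now_ms;
}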
5 Performance Evaluation and Analysis

The most important performance metric for evaluating the IP Only Server is user experience, which is notoriously hard to quantify. In order to approximate the user's experience, we performed several measurements. All tests were performed on two identical IBM x235 servers containing the PCI-based FPGA evaluation board with a 100 Mb/s Ethernet network interface. The remote station software ran on two R40 ThinkPad laptops, each with a 1.4 GHz Pentium M CPU and 512 MB RAM. The servers booted either Windows 2003 Server or Red Hat Enterprise Linux 3.0.

The iSCSI connection was provided through a separate network interface, and the FPGA was not involved in communication with the iSCSI target at all. The performance of iSCSI storage has been studied independently, and we were primarily interested in the performance of our FPGA-based KVM emulation. Therefore, for the purpose of these measurements we used local disks, to achieve a clean experimental environment.

First, we measured the wall-clock time of a server boot for each of three scenarios: a server with native legacy I/O peripherals, an IP Only Server with an FPGA using iPRP, and an IP Only Server with an FPGA using RFB. In each scenario, we measured the time from power-on until a Linux console login prompt appeared, and from power-on until the Windows 2003 Server ``Welcome to Windows'' dialog showed up.

As depicted in Table 1, an unmodified server booted to a Linux login prompt in 238 seconds, while a server using the RFB version of the FPGA booted in 330 seconds. This is a 38% slowdown (for the RFB version), which is acceptable for the very first, unoptimized prototype. Additionally, an unmodified server booted to the Windows 2003 Server ``Welcome to Windows'' dialog in 186 seconds, and the server with the RFB FPGA booted in 289 seconds. This is slightly worse, a 55% slowdown, but again, it is acceptable.