This document describes the support status and in particular the security support status of the Xen branch within which you find it.
See the bottom of the file for the definitions of the support status levels etc.
- Release Notes: RN
2.1 Kconfig
EXPERT and DEBUG Kconfig options are not security supported. Other Kconfig options are supported, if the related features are marked as supported in this document.
2.2 Host Architecture
2.2.1 x86-64
2.2.2 ARM v7 + Virtualization Extensions
2.2.3 ARM v8
2.3 Host hardware support
2.3.1 Physical CPU Hotplug
2.3.2 Physical Memory Hotplug
2.3.3 Host ACPI (via Domain 0)
2.3.4 x86/Intel Platform QoS Technologies
2.3.5 IOMMU
2.3.6 ARM/GICv3 ITS
Extension to the GICv3 interrupt controller to support MSI.
2.4 Guest Type
2.4.1 x86/PV
Traditional Xen PV guest
No hardware requirements
2.4.2 x86/HVM
Fully virtualised guest using hardware virtualisation extensions
Requires hardware virtualisation support (Intel VMX / AMD SVM)
2.4.3 x86/PVH
PVH is a next-generation paravirtualized mode designed to take advantage of hardware virtualization support when possible. During development this was sometimes called HVMLite or PVHv2.
Requires hardware virtualisation support (Intel VMX / AMD SVM).
Dom0 support requires an IOMMU (Intel VT-d / AMD IOMMU).
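For illustration, a minimal xl configuration for a PVH guest might look like the sketch below; the name, kernel path and disk volume are placeholders, not values taken from this document.

    # Hypothetical minimal PVH domU config (all paths and names are placeholders)
    name    = "pvh-guest"
    type    = "pvh"
    kernel  = "/boot/vmlinuz-guest"        # a PVH-capable kernel
    ramdisk = "/boot/initrd.img-guest"
    memory  = 1024
    vcpus   = 2
    disk    = [ '/dev/vg0/pvh-guest,raw,xvda,rw' ]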
2.4.4 ARM
ARM only has one guest type at the moment
2.5 Toolstack
2.5.1 xl
2.5.2 Direct-boot kernel image format
Format which the toolstack accepts for direct-boot kernels
2.5.3 Dom0 init support for xl
2.5.4 JSON output support for xl
Output of information in machine-parseable JSON format
2.5.5 Open vSwitch integration for xl
2.5.6 Virtual cpu hotplug
2.5.7 QEMU backend hotplugging for xl
2.6 Toolstack/3rd party
2.6.1 libvirt driver for xl
2.7 Debugging, analysis, and crash post-mortem
2.7.1 Host serial console
2.7.2 Hypervisor ‘debug keys’
These are functions triggered either from the host serial console, or via the xl ‘debug-keys’ command, which cause Xen to dump various hypervisor state to the console.
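For example (a sketch using two of the commonly documented keys; output goes to the Xen console buffer, readable with xl dmesg):

    # 'h' prints the list of available debug keys to the Xen console
    xl debug-keys h
    xl dmesg | tail -n 40

    # 'q' dumps domain and vCPU information
    xl debug-keys q
    xl dmesg | tail -n 80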
2.7.3 Hypervisor synchronous console output (sync_console)
Xen command-line flag to force synchronous console output.
Useful for debugging, but not suitable for production environments due to incurred overhead.
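As a sketch, on a Debian-style system the flag is appended to the Xen (not Linux) command line, for example in /etc/default/grub; the serial-console settings shown are illustrative assumptions.

    # /etc/default/grub -- append sync_console to the hypervisor command line
    GRUB_CMDLINE_XEN_DEFAULT="console=com1 com1=115200,8n1 sync_console"
    # regenerate the grub configuration, then reboot
    update-grub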
2.7.4 gdbsx
Debugger to debug ELF guests
2.7.5 Soft-reset for PV guests
Soft-reset allows a new kernel to start ‘from scratch’ with a fresh VM state, but with all the memory from the previous state of the VM intact. This is primarily designed to allow “crash kernels”, which can do core dumps of memory to help with debugging in the event of a crash.
2.7.6 xentrace
Tool to capture Xen trace buffer data
2.7.7 gcov
Export hypervisor coverage data suitable for analysis by gcov or lcov.
2.8 Memory Management
2.8.1 Dynamic memory control
Allows a guest to add or remove memory after boot-time. This is typically done by a guest kernel agent known as a “balloon driver”.
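For example (a sketch; 'guest1' is a hypothetical domain name), the balloon target can be changed at runtime with xl:

    # Ask the balloon driver in domain "guest1" to target 2 GiB
    xl mem-set guest1 2048m
    # Raise the ceiling the domain may balloon up to (cannot exceed its configured maxmem)
    xl mem-max guest1 4096m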
2.8.2 Populate-on-demand memory
This is a mechanism that allows normal operating systems with only a balloon driver to boot with memory < maxmem.
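A guest boots in populate-on-demand mode whenever its boot memory is set below maxmem, for example with illustrative values like these in the domU config:

    # Boot with 1 GiB populated but a 4 GiB maximum; the guest's balloon
    # driver later adjusts how much of maxmem is actually populated
    memory = 1024
    maxmem = 4096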
2.8.3 Memory Sharing
Allow sharing of identical pages between guests
2.8.4 Memory Paging
Allow pages belonging to guests to be paged to disk
2.8.5 Transcendent Memory
Transcendent Memory (tmem) allows the creation of hypervisor memory pools which guests can use to store memory rather than caching in their own memory or swapping to disk. Having these in the hypervisor can allow more efficient aggregate use of memory across VMs.
2.8.6 Alternative p2m
Alternative p2m (altp2m) allows external monitoring of guest memory by maintaining multiple physical to machine (p2m) memory mappings.
2.9 Resource Management
2.9.1 CPU Pools
Groups physical cpus into distinct groups called “cpupools”, with each pool having the capability of using different schedulers and scheduling properties.
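A minimal sketch of creating and using a second pool, assuming pCPUs 4-7 can be freed from the default pool and 'guest1' is a hypothetical domain; the config keys follow the xlcpupool.cfg format.

    # /etc/xen/pool-credit2.cfg (hypothetical file):
    #   name  = "pool-credit2"
    #   sched = "credit2"
    #   cpus  = ["4", "5", "6", "7"]

    for c in 4 5 6 7; do xl cpupool-cpu-remove Pool-0 $c; done   # free the pCPUs
    xl cpupool-create /etc/xen/pool-credit2.cfg                  # create the new pool
    xl cpupool-migrate guest1 pool-credit2                       # move a domain into it
    xl cpupool-list                                              # verify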
2.9.2 Credit Scheduler
A weighted proportional fair share virtual CPU scheduler. This is the default scheduler.
2.9.3 Credit2 Scheduler
A general purpose scheduler for Xen, designed with particular focus on fairness, responsiveness, and scalability
2.9.4 RTDS based Scheduler
A soft real-time CPU scheduler built to provide guaranteed CPU capacity to guest VMs on SMP hosts
2.9.5 ARINC653 Scheduler
A periodically repeating fixed timeslice scheduler.
Currently only single-vcpu domains are supported.
2.9.6 Null Scheduler
A very simple, very static scheduling policy that always schedules the same vCPU(s) on the same pCPU(s). It is designed for maximum determinism and minimum overhead on embedded platforms.
2.9.7 NUMA scheduler affinity
Enables NUMA aware scheduling in Xen
2.10 Scalability
2.10.1 Super page support
NB that this refers to the ability of guests to have higher-level page table entries point directly to memory, improving TLB performance. On ARM, and on x86 in HAP mode, the guest has whatever support is enabled by the hardware.
This feature is independent of the ARM “page granularity” feature (see below).
On x86 in shadow mode, only 2MiB (L2) superpages are available; furthermore, they do not have the performance characteristics of hardware superpages.
2.10.2 x86/PVHVM
This is a useful label for a set of hypervisor features which add paravirtualized functionality to HVM guests for improved performance and scalability. This includes exposing event channels to HVM guests.
2.11 High Availability and Fault Tolerance
2.11.1 Remus Fault Tolerance
2.11.2 COLO Manager
2.11.3 x86/vMCE
Forward Machine Check Exceptions to appropriate guests
2.12 Virtual driver support, guest side
2.12.1 Blkfront
Guest-side driver capable of speaking the Xen PV block protocol
2.12.2 Netfront
Guest-side driver capable of speaking the Xen PV networking protocol
2.12.3 PV Framebuffer (frontend)
Guest-side driver capable of speaking the Xen PV Framebuffer protocol
2.12.4 PV Console (frontend)
Guest-side driver capable of speaking the Xen PV console protocol
2.12.5 PV keyboard (frontend)
Guest-side driver capable of speaking the Xen PV keyboard protocol. Note that the “keyboard protocol” includes mouse / pointer support as well.
2.12.6 PV USB (frontend)
2.12.7 PV SCSI protocol (frontend)
NB that while the PV SCSI frontend is in Linux and tested regularly, there is currently no xl support.
2.12.8 PV TPM (frontend)
Guest-side driver capable of speaking the Xen PV TPM protocol
2.12.9 PV 9pfs frontend
Guest-side driver capable of speaking the Xen 9pfs protocol
2.12.10 PVCalls (frontend)
Guest-side driver capable of making pv system calls
2.13 Virtual device support, host side
For host-side virtual device support, “Supported” and “Tech preview” include xl/libxl support unless otherwise noted.
2.13.1 Blkback
Host-side implementations of the Xen PV block protocol.
Backends only support raw format unless otherwise specified.
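For illustration (paths and device names are hypothetical), a raw block device is served by blkback, while a non-raw image such as qcow2 is handled by the qdisk (QEMU) backend instead:

    # xl disk configuration fragment
    disk = [ '/dev/vg0/guest1,raw,xvda,rw',                           # raw LVM volume -> blkback
             '/var/lib/xen/images/guest1-data.qcow2,qcow2,xvdb,rw' ]  # qcow2 image    -> qdisk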
2.13.2 Netback
Host-side implementations of Xen PV network protocol
2.13.3 PV Framebuffer (backend)
Host-side implementation of the Xen PV framebuffer protocol
2.13.4 PV Console (xenconsoled)
Host-side implementation of the Xen PV console protocol
2.13.5 PV keyboard (backend)
Host-side implementation of the Xen PV keyboard protocol. Note that the “keyboard protocol” includes mouse / pointer support as well.
2.13.6 PV USB (backend)
Host-side implementation of the Xen PV USB protocol
2.13.7 PV SCSI protocol (backend)
NB that while the PV SCSI backend is in Linux and tested regularly, there is currently no xl support.
2.13.8 PV TPM (backend)
2.13.9 PV 9pfs (backend)
2.13.10 PVCalls (backend)
PVCalls backend has been checked into Linux, but has no xl support.
2.13.11 Online resize of virtual disks
2.14 Security
2.14.1 Driver Domains
“Driver domains” means allowing non-Domain 0 domains with access to physical devices to act as back-ends.
See the appropriate “Device Passthrough” section for more information about security support.
2.14.2 Device Model Stub Domains
Vulnerabilities of a device model stub domain to a hostile driver domain (either compromised or untrusted) are excluded from security support.
2.14.3 Device Model Deprivileging
This means adding extra restrictions to a device model in order to prevent a compromised device model from attacking the rest of the domain it’s running in (normally dom0).
“Tech preview with limited support” means we will not issue XSAs for the additional functionality provided by the feature; but we will issue XSAs in the event that enabling this feature opens up a security hole that would not be present if the feature were disabled.
For example, while this is classified as tech preview, a bug in libxl which failed to change the user ID of QEMU would not receive an XSA, since without this feature the user ID wouldn’t be changed. But a change which made it possible for a compromised guest to read arbitrary files on the host filesystem without compromising QEMU would be issued an XSA, since that does weaken security.
2.14.4 KCONFIG Expert
2.14.5 Live Patching
Compile time disabled for ARM by default.
2.14.6 Virtual Machine Introspection
2.14.7 XSM & FLASK
Compile time disabled by default.
Also note that using XSM to delegate various domain control hypercalls to particular other domains, rather than only permitting use by dom0, is also specifically excluded from security support for many hypercalls. Please see XSA-77 for more details.
2.14.8 FLASK default policy
The default policy includes FLASK labels and roles for a “typical” Xen-based system with dom0, driver domains, stub domains, domUs, and so on.
2.15 Virtual Hardware, Hypervisor
2.15.1 x86/Nested PV
This means running a Xen hypervisor inside an HVM domain on a Xen system, with support for PV L2 guests only (i.e., hardware virtualization extensions not provided to the guest).
This works, but has performance limitations because the L1 dom0 can only access emulated L1 devices.
Xen may also run inside other hypervisors (KVM, Hyper-V, VMWare), but nobody has reported on performance.
2.15.2 x86/Nested HVM
This means providing hardware virtualization support to guest VMs allowing, for instance, a nested Xen to support both PV and HVM guests. It also implies support for other hypervisors, such as KVM, Hyper-V, Bromium, and so on as guests.
2.15.3 vPMU
Virtual Performance Management Unit for HVM guests
Disabled by default (enable with hypervisor command line option). This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
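A sketch of enabling it on a Debian-style system, keeping the security caveat above in mind; the grub variable assumes the distribution's usual Xen integration.

    # /etc/default/grub -- vPMU is off by default; this turns it on
    GRUB_CMDLINE_XEN_DEFAULT="vpmu=on"
    # regenerate the grub configuration, then reboot
    update-grub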
2.15.4 Argo: Inter-domain message delivery by hypercall
2.15.5 x86/PCI Device Passthrough
Only systems using IOMMUs are supported.
Not compatible with migration, populate-on-demand, altp2m, introspection, memory sharing, or memory paging.
Because of hardware limitations (affecting any operating system or hypervisor), it is generally not safe to use this feature to expose a physical device to completely untrusted guests. However, this feature can still confer significant security benefit when used to remove drivers and backends from domain 0 (i.e., Driver Domains).
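As a hedged example (the PCI BDF and domain name are placeholders), a device is first made assignable in dom0 and then either listed in the guest config or hot-plugged with xl:

    # Detach the device from its dom0 driver and mark it assignable
    xl pci-assignable-add 0000:01:00.0

    # Either assign it at boot via the domU config file:
    #   pci = [ '0000:01:00.0' ]
    # or hot-plug it into a running (hypothetical) domain:
    xl pci-attach guest1 0000:01:00.0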
2.15.6 x86/Multiple IOREQ servers
An IOREQ server provides emulated devices to HVM and PVH guests. QEMU is normally the only IOREQ server, but Xen has support for multiple IOREQ servers. This allows for custom or proprietary device emulators to be used in addition to QEMU.
2.15.7 ARM/Non-PCI device passthrough
Note that this still requires an IOMMU that covers the DMA of the device to be passed through.
2.15.8 ARM: 16K and 64K page granularity in guests
No support for QEMU backends in a 16K or 64K domain.
2.15.9 ARM: Guest Device Tree support
2.15.10 ARM: Guest ACPI support
2.16 Virtual Hardware, QEMU
This section describes supported devices available in HVM mode using a qemu devicemodel (the default).
Note that other devices are available but not security supported.
2.16.1 x86/Emulated platform devices (QEMU):
2.16.2 x86/Emulated network (QEMU):
2.16.3 x86/Emulated storage (QEMU):
See the section Blkback for image formats supported by QEMU.
2.16.4 x86/Emulated graphics (QEMU):
2.16.5 x86/Emulated audio (QEMU):
2.16.6 x86/Emulated input (QEMU):
2.16.7 x86/Emulated serial card (QEMU):
2.16.8 x86/Host USB passthrough (QEMU):
2.16.9 qemu-xen-traditional
The Xen Project provides an old version of QEMU with modifications which enable its use as a device model stub domain. The old version is normally selected by default only in a stub dm configuration, but it can be requested explicitly in other configurations, for example in xl with device_model_version = 'qemu-xen-traditional'.
qemu-xen-traditional is security supported only for those available devices which are supported for mainstream QEMU (see above), with trusted driver domains (see Device Model Stub Domains).
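As a sketch, either of the following xl config fragments ends up using the traditional device model: the first selects it explicitly, the second requests a stub device-model domain (which uses it by default).

    # Explicit selection
    type = "hvm"
    device_model_version = 'qemu-xen-traditional'

    # Stub device-model domain (defaults to the traditional qemu)
    type = "hvm"
    device_model_stubdomain_override = 1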
2.17 Virtual Firmware
2.17.1 x86/HVM iPXE
Booting a guest via PXE.
PXE inherently places full trust of the guest in the network, and so should only be used when the guest network is under the same administrative control as the guest itself.
2.17.2 x86/HVM BIOS
Booting a guest via guest BIOS firmware
2.17.3 x86/HVM OVMF
OVMF firmware implements the UEFI boot protocol.
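For example, an HVM guest can be pointed at OVMF instead of the default BIOS with a one-line config change (assuming Xen was built with OVMF support):

    # domU config fragment: boot via UEFI/OVMF
    type = "hvm"
    bios = "ovmf"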
This file contains prose, and machine-readable fragments. The data in a machine-readable fragment relate to the section and subsection in which it is found.
The file is in markdown format. The machine-readable fragments are markdown literals containing RFC-822-like (deb822-like) data.
In each case, descriptions which expand on the name of a feature as provided in the section heading, precede the Status indications. Any paragraphs which follow the Status indication are caveats or qualifications of the information provided in Status fields.
3.1 Keys found in the Feature Support subsections
3.1.1 Status
This gives the overall status of the feature, including security support status, functional completeness, etc. Refer to the detailed definitions below.
If support differs based on implementation (for instance, x86 / ARM, Linux / QEMU / FreeBSD), one line for each set of implementations will be listed.
3.2 Definition of Status labels
Each Status value corresponds to levels of security support, testing, stability, etc., as follows:
3.2.1 Experimental
3.2.2 Tech Preview
3.2.3 Supported
3.2.4 Deprecated
All of these may appear in modified form. There are several interfaces, for instance, which are officially declared as not stable; in such a case this feature may be described as “Stable / Interface not stable”.
3.3 Definition of the status label interpretation tags
3.3.1 Functionally complete
Does it behave like a fully functional feature? Does it work on all expected platforms, or does it only work for a very specific sub-case? Does it have a sensible UI, or do you have to have a deep understanding of the internals to get it to work properly?
3.3.2 Functional stability
What is the risk of it exhibiting bugs?
General answers to the above:
Here be dragons
Pretty likely to still crash / fail to work. Not recommended unless you like life on the bleeding edge.
Quirky
Mostly works but may have odd behavior here and there. Recommended for playing around or for non-production use cases.
Normal
Ready for production use
3.3.3 Interface stability
If I build a system based on the current interfaces, will they still work when I upgrade to the next version?
Not stable
Interface is still in the early stages and still fairly likely to be broken in future updates.
Provisionally stable
We’re not yet promising backwards compatibility, but we think this is probably the final form of the interface. It may still require some tweaks.
Stable
We will try very hard to avoid breaking backwards compatibility, and to fix any regressions that are reported.
3.3.4 Security supported
Will XSAs be issued if security-related bugs are discovered in the functionality?
If “no”, anyone who finds a security-related bug in the feature will be advised to post it publicly to the Xen Project mailing lists (or contact another security response team, if a relevant one exists).
Bugs found after the end of Security-Support-Until in the Release Support section will receive an XSA if they also affect newer, security-supported, versions of Xen. However, the Xen Project will not provide official fixes for non-security-supported versions.
Three common ‘diversions’ from the ‘Supported’ category are given the following labels:
Supported, Not security supported
Functionally complete, normal stability, interface stable, but no security support
Supported, Security support external
This feature is security supported by a different organization (not the XenProject). See External security support below.
Supported, with caveats
This feature is security supported only under certain conditions, or support is given only for certain aspects of the feature, or the feature should be used with care because it is easy to use insecurely without knowing it. Additional details will be given in the description.
3.3.5 Interaction with other features
Not all features interact well with all other features. Some features are only for HVM guests; some don’t work with migration, &c.
3.3.6 External security support
The XenProject security team provides security support for XenProject projects.
We also provide security support for Xen-related code in Linux, which is an external project but doesn’t have its own security process.
External projects that provide their own security support for Xen-related features are listed below.
QEMU https://wiki.qemu.org/index.php/SecurityProcess
Libvirt https://libvirt.org/securityprocess.html
FreeBSD https://www.freebsd.org/security/
NetBSD http://www.netbsd.org/support/security/
OpenBSD https://www.openbsd.org/security.html
Important: some parts of this page are out of date and need to be reviewed and corrected!
About
These drivers allow Windows to make use of the network and block backend drivers in Dom0, instead of the virtual PCI devices provided by QEMU. This gives Windows a substantial performance boost, and most of the testing done so far confirms that. This document refers to the new WDM version of the drivers, not the previous WDF version, though some of the information may still apply to the older drivers.
Using iperf to test throughput, I saw network performance improve from 221 Mbit/s to 998 Mbit/s. Disk I/O, tested via CrystalMark, improved from 80 MB/s to 150 MB/s on 512-byte sequential writes, with 180 MB/s read performance.
Since the launch of the new Xen Project pages, the main PV driver page on www.xenproject.org carries much of the more current information regarding the paravirtualization drivers.
Supported Xen versions
GPLPV >= 0.11.0.213 was tested for a long time on Xen 4.0.x and works; it should also work on Xen 4.1.
GPLPV >= 0.11.0.357 has been tested and works on Xen 4.2 and Xen 4.3-unstable.
05/01/14 Update:
The signed drivers from ejbdigital work great on Xen 4.4.0. If you experience a bluescreen while installing these drivers, or after a reboot after installing them, please try adding device_model_version = 'qemu-xen-traditional'. I had an existing 2008 R2 x64 system that consistently failed with a BSOD after the GPLPV installation. Switching to the 'qemu-xen-traditional' device model resolved the issue. However, on a clean 2008 R2 x64 system I did not have to make this change, so please bear this in mind if you run into trouble.
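For reference, the workaround is a single line added to the existing HVM domU config (sketch; the rest of the config stays unchanged):

    # Work-around for the BSOD described above
    device_model_version = 'qemu-xen-traditional'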
I did need to de-select 'Copy Network Settings' during a custom install of GPLPV. Leaving 'Copy Network Settings' selected resulted in a BSOD for me on 2008 R2 x64.
I run Xen 4.4.0-RELEASE built from source on Debian Jessie amd64.
PV drivers 1.0.1089 were tested on Windows 7 Pro x64 SP1, with a Debian Wheezy dom0 running Xen 4.4 built from source and upstream qemu >= 1.6.1 and <= 2.0.
Notes:
- Upstream qemu version 1.6.0 always, and older versions in some cases, have a critical problem with HVM domUs that is not related to the PV drivers.
- If there are domU disk performance problems when using blktap2 disks, this is not a PV driver problem; remove blktap2 and use the qdisk of upstream qemu instead for a big disk performance increase (mainly in write operations).
Supported Windows versions
In theory the drivers should work on any version of Windows supported by Xen. The installers cover Windows 2000 and later up to Windows 7, 32-bit and 64-bit, including the server versions. Please see the release notes with any version of GPLPV you download to ensure compatibility.
I have personally used gpl_pv on Windows 7 Pro x64, Windows Server 2008 x64, Windows Server 2008 R2 x64 and had success.
Recently I gave Windows 10 a try under Xen 4.4.1 (using Debian Jessie). The paravirtualization drivers still work. The drivers have not been installed from scratch but have been kept during the Windows Upgrade from Windows 7 to Windows 10.
Building
Sources are now available from the Xen project master git repository:
In addition you will need the Microsoft tools as described in the README files. The information under 'Xen Windows GplPv/Building' still refers to the old Mercurial source code repository and is probably dated.
Downloading
New, signed, GPL_PV drivers are available at what appears to be the new home of GPL_PV at http://www.ejbdigital.com.au/gplpv
These may be better than anything currently available from meadowcourt or univention.
Older binaries, and latest source code, are available from http://www.meadowcourt.org/downloads/
- There is now one download per platform/architecture, named as follows:
- platform is '2000' for 2000, 'XP' for XP, '2003' for 2003, and 'Vista2008' for Vista/2008/7
- arch is 'x32' for 32-bit and 'x64' for 64-bit
- 'debug' builds contain debug info (please use these if you want any assistance in fixing bugs)
- builds without 'debug' contain no debug info
Signed drivers
Newer, signed, GPL_PV drivers are available at what appears to be the new home of GPL_PV at http://www.ejbdigital.com.au/gplpv
You can get older, signed, GPLPV drivers from univention.
Signed drivers allow installation on Windows Vista and above (Windows 7, Windows Server 2008, Windows 8, Windows Server 2012) without enabling test signing.
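Unsigned or test builds on Vista and later instead require test signing to be enabled inside the guest; a sketch, run from an elevated command prompt and followed by a reboot:

    rem Not needed for the signed drivers above -- only for unsigned/debug builds
    bcdedit /set testsigning on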
Installing / Upgrading
Once built (or downloaded for a binary release), the included NSIS installer should take care of everything. See here for more info, including info on bcdedit under Windows 2008 / Vista.
Warning: please do visit the link above to /Installing. It holds information needed to avoid crashing your installation, concerning the use of the /GPLPV boot parameter.
Using
Previous to 0.9.12-pre9, '/GPLPV' needed to be specified in your boot.ini file to activate the PV drivers. As of 0.9.12-pre9, /NOGPLPV in boot.ini will disable the drivers, as will booting into safe mode. With 'shutdownmon' running, 'xm shutdown' and 'xm reboot' issued from Dom0 should do the right thing too.
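For those older versions on XP/2003, a boot.ini entry with the switch might look like the sketch below; the ARC path shown is just a common default, not taken from this page.

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows (GPLPV)" /noexecute=optin /fastdetect /GPLPV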
In your machine configuration, make sure you don't use the ioemu network driver. Instead, use a line like:
vif = []
A fixed MAC address can also be set; this is useful to avoid the risk of triggering reactivation of the Windows license (which can happen when the MAC address changes).
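A hedged example of such a vif line (the MAC and bridge name are placeholders; 00:16:3e is the Xen-reserved OUI):

    # PV network only (no 'type=ioemu'), with a fixed MAC on an assumed bridge
    vif = [ 'mac=00:16:3e:12:34:56, bridge=xenbr0' ]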
Known Issues
This is a list of issues that may or may not affect you. These are not confirmed issues that will consistently repeat themselves. An issue listed here should not stop you from trying GPLPV in a safe environment. Please report both successes and failures to the mailing list; it all helps!
- An OpenSolaris Dom0 is reported not to work, for reasons unknown.
- Checksum offload has been reported to not work correctly in some circumstances.
- Shutdown monitor service in some cases is not added, and must be added manually.
- Network is not working after restore with upstream qemu; the workaround for now is to set a fixed MAC address in the domU's xl cfg file.
- Installing with 'Copy Network Settings' may result in a blue screen.
- A blue screen may result if you are not using the traditional qemu emulator.
PLEASE TEST YOUR PERFORMANCE USING IPERF AND/OR CRYSTALMARK BEFORE ASSUMING THERE IS A PROBLEM WITH GPL_PV ITSELF
Note: I was using pscp to copy a large file from another machine to a Windows 2008 R2 DomU and was routinely seeing only a 12-13 MB/s download rate. I had consistently blamed Windows and GPLPV as the cause. I was wrong! Testing the network interface with iperf showed a substantial improvement after installing GPLPV, and the disk I/O showed great performance when tested with CrystalMark. I was seeing a bug in pscp itself. Please try to test performance in a multitude of ways before submitting a complaint or bug report.
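A minimal way to do that with classic iperf (hostnames are placeholders):

    # Inside the Windows domU: run an iperf server
    iperf -s
    # From another machine on the same network: measure throughput for 30 seconds
    iperf -c <domU-address> -t 30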
Using the windows debugger under Xen
Set up Dom0
- Change/add the serial line to your Windows DomU config to say serial='pty'
- Add a line to /etc/services that says 'windbg_domU 4440/tcp'. Change the domU bit to the name of your windows domain.
- Add a line to /etc/inetd.conf that says 'windbg_domU stream tcp nowait root /usr/sbin/tcpd xm console domU'. Change the domU bit to the name of your domain. (if you don't have an inetd.conf then you'll have to figure it out yourself... basically we just need a connection to port 4440 to connect to the console on your DomU)
- Restart inetd.
Set up the machine you will be debugging from - another Windows machine that can connect to your Dom0.
- Download the windows debugger from Microsoft and install.
- Download the 'HW Virtual Serial Port' application from HW Group and install. Version 3 appears to be out, but I've only ever used 2.5.8.
Boot your DomU
- xm create DomU (or whatever you normally use to start your DomU)
- Press F8 when you get to the windows text boot menu and select debugging mode, then boot. The system should appear to hang before the splash screen starts
Start HWVSP
- Start the HW Virtual Serial Port application
- Put the IP address or hostname of your Dom0 in under 'IP Address'
- Put 4440 as the Port
- Select an unused COM port under 'Port Name' (I just use Com8)
- Make sure 'NVT Enable' in the settings tab is unticked
- Save your settings
- Click 'Create COM'. If all goes well it should say 'Virtual serial port COM8 created' and 'Connected device <hostname>'
Run the debugger
- Start windbg on your other windows machine
- Select 'Kernel Debug' from the 'File' menu
- Select the COM tab, put 115200 in as the baud rate, and com8 as the port. Leave 'Pipe' and 'Reconnect' unticked
- Click OK
- If all goes well, you should see some activity, and the HWVSP counters should be increasing. If nothing happens, or if the counters start moving and then stop, quit windbg, delete the com port, and start again from 'Start HWVSP'. Not sure why but it doesn't always work the first time.
Debugging
- The debug output from the PV drivers should fly by. If something isn't working, that will be useful when posting bug reports.
- If you actually want to do some debugging, you'll need to have built the drivers yourself so you have the src and pdb files. In the Symbol path, add 'SRV*c:\websymbols*http://msdl.microsoft.com/download/symbols;c:\path_to_source\target\winxp\i386'. Change winxp\i386 to whatever version you are debugging.
- Actually using the debugger is beyond the scope of this wiki page :)
Developers
- xenpci driver - communicates with Dom0 and implements the xenbus and event channel interfaces
- xenhide driver - disables the QEMU PCI ATA and network devices when the PV devices are active
- xenvbd driver - block device driver
- xennet driver - network interface driver
- xenstub driver - provides a dummy driver for vfb and console devices enumerated by xenpci so that they don't keep asking for drivers to be provided.