Jonathan Schwartz with Scoble

Jonathan is a very interesting guy, who seems to have a good vision for Sun. It's good to see an important tech company being led by a visionary geek business leader rather than just a business leader. Check out his blog as well.

Comments off

Virtual Iron short review

Exclusive: Virtual enlightenment through Xen

Moreover, Virtual Iron extends Xen with enhanced memory management that allows 32-bit and 64-bit guests to run side by side, full virtualization that allows guest OSes to run completely unmodified (the current Xen release requires guest OSes to be modified to run in a Xen environment), and significant work to increase the I/O performance of guest OSes. These features will be present in the forthcoming Xen 3.1 release, but Virtual Iron is offering them now, with the GUI management tools.

On the downside, there’s no iSCSI SAN or NFS support, so if you’re lacking a Fibre Channel SAN, you’re forced to use local disk, and this precludes the use of the LiveMigration, LiveRecovery, and LiveMaintenance features.

So what’s lacking? Polish, performance, and the little bits around the edges. The console interaction provided by Virtual Iron 3.1 is fair for Windows guests but quite sloppy for Linux guests running X11. This is rather surprising, but mouse tracking under Windows is far superior to that under Linux. Of course, most Linux guests won’t be running X11, which mitigates this problem somewhat.

Also missing is VM snapshot support, as well as basic backup tools. Coupled with the lack of iSCSI and NFS support, very basic network configurations, questionable I/O performance, and the obvious wet-behind-the-ears feel of the package, it may be a bit of a hard sell for production use.

It also looks like Virtual Iron lacks VLAN support at the moment. That, plus the lack of iSCSI/NFS restricting shared storage to Fibre Channel, is going to cut out a lot of potential users, especially in the SMB market. It’s strange, as their storage subsystem seems to be layered on top of LVM, with Microsoft-format VHD files held in logical volumes (LVs). You would think it would be easy enough to engineer iSCSI support by replacing the Fibre Channel block devices with iSCSI block devices on the processing nodes.
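To illustrate what I mean, here is a rough sketch of the plumbing: attaching an iSCSI LUN with open-iscsi and layering LVM on top of it, exactly as you would with a Fibre Channel LUN. The target IQN, portal address, and device/volume names are made up for the example, and these are stock Linux tools rather than anything Virtual Iron actually ships.

```python
# Rough illustration of the point above: an iSCSI LUN can sit under LVM just
# as a Fibre Channel LUN does. The target IQN, portal address, and device and
# volume names are made-up examples, and these are stock open-iscsi and LVM
# tools, not anything Virtual Iron actually ships.
import subprocess

PORTAL = "192.168.1.50"
IQN = "iqn.2007-01.example.com:storage.vhd-pool"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# Discover and log in to the iSCSI target; the LUN then shows up as an
# ordinary SCSI block device (e.g. /dev/sdb).
run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)
run("iscsiadm", "-m", "node", "-T", IQN, "-p", PORTAL, "--login")

# Layer LVM on top of it, exactly as you would on a Fibre Channel LUN,
# and carve out a logical volume to hold one guest's VHD file.
run("pvcreate", "/dev/sdb")
run("vgcreate", "vg_guests", "/dev/sdb")
run("lvcreate", "-L", "20G", "-n", "guest01_vhd", "vg_guests")
```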

Without LiveMigration support, Virtual Iron isn’t really that much better than plain Xen. They will also have to extend their supported-systems coverage for Linux to Debian/Ubuntu, for both their management stack and vmtools.

Comments off

Virtual Iron

Check out this webcast hosted by PlateSpin and Virtual Iron, Reducing Costs and Increasing Agility with Virtualization, and this interface demo. Unfortunately you have to sign up to see it; however, it shows stuff similar to the VI3 demo further down this blog. Virtual Iron is Xen-based with some of their own additions: they have Live Migration and DR Migration working now, plus their management interface is very nice.

The one feature I think is perfect, and one I’ve thought about doing myself, is PXE booting the computing nodes and having them join the computing cluster as a resource automagically. This is exactly the right idea: the hardware platform you are running on reduces to just a software management system. No doubt within a few years server systems will start being designed with the option of a hypervisor as part of the BIOS. Someone clever could probably do it now with LinuxBIOS.

The Virtual Iron price structure is very similar to Xen’s, and beats VMware’s by a huge margin: USD 500 plus USD 125 per year versus USD 2,875 per socket plus USD 700 per year.
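Some back-of-the-envelope arithmetic with those list prices, assuming a single two-socket server over three years and treating the annual fees as flat (the quote doesn’t say whether they scale per socket), so take the exact figures as illustrative:

```python
# Back-of-the-envelope arithmetic using the list prices quoted above.
# Assumptions (mine, for illustration): one two-socket server, a three-year
# horizon, and annual fees treated as flat rather than per socket -- the
# quote above doesn't say which.
years, sockets = 3, 2

virtual_iron = 500 + 125 * years          # flat licence plus annual fee
vmware = 2875 * sockets + 700 * years     # per-socket licence plus annual fee

print(f"Virtual Iron over {years} years: ${virtual_iron}")   # $875
print(f"VMware over {years} years:       ${vmware}")         # $7850
```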

Some other useful info on Virtual Iron:

I’ve been deciding between VMware and Xen recently for a server upgrade, but I think that Virtual Iron might be the right choice: Xen’s flexibility with VMware’s features.

Comments off

EC2 demo video from Amazon.

Comments off

OtherTricks for Spam from SpamAssassin

Comments off

Lighttpd, spawn-php and daemontools

This has been sitting forgotten in the draft queue for a while. I’m not using this setup at the moment, but the information is still useful.

Comments off

Active/Active HA for Xen using DRBD

HA Migration (not live) Howto for an Active/Active Xen system using DRBD:

So what does all this produce? Node 1 has N DomUs and so does Node 2. Each set of DomUs is on its own DRBD device and each node is primary for one of these devices. When a node fails, Heartbeat sets the other node as primary for the affected DRBD device, activates the LVM VG and LVs, and starts the affected set of DomUs via their custom xendomains script (xd1 or xd2). It works great. I’ve rebooted, pulled the plug, and hit the power button, and everything fails over OK. There’s a slight delay of about 90 seconds, since it isn’t live migration, but my environment can tolerate this.
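For the sake of illustration, the takeover sequence described above could be scripted roughly as follows. The resource name, volume group, and config-directory names are hypothetical, and in the setup above it is Heartbeat and the custom xendomains scripts (xd1/xd2) doing this work, not a hand-rolled script.

```python
# Illustrative sketch only: the takeover steps described above, as they might
# look scripted by hand. The resource name, volume group, and config directory
# are hypothetical; in the setup above, Heartbeat and the custom xendomains
# scripts (xd1/xd2) perform this work.
import glob
import subprocess

def takeover(drbd_resource, volume_group, domu_config_dir):
    # 1. Become primary for the failed peer's DRBD resource.
    subprocess.check_call(["drbdadm", "primary", drbd_resource])
    # 2. Activate the LVM volume group (and its LVs) sitting on that device.
    subprocess.check_call(["vgchange", "-a", "y", volume_group])
    # 3. Start the affected DomUs from their Xen config files.
    for cfg in sorted(glob.glob(domu_config_dir + "/*.cfg")):
        subprocess.check_call(["xm", "create", cfg])

takeover("r1", "vg_xen1", "/etc/xen/auto.xd1")
```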

Comments off

VI3 iSCSI Setup Howto

A quick howto with screenshots for setting up an iSCSI initiator in VI3.

Up to this point I have not done extensive testing on the overall performance of my setup. What I do know is that it performs more than well enough to run an IIS Web Server, an average-load SQL Server, an AD Server, an Exchange Server and a File Server without breaking a sweat on the Linux iSCSI server’s resources. In addition, the applications respond incredibly well considering the fact that my “Enterprise SAN” cost me less than $500 total. For development purposes to test VMotion, DRS, and HA, this is DEFINITELY a good solution to take a look at. Some brave people, like myself, may even consider using it for production data. I make sure I have a good solid backup every night.

This is running the following setup:

Virtual Machines Running

  • Windows 2003 Domain Controller – 384MB Memory
  • Windows 2003 SQL Server – 512MB Memory (Scripts running consistent read/write/update load on server)
  • Windows 2003 Exchange 2003 Server – 512MB Memory (10 Mailboxes, 5 with a TON of spam being sent for load)
  • Windows 2003 File Server – 384MB Memory

From the looks of some of the comments, this is not totally production-ready, although it has probably improved in the last six months. It does point to the way things are going, though.

Diskless processing units, network booting (PXE, iSCSI, or something else) into a hypervisor and running software appliances, back-ended onto storage appliance units (branded hardware or software) holding the data.

Some hints here on doing this with NetApp equipment.

Comments off

VMware Infrastructure 3 Demo

Comments off

ATA over Ethernet

AoE [1], [2] is a recent protocol developed by Coraid.

ATA over Ethernet is a network protocol registered with the IEEE as Ethernet protocol 0x88a2. AoE is low level, much simpler than TCP/IP or even IP. TCP and IP are necessary for the reliable transmission of data over the Internet, but the computer has to work harder to handle the complexity they introduce.

Users of iSCSI have noticed this issue with TCP/IP. iSCSI is a way to send I/O over TCP/IP, so that inexpensive Ethernet equipment may be used instead of Fibre Channel equipment. Many iSCSI users have started buying TCP offload engines (TOE). These TOE cards are expensive, but they remove the burden of doing TCP/IP from the machines using iSCSI.

AoE is an alternative to iSCSI; its specification is 8 pages, compared with iSCSI’s 257 pages.
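To get a feel for how small the protocol is, here is a sketch of a minimal AoE frame packed by hand, based on my reading of the public AoE spec; the MAC addresses, shelf/slot numbers and tag are arbitrary, and it only builds and prints the frame rather than putting it on the wire.

```python
import struct

# Sketch of a minimal AoE "Query Config" request frame, following my reading
# of the public AoE spec (ethertype 0x88a2). The MAC addresses, shelf/slot and
# tag values are arbitrary examples; actually sending it would need a raw
# AF_PACKET socket and root privileges.
AOE_ETHERTYPE = 0x88A2

def aoe_query_config_frame(dst_mac, src_mac, shelf, slot, tag):
    eth = dst_mac + src_mac + struct.pack("!H", AOE_ETHERTYPE)
    # AoE header: version/flags, error, major (shelf), minor (slot),
    # command (1 = query config), tag.
    ver_flags = 0x10                      # version 1, no flags set
    aoe = struct.pack("!BBHBBI", ver_flags, 0, shelf, slot, 1, tag)
    # Query-config argument: buffer count, firmware version, sector count,
    # AoE version / ccmd, config string length -- all zero for a plain query.
    arg = struct.pack("!HHBBH", 0, 0, 0, 0, 0)
    return eth + aoe + arg

frame = aoe_query_config_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55",
                               shelf=0, slot=1, tag=0xDEADBEEF)
print(len(frame), frame.hex())
```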

The storage hardware sold by Coraid is very cost-effective. The basic 1U four-SATA-disk chassis, the SR420, is USD 2,000, and with the addition of four 750GB SATA disks (say about $450 each) provides 3TB raw for under USD 4,000 in 1U, with RAID 0, 1, 5, 10 or JBOD. This can be combined with their NAS gateway, the CLN20, to provide a reasonable local network storage system.

With Hitachi’s forthcoming 1TB disk, you get 4TB raw. Crazy!
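Quick arithmetic on those numbers (list prices as quoted above, reading the $450 as a per-disk price):

```python
# Rough arithmetic on the Coraid figures quoted above, reading "$450" as the
# per-disk price for a 750GB drive.
chassis = 2000                    # SR420, 1U, four SATA bays
disks = 4 * 450                   # four 750GB drives
print(f"3TB raw for about ${chassis + disks}")        # ~$3800, under $4000
print(f"with four 1TB disks instead: {4 * 1}TB raw")
```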

Of course, SATA is not the best fit for a database system, but if you have a low-access archive-type system or a video silo, then this is going to work well.

Comments off