Entries Tagged 'Systems' ↓
December 31st, 2008 — Unix
A few useful vim indenting links:
This is the most useful bit when pasting into a vim window (the key tags were stripped from the original; F2 shown here as the mapped key):
nnoremap <F2> :set invpaste paste?<CR>
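A fuller .vimrc sketch of the same trick; the F2 binding is an assumption, any key works:

```vim
" Toggle 'paste' and echo its new state. With 'paste' set, autoindent
" and insert-mode mappings are disabled, so pasted text is inserted
" literally instead of being re-indented.
nnoremap <F2> :set invpaste paste?<CR>

" 'pastetoggle' makes the same key work while already in insert mode.
set pastetoggle=<F2>
```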
December 16th, 2008 — Linux, Software, Windows
Launchy is a Quicksilver-like keystroke application launcher for Linux and Windows. Very cool.
December 14th, 2008 — Linux
Shared subtrees – an in-depth look at bind mounts.
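A minimal sketch of the idea from the article; paths are illustrative and the commands need root:

```shell
# A bind mount makes an existing directory tree visible at a second path.
mkdir -p /mnt/original /mnt/alias
mount --bind /mnt/original /mnt/alias

# Shared-subtree propagation modes then control whether mounts made
# later underneath one path appear under the other:
mount --make-shared /mnt/alias    # propagate in both directions
# mount --make-slave /mnt/alias   # receive propagation only
# mount --make-private /mnt/alias # no propagation at all
```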
December 12th, 2008 — Linux, Tech, Windows
Windows* client CIFS behavior can slow Linux* NAS performance:
We have compared the performance of Windows* and Linux*-based CIFS* (Samba*) servers for digital media applications and found that the ext3*-based Linux server’s throughput was up to 53% lower than the Windows server’s–although both used identical hardware (Figure 1). An XFS*-based Linux server had roughly the same performance as the Windows server. Our investigation shows that the difference lies in the filesystem allocation and handling of sparse files. In particular, the Windows client makes an assumption that the CIFS fileserver uses NTFS*, a filesystem that assumes files will be data-full (not sparse). This contradicts a fundamental assumption of ext3–that files are sparse–and leads to fragmentation of files and degraded performance on ext3. Further, we’ve seen this behavior manifested for a broad range of media applications including iTunes*.
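The sparse-file assumption at the heart of this is easy to see on a Linux box: ext3/ext4 only allocate blocks for data actually written, so a file's logical size and its on-disk footprint can differ wildly.

```shell
# Create a 10 MiB sparse file: write a single byte at offset 10 MiB - 1.
# The filesystem allocates no blocks for the skipped "hole".
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=$((10 * 1024 * 1024 - 1)) 2>/dev/null

apparent=$(stat -c %s "$f")                # logical size in bytes
ondisk=$(( $(stat -c %b "$f") * 512 ))     # bytes actually allocated

echo "apparent: $apparent bytes, on disk: $ondisk bytes"
rm -f "$f"
```

A client that assumes data-full files (as the Windows CIFS client does for NTFS) defeats this, forcing allocation patterns that fragment ext3.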
December 10th, 2008 — Systems
December 6th, 2008 — Linux, Virtualisation
A couple of useful articles from Andy Millar.
- A concise and clear explanation of Linux load averages.
- A bug-fix suggestion for VMware Server, which can hang on installation: remove the floppy device. I’ve got another issue where a Linux VM on VMware Server hangs on startup, and I’ll have to try this.
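On the load-average point, the numbers come straight from the kernel; a quick Linux shell sketch of reading and interpreting them:

```shell
# /proc/loadavg holds three exponentially damped averages of the
# run-queue length (1, 5 and 15 minutes), then running/total task
# counts and the most recently assigned PID.
read la1 la5 la15 tasks lastpid < /proc/loadavg
echo "1m: $la1  5m: $la5  15m: $la15"

# Rough rule of thumb: compare the load against the CPU count. A load
# equal to the core count means every core is busy; note that Linux
# also counts tasks in uninterruptible (disk) sleep, not just runnable
# ones, so I/O-bound boxes can show high load with idle CPUs.
cpus=$(nproc)
echo "cpus: $cpus"
```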
November 11th, 2008 — Hardware, Solaris
Some more detail on Sun’s new storage platform, Fishworks.
Hybrid Storage Pools in the 7410:
The write performance of a 7200 RPM drive isn’t terrific. The appalling thing is that the next best solution — 15K RPM drives — aren’t really that much better: a factor of two or three at best. To blow the doors off, the Sun Storage 7410 allows up to four write-optimized flash drives per JBOD each of which is capable of handling 10,000 writes per second. We call this flash device Logzilla.
Logzilla is a flash-based SSD that contains a pretty big DRAM cache backed by a supercapacitor so that the cache can effectively be treated as nonvolatile. We use Logzilla as a ZFS intent log device so that synchronous writes are directed to Logzilla and clients incur only that 100μs latency. This may sound a lot like how NVRAM is used to accelerate storage devices, and it is, but there are some important advantages of Logzilla. The first is capacity: most NVRAM maxes out at 4GB. That might seem like enough, but I’ve talked to enough customers to realize that it really isn’t and that performance cliff is an awful long way down. Logzilla is an 18GB device which is big enough to hold the necessary data while ZFS syncs it out to disk even running full tilt. The second problem with NVRAM scalability: once you’ve stretched your NVRAM to its limit there’s not much you can do. If your system supports it (and most don’t) you can add another PCI card, but those slots tend to be valuable resources for NICs and HBAs, and even then there’s necessarily a pretty small number to which you could conceivably scale. Logzilla is an SSD sitting in a SAS JBOD so it’s easy to plug more devices into ZFS and use them as a growing pool of intent log devices.
The standard practice in storage systems is to use the available DRAM as a read cache for data that is likely to be frequently accessed, and the 7000 Series does the same. In fact, it can do quite a better job of it because, unlike most storage systems which stop at 64GB of cache, the 7410 has up to 256GB of DRAM to use as a read cache. As I mentioned before, that’s still not going to be enough to cache the entire working set for a lot of use cases. This is where we at Fishworks came up with the innovative solution of using flash as a massive read cache. The 7410 can accommodate up to six 100GB, read-optimized, flash SSDs; accordingly, we call this device Readzilla.
With Readzilla, a maximum 7410 configuration can have 256GB of DRAM providing sub-μs latency to cached data and 600GB worth of Readzilla servicing read requests in around 50-100μs. Forgive me for stating the obvious: that’s 856GB of cache. That may not suffice to cache all workloads, but it’s certainly getting there. As with Logzilla, a wonderful property of Readzilla is its scalability. You can change the number of Readzilla devices to match your workload. Further, you can choose the right combination of DRAM and Readzilla to provide the requisite service times with the appropriate cost and power use. Readzilla is cheaper and less power-hungry than DRAM so applications that don’t need the blazing speed of DRAM can prefer the more economical flash cache. It’s a flexible solution that can be adapted to specific needs.
Some back story, a GUI screenshot and detail on DTrace with Fishworks: Now it can be told. And some detail on topology.
I wonder if it will be possible to get the log and read-cache devices separately.
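For comparison, on a plain ZFS system the same two roles can already be filled with ordinary devices; the pool and device names below are hypothetical:

```shell
# Add a write-optimized SSD as a separate ZFS intent log (slog) —
# the role Logzilla plays in the 7000 series:
zpool add tank log c1t5d0

# Add a read-optimized SSD as an L2ARC read cache — Readzilla's role:
zpool add tank cache c1t6d0

# zpool status then lists them under "logs" and "cache" sections.
zpool status tank
```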
November 11th, 2008 — Hardware, Solaris
Sun have come out with some storage appliances. They have some compelling functionality, like being able to trace critical-path performance.
So there are many good reasons why the 7000 series is cool – the integrated flash devices, the hardware itself, blah, blah blah. Here’s the amazing part – the hardware isn’t even the coolest feature. It’s the software. The ability to *in real time* drop in new tracing events to see what’s really happening on the device is just unbelievable.
How many times have you seen your NAS devices suddenly “go slow”? And you have *no clue* as to why. I can tell you it happens often when running big infrastructure. You dig around for a while and maybe you can figure out that it’s one machine and if you’re particularly good you can figure out one user on one machine and slap their hands. With the 7000 you get the ability in real time to dig into what’s being done using which protocol by user, by file, by whatever you want. It’s stunning to see, and incredibly useful in managing the infrastructure. For a mostly detailed overview of the capabilities, check out Bryan Cantrill’s presentation on analytics.
The mid- and high-end models both support SSD flash to speed up I/O.
The SSDs are used explicitly for caching and logging, and only the 7410 offers both — the 7210 has read-biased SSDs, and the 7110 doesn’t have SSD support. In discussions with Sun engineers, they claimed that the addition of the read-biased SSD caching in conjunction with ZFS’ predictive caching algorithms means that 7200RPM SATA drives perform just as well, if not better than 10K SAS drives. In fact, they’re conducting trials to determine if they can use 4200RPM SATA drives in these devices without sacrificing I/O performance. If that’s possible, then the price point, power consumption, and heat generation drops across the board.
I’ll be looking for reviews with interest to see how these compare with similar NAS systems.
August 13th, 2008 — Windows
Often when using multiple screens with a laptop, then travelling with just the single laptop screen, applications may remember the second screen and disappear when windowed. You can often “get them back” by maximizing the window.
A better method is to:
There’s a simple trick to get around this. First make sure you’ve alt-tabbed to the window, or clicked on it once to bring it into focus. Then right-click on the taskbar and choose Move
At this point, you should notice that your cursor changes to the “Move” cursor, but you still can’t move anything.
Just hit any one of the arrow keys (Left, Right, Down, Up), move your mouse, and the window should magically “pop” back onto the screen.
Note: For keyboard savvy people, you can just alt-tab to the window, use Alt+Space, then M, then Arrow key, and then move your mouse.
June 1st, 2008 — Linux
Occasionally in Linux, when running a VPN, you want to generate traffic from the VPN server node. By default Linux uses the IP of the interface used to route a packet, which can complicate the routing tables required at client networks.
A fix is to use Linux source/policy routing. For example, for a node with IP 192.168.10.1 and VPN server IP 172.29.148.1, route traffic to subnet 192.168.9.0/24 out of 172.29.148.1 via 172.29.148.2 with src (source) IP 192.168.10.1:
sudo ip route add to 192.168.9.0/24 src 192.168.10.1 via 172.29.148.2
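To check the kernel will actually pick that source address, ask it for the route to an example host in the target subnet (192.168.9.5 here is arbitrary; assumes the route above is installed):

```shell
# Show the route and source IP the kernel would use for this host;
# the output should include "src 192.168.10.1" if the route took effect.
ip route get 192.168.9.5
```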