Thursday, February 25, 2010

Oh dear that was too easy really

I'm glad to report that I've found a solution to the 2TB limit problem.

Let me tell you, it was a serious facepalm moment once I figured it out.

It turns out when you build the RAID you can just configure whatever size you want under the handy Array Size option. UGH!! How easy is that!?!?
So I built the System as 300GB and then allocated the rest to a "Data" volume.
The final build looks like this... Hmmm 5TB... Both volumes are RAID10 over all 6 drives and the System is a sexy 300GB short stroke. (The maths adds up: six 2TB drives in RAID10 gives 6TB usable, Windows counts that as about 5.4TB, and minus the 300GB System volume the Data volume shows up at roughly 5TB.)
I'm seriously thankful to all that is holy that I figured that one out. I was starting to feel decidedly noobish.

Wednesday, February 24, 2010

Only 2TB? What the...?

Oh god it hurts.

Seems like 2TB would be more than enough storage on your boot volume. Well, turns out it's all you get, no matter what you want!

As you may have noticed these machines I’ve been testing and building this week have been built for high performance.

I’ve been discussing the RAID configuration which involves 6 x 2TB drives. This insane amount of storage has been configured in RAID10. Performance is good and we’ve got some redundancy which is nice.

The problem is that once we get the OS up and running we can only address the first 2TB of the disk, leaving the remaining 4TB in an unreachable limbo!

This is because the OS is booting from an MBR disk. The MBR scheme is old. It's limited to 4 primary partitions and 2TB of space, because partition sizes are stored as 32-bit sector counts (2^32 x 512-byte sectors is roughly 2TB).

We really want the disk to be a GPT disk so we can use the rest of the disk. But as far as I've searched, I've not discovered how to create a GPT disk at install or convert the System disk from MBR to GPT. I mentioned this issue in a previous post, but all week I've assumed it'd be easy to resolve.
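For what it's worth, a secondary data disk (one you're not booting from) can be flipped to GPT with nothing more than diskpart; the catch is the disk has to be empty, so it's no help for a system volume. A rough sketch (the disk number is just an example):

diskpart
list disk
select disk 1
clean
convert gpt

Be warned, clean wipes the partition table, so only point it at a disk with nothing on it you care about.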

Turns out your hardware needs to support EFI booting. EFI is the new BIOS. It looks useful. Here's a video on the pre-OS environment in Windows that is interesting and elaborates on the topic in a relevant way.

http://channel9.msdn.com/shows/Going+Deep/Windows-Vista-PreOS-Environment-What-happens-before-the-OS-loads/

The bummer is the machines I'm working with are BIOS based and thus only boot from MBR. So right now I've got a 6TB volume of which I can only address the first 2TB. The research is continuing.

The Future of Games

http://gigaom.com/2010/02/22/video-reality-tv-iphone-the-future-of-technology-why-its-all-a-game/

This is a highly entertaining look at where gaming is moving in the future.

It's a little worrying for me as well. Why are there more accounts on Farmville than Twitter? Why did Wii Fit do so well? And hell even I bought into Guitar Hero!

Check it out it's 30 minutes long but the presenter, Jesse Schell, is very entertaining.

Monday, February 22, 2010

Striping Shenanigans

So last time I looked at this 6 x 2TB drive array I hypothesised we could get it up close to 1GB per second if all drives were striped. So I went ahead and did just that, naming the resultant RAID0 array appropriately....


I must admit it was nice to see a 10TB volume in Disk Management, given this is just a standard desktop case, not some rack-mounted, SAN-attached monster. It's an obscene amount of disk for a desktop really. It made me feel all giddy with excitement.


So I ran this bad boy through its paces and was a little disappointed to see little to no improvement in disk performance at all.

So it looks like I'm hitting the maximum throughput on the RAID card... or the PCIe bus... at a guess, anyway.


The next step was to have a look at single drive performance with that controller. So 6 single-disk arrays were created.

The OS was installed to the first drive, leaving the other 5 for software RAID testing. Performance on a single drive looks like this.


107MB/s is pretty damn quick! But why does a three drive stripe give the same 450MB/s that we got out of a 6 drive stripe? I think we're really looking at the performance of the write cache on the RAID controller rather than raw drive speed.
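As an aside, if you want to replicate the software striping side of this at home without a RAID card, Windows dynamic disks will do it natively. A hedged sketch with diskpart (disk numbers are examples, and converting/striping destroys anything on those disks); something along these lines is how you'd build the striped volumes for the testing below:

diskpart
select disk 1
convert dynamic
select disk 2
convert dynamic
select disk 3
convert dynamic
create volume stripe disk=1,2,3
assign letter=S
format fs=ntfs quick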


So here's a quick breakdown of the Software RAID0 testing I ran through.
These tests were with Directory Opus copying a 12GB file from the software RAID to the System drive. Peak speed seems to scale with the number of drives in the software stripe; however, average speeds were consistent across all configurations. The peak was always achieved within the first 5% of the copy and then throughput slowed down for the rest of the operation, probably keeping pace with the single drive target of the copy.
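If you don't have Directory Opus handy, you can time the same sort of copy crudely from the command line (paths are made up, and the maths is manual):

@echo off
echo Started:  %time%
copy /b S:\testfiles\12gb.bin C:\temp\12gb.bin
echo Finished: %time%

Divide 12GB by the elapsed seconds and you've got your average throughput.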

So while all this is mildly interesting, it's safe to say the whole 1GB/s throughput exercise resulted in an epic fail. Don't worry though, I'll go away and think about it a bit and see if I can't open up that bottleneck and get closer to that important milestone in IO performance.

Friday, February 19, 2010

How to Add a Timestamp to Photos after they've been downloaded

Newer digital cameras don't support embedding a timestamp in the photo. However, our Regulatory Services people require this feature for legal reasons.

The following procedure can be used to add a visual timestamp to an image:

Install IrfanView
Install the IrfanView plugins (from the same page)

  1. Load IrfanView
  2. Click File
  3. Click Batch Conversion/Rename
  4. Add the files that need to be stamped
  5. Tick Use advanced options
  6. Click Set advanced options
  7. Tick Add overlay text
  8. Click Settings
  9. Set the text box to the full image size (width 3648, height 2736 for our 10 megapixel photos)
  10. Enter the EXIF tag value of the Date Taken tag: $E36867
  11. Set the font to something intelligent (Calibri 48 Bold Green)
  12. Click OK
  13. Click OK
  14. Click Start
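For what it's worth, once the advanced options are saved, I believe IrfanView can also apply them from the command line with its /advancedbatch switch, which is handy if this becomes a regular job (paths are examples; check i_options.txt in the IrfanView folder for the exact syntax):

"C:\Program Files\IrfanView\i_view32.exe" C:\photos\*.jpg /advancedbatch /convert=C:\stamped\*.jpg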


Happy days. If the legal boffins decide to challenge the validity of the timestamps, we can provide them with the original files and they can check the date tags themselves.

UPDATE 14/05/2013 - Saw this is by far the most visited blog post I've made, so I've cleaned it up and removed references to user permissions; this post assumes you have admin rights.

Increase the Size of a VMWare Drive

VMware has a tool that allows you to resize a VMDK file (the -x switch expands it to the new total size)...


"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager.exe" -x 20GB "d:\Virtual Machines\EPO Server\Windows Server 2003 Enterprise Edition.vmdk"

Ok that's awesome Ben, but what about the partition?

You can use the Diskpart tool! But Ben... If your disk is the system volume you can't!

Well here's where I earned my pay today!

Mount the Disk in another VM!!! - (yup that's it!)

Diskpart will work then! - Doh of course! You really are awesome Ben!

diskpart
list disk
select disk *
detail disk
select volume *
detail volume
extend

NOTE The drive MUST have a drive letter for this to work
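If this becomes a regular chore, diskpart will also take a script file via its /s switch. Something like this (the volume letter is an example):

rem extend.txt
select volume E
extend

diskpart /s extend.txt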


How to use Diskpart
http://technet.microsoft.com/en-us/library/bb735137.aspx

Decrease the Size of a ShadowProtect Disk

Yes, ShadowProtect is AWESOME, however it has an annoying chink in the armor that even LiveState had covered back in the day.

That is to say, ShadowProtect lacks the ability to restore to disks SMALLER than the original. This could be a major problem at times... It could be instant fail for a DR exercise with a limited hardware budget, for example.

There is hope however! You just need to dig deep into your bag of super-nerd tricks and come out with a combination of solutions.

BASIC IDEA
So your backup is too big to fit on your new disk? Make the backup smaller then!!
  1. Restore image to a disk that is large enough for the backup
  2. Make the partition on that disk smaller
  3. Backup the smaller image


Simple no?
Ok here's what I would do if I had a Hyper-V server built and waiting...

  1. Open Hyper-V
  2. Edit the settings of an existing VM that you've got sitting doing nothing or create a new one
  3. Create a new Dynamically Sized VHD that is as big as the backup image requires (this step can also be scripted; see the diskpart sketch below)
  4. Boot the VM from the Shadow Protect ISO
  5. Restore the Backup image (30 - 60 minutes depending on image size and LAN speed)
  6. Shutdown the VM
  7. Change the settings so it boots from the Gnome Partition Editor ISO
  8. Boot the VM again
  9. Resize the partition using this funky free tool! (should take about 30 minutes depending on hardware grunt)
  10. Boot back into Windows
  11. Load Shadow Protect
  12. Create a backup of your new SLIM disk (30 - 60 minutes depending on image size and LAN speed)
  13. Bathe in your aura of WIN (15 - 120 minutes)


Total time for the job is 90 - 150 minutes depending on image size (and levels of WIN bathing)
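As an aside, if the Hyper-V host is running Server 2008 R2 (or you're doing this on Windows 7), step 3 doesn't need the GUI, since diskpart gained VHD support in that generation. A sketch, with the path and size made up (maximum is in MB):

diskpart
create vdisk file="C:\VHDs\restore.vhd" maximum=512000 type=expandable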

NOTE
Clearly in a DR situation this procedure does TRIPLE the time it takes to restore a ShadowProtect image. So the time it takes to set this up and shrink any images needs to be weighed against the time it would take to rebuild the server from scratch.

Also there's no reason this HAS to be done in a VM... If you've got plenty of BIG disks spare this would work just as well on real hardware (with the exception of RAID controller driver issues for both ShadowProtect and GPartED)

OR... if you restore to VMware:
Just make the virtual disk size as big as you need and specify it NOT to use all the space up front.
ShadowProtect will be able to restore into the large disk, but it won't take up all the disk space on your host's HDD.
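If you're creating that disk from the command line, vmware-vdiskmanager can do it as well; -t 0 gives you a growable (thin) single-file disk. Reusing the Server-install path from above, with a made-up size and name:

"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager.exe" -c -s 500GB -a lsilogic -t 0 "d:\Virtual Machines\Restore\restore.vmdk"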

Update

Or you could just do this: http://www.storagecraft.com/kb/questions.php?questionid=88

Short Stroke Love

Just got in a bunch of 2TB drives and a speedy RAID controller... Now let's see what we can do with some silly Short Stroke Configs.

3 x 2TB drive stripe... 450MB/sec /sigh I think I'm in love.


But why is it flat? It doesn’t peter off at the end like other examples…

Well that's because I forgot to make it a GPT disk at install (don't even know if you can), so it'll only partition the first 2TB, hence the boost! It's like enforced short stroking.

This will be a production setup. It's actually RAID10 across 6 drives. If I striped all six drives I'm pretty sure I could get close to 1GB/sec! (Three drives at 450MB/sec works out at around 150MB/sec per drive, so six should land somewhere near 900MB/sec.)

Short Stroke RAID Research

The Short Stroke RAID takes advantage of the fact that data access from the outer edge of an HDD platter is faster than from the inner edge. There's a lot of chatter about it on the Googlenet.


When "Short Stroke RAIDing" we create RAID0 volumes that leverage a drive's outer-edge sweet spot before the performance drop-off.


This snap from HD Tune Pro clearly indicates the performance “sweet spot” for a 250GB WD drive. Looks like it’s about the first 90GB.






As far as I can tell there's not much more to it than that. I grabbed a couple of WD 250GB drives and ran them through several tests in different configurations. I consistently measured write speeds at around 150MB/s in all the configurations (I have a feeling Opus is reporting read speed).



So here are the Read results;


So, as you'd expect from the graph, one drive runs at 60MB/s and two drives striped give nearly double that (clearly there's some overhead).


The interesting thing to note is that the size of the striped volume made no difference to performance as long as the data was in the first 90GB. When we tested "The Rest" at the back end of the disk, things started slowing down, just like in the graph.



The thing is, some of the Short Stroking guides you'll find online imply you should only use the outer edge of your disk and never touch the rest. I believe it should be fine to use the rest of the disk as long as it's used for non-time-critical data that won't be accessed while you're using the high-performance section of your drive for productivity (i.e. gaming). A good candidate may be the pr0n folder.



So yeah it’s not a magic bullet. Simply put if you require fast access to data, keep it at the beginning of the drive.



You shouldn’t need to jump through hoops with third party RAID or hard disk management tools (all my testing was with software RAID in Windows).
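As an example, fencing off the sweet spot takes nothing fancier than diskpart, since new partitions get allocated from the front (outer edge) of the disk first. A sketch assuming the WD250 is disk 1 (size is in MB; format inside diskpart needs Vista or later):

rem fast.txt
select disk 1
create partition primary size=92160
assign letter=F
format fs=ntfs quick label=Fast

diskpart /s fast.txt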



I recommend putting in some effort to find out what the sweet spot of your drive is, then leverage that with intelligent partitioning. All this “Short Stroking” carry on is pretty much a load of wank!