Contemplating File Sync/Sharing Services

I wrote a few months ago about what a difference the cloud has made in how I conduct business.  I rarely buy software for my new company anymore; instead, I pay for some type of cloud-delivered service.

One of those services that I use (and love) is Dropbox.  It is an incredibly easy replacement for a file server when you need to share 10s to 100s of GB of files between multiple users.  However, I definitely have some security concerns about it, and not just since the big snafu a few months ago.

One of my issues with Dropbox is that they can access my data.  My data is encrypted in transit, but Dropbox can still access it because they have my password.  The same appears to be true of Syncplicity and SugarSync.  Why do I think that?  Because they have a "reset my password" link.  How could the encryption be tied to my password if they can change that password without a problem?  Compare this, for example, to Wuala's answer and Boxcryptor's answer to the question about a lost password.
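The "reset my password" test comes down to where the encryption key lives.  A quick sketch of the reasoning (this is illustrative, not how any of these specific services is actually implemented): if the key is derived from my password on my machine, then a new password yields a new key, and a server-side reset would make my existing data unreadable.  Since these services can reset passwords without losing my data, the key they encrypt with can't depend solely on my password.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # Client-side key derivation: the key exists only where the password is typed.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)
old_key = derive_key("hunter2", salt)   # key under the original password
new_key = derive_key("hunter3", salt)   # key after a password "reset"

# A different password yields a different key, so data encrypted under
# old_key would be unreadable after a server-side reset -- unless the
# server kept a copy of old_key, or never encrypted with a
# password-derived key in the first place.
assert old_key != new_key
assert old_key == derive_key("hunter2", salt)  # same password, same key
```

That's why a painless password-reset link is a reasonable hint that the provider, not just you, can get at the plaintext.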

Even with Wuala, which says it doesn't know my password, how do they share encrypted data with users I specify?  If all data is encrypted and decrypted locally, how does the person with whom I'm sharing files decrypt them?  I'm curious.
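The usual answer to this puzzle is envelope encryption: the file is encrypted once under a random per-file key, and that small file key is then wrapped separately for each person who should have access.  The server stores only ciphertext and wrapped keys.  Here's a toy sketch of the structure; to stay dependency-free it uses a deliberately insecure XOR "cipher" and pre-shared symmetric keys, whereas a real service like Wuala presumably wraps the file key with each recipient's public key so the owner never needs anyone's secret.

```python
import hashlib
import os
from itertools import count

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (NOT secure -- illustration only): XOR the data
    # with a SHA-256-based keystream, so encrypting and decrypting are
    # the same operation.
    stream = b""
    for counter in count():
        if len(stream) >= len(data):
            break
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. The owner encrypts the file once, under a random per-file key.
file_key = os.urandom(32)
ciphertext = keystream_xor(file_key, b"quarterly-report.xlsx contents")

# 2. The file key is wrapped separately for each recipient, using a key
#    only that recipient holds.  The server stores the ciphertext plus
#    the wrapped keys, and can decrypt neither.
alice_key, bob_key = os.urandom(32), os.urandom(32)
wrapped = {
    "alice": keystream_xor(alice_key, file_key),
    "bob": keystream_xor(bob_key, file_key),
}

# 3. Bob unwraps the file key with his own key, then decrypts locally.
bobs_file_key = keystream_xor(bob_key, wrapped["bob"])
plaintext = keystream_xor(bobs_file_key, ciphertext)
assert plaintext == b"quarterly-report.xlsx contents"
```

The nice property is that sharing with one more person means encrypting 32 bytes, not re-encrypting the whole file, and revoking access means re-keying the file once rather than touching every copy.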

The last two listed are open source alternatives.  They're too limited in functionality for me, but I thought I'd throw them in anyway.

What do you think about all this?  Anyone I left out that I shouldn't have?


Veeam excites and frustrates me

Veeam is one of the most innovative backup and recovery tools designed specifically for VMware and Hyper-V.  They've also done a really good job of marketing this tool.  In a matter of a couple of years, they've gone from "Who's Veeam?" to the mindshare leader in this space.  I'm not sure what their actual market share is, and there are several other tools that are also making a name for themselves, but it's hard to think of a product that has more successfully captured the hearts and minds of its target market than Veeam.

They announced their vPower functionality at Tech Field Day in Seattle quite some time ago.  To summarize, this is the ability to run a VM from their backup image of that VM.  This opens up all sorts of different levels of functionality, such as instant VM recovery and automated, full testing of the viability of your backups of a given VM.

This is why I looked forward to their presentation at Tech Field Day 7.  At first, I was not disappointed.  They announced support for Hyper-V.  Yay!  They also announced further refinement of their vPower functionality.  (They even gave me credit in one of the PowerPoint slides for a suggestion I made that they acted on.)  They also hinted at a new version that is almost out, but wouldn't really talk about it or show it.  We definitely were not allowed to ask questions about it.  Note to future Tech Field Day presenters: I can't think of a better way to frustrate bloggers than to tell them about a new version that you're not going to talk about, show, or take questions on.  To make matters worse, they kept hinting about the new version throughout the presentation, but then kept telling us we couldn't ask about it.

Where the wheels fell off the truck for me was when I brought up the fact that most Veeam customers use Backup Exec to back up Veeam.  Another way to say that is that Veeam can't back itself up.  This resulted in a 20-minute conversation during which I got quite riled up, while Doug Hazelman kept looking at me like he had no idea why I had such an issue with this.  You can watch the whole conversation here.  It's from [1:24] to [1:45].  He occasionally snickered, as if to say that the whole point of the discussion was ludicrous.  At one point he actually said the statement that they can't back themselves up was "stupid."  Yet he confirmed that the most common practice for Veeam customers was to use Backup Exec to back up Veeam.

Veeam data is stored in two places: the SQL database and the backup jobs directory.  There is no way within the product to make a special backup of the SQL catalog so that it can be easily restored without creating a catch-22 situation.  For example, one suggestion was to use one Veeam server to back up another Veeam server.  That creates a catch-22 of having to restore one server before you can restore the other.  What if both servers are gone?  Doug hinted that losing the SQL database just isn't that big of a deal because it's just job configuration information.  You could just redo it if you lost it.  Is this really a backup company talking to me?

The second part of their data is the backup jobs history.  It has no catalog; everything that Veeam needs to know about the backups is stored with the backups.  The question is: what happens if one or more of those files gets corrupted?  What happens if some well-meaning admin looking for space deletes some jobs?  What happens if a rogue administrator deletes all of them?  As far as I could tell, Veeam has no way of recovering from this situation — which is why most Veeam customers use Backup Exec to back up Veeam.

Doug seemed to think that I was pushing for tape support.  In a way, I was.  Tape is still the least expensive way to get data offsite.  In many organizations, it's the only way to get data offsite.  They just have too much data to be able to afford a pipe big enough to replicate their backups — even if they have been deduplicated.  That issue aside, I wasn't pushing so much for tape as I was for a method of creating a backup of my backup.  Files stored in filesystems get corrupted.  It just happened to me today.  For no apparent reason, a file whose modification time hadn't changed was telling me that it couldn't be copied.  It was a movie file on an iMac.  I can play the movie, but I can't copy the file.  Weird.  That's what files on filesystems do — and that's why we back them up.  But the guys at Veeam just don't seem to get this, and that's why they frustrate me.

On one hand, I think the idea of a backup that can test itself in a totally automated fashion is completely awesome, and a lot of other areas of functionality are very impressive as well.  On the other hand, their not understanding the issue I do have (and therefore not addressing it) is really frustrating.  I hope we can work this out eventually, but they'll first have to stop calling what I'm saying "stupid." 😉


Dell going for the big time

Dell is going to build a unified storage system that has everything you could ever want in a mid-tier or enterprise-tier storage system.  Or so said the presenters at Tech Field Day 7.  Only time will tell.

I was one of several bloggers who visited Dell's headquarters in Round Rock, TX (a short drive from Austin) last month, just prior to VMworld.  (That's my excuse for this blog entry being so late, BTW.)  Dell apparently paid for a double-sponsorship from Stephen Foskett of Gestalt IT so that they could talk to us for four hours (instead of the usual two).  They had a lot to talk about.

They made sure we knew about all of the major acquisitions that Dell has made over the past few years:

  • EqualLogic – A scalable iSCSI grid storage array
  • Exanet – A scalable NAS system
  • Perot Systems – Professional Services
  • Ocarina – Deduplication and Compression
  • Compellent – Midrange storage arrays
  • RNA Networks – Cloud memory
  • Scalent Systems – Datacenter management software

I believe it was Carter George who explained all this, and explained how Dell was going to integrate these technologies faster and better than any other storage company has ever done.  The way he described it, it was as if Dell would come out with a totally unified, scalable storage system that supported iSCSI, NAS, dedupe, and compression, could meet the needs of the mid-market and enterprise market, and would be easy to manage in a datacenter — and be cloud ready.  And they were going to do all of this reeeeal soon.  He didn't give dates, but the way he was talking, it sounded like 2012.

Dell, you see, "is starting from scratch."  Those other vendors weren't.  The problem is that I'm not sure how having several products from several different companies, all of which already have existing customers, is "starting from scratch."

The way this usually goes is that each company becomes a faction in a big project, each wanting to put its technology into the finished product.  Each of them thinks its technology is what's going to make things better.  I have one product in mind from the past that was pieced together from technologies acquired from a bunch of different companies.  The result was three levels of abstraction (one from each company) before the data ever got to disk.  The result was also a piece of crap.

Maybe Dell will be different.  I wish them the best of luck.  Good luck tearing down the fiefdoms without damaging egos.  Good luck getting people to speak their minds when it's really important — when the emperor appears to be getting undressed.  My personal experience with trying to do that with Dell did not go very well (to put it mildly), so I hope things have changed.

I also have concerns about how Dell salespeople will evolve to sell products that require upfront sales engineering to get the order right.  My personal experience with their sales teams so far suggests that they've got as much work to do here as they do with all their products I mentioned earlier.

I have been exposed to EqualLogic, Compellent, and Ocarina before, and have heard nothing but good things about them from the field.  So I think Dell has chosen some really solid building blocks to build a real storage company with.  I just don't think it's going to be as easy as the presenters at Tech Field Day made it sound.  I'll be more than happy to be wrong, though.
