Netex makes remote backups easier

Netex is the final company from Tech Field Day 5 that I’m going to blog about.  Their goal is to make it much easier to stream large amounts of data over TCP/IP.  Customers using their optimization software (which runs as a virtual machine) are able to fully utilize their WAN connections, instead of the 50% or less utilization typical of such connections.

The Hyper IP product does this by converting TCP streams to UDP streams, then ordering and compressing those streams, but frankly that’s about all I can explain.  Hey, I’m a backup guy, not a network guy.  What I can say is that the numbers and technology seemed to really impress the network guys.
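
Being a backup guy, I’ll still take a stab at the concept.  Here’s a toy Python sketch (my own illustration, definitely not Netex’s actual implementation) of the basic idea: compress chunks of a stream, tag each with a sequence number so they can travel as unordered UDP datagrams, and put them back in order on the far side:

```python
import random
import struct
import zlib

def frame_stream(data: bytes, chunk_size: int = 1400) -> list[bytes]:
    """Split a byte stream into compressed, sequence-numbered datagrams."""
    frames = []
    for seq, off in enumerate(range(0, len(data), chunk_size)):
        payload = zlib.compress(data[off:off + chunk_size])
        frames.append(struct.pack("!I", seq) + payload)  # 4-byte sequence header
    return frames

def reassemble(frames: list[bytes]) -> bytes:
    """Reorder datagrams by sequence number and decompress them."""
    ordered = sorted(frames, key=lambda f: struct.unpack("!I", f[:4])[0])
    return b"".join(zlib.decompress(f[4:]) for f in ordered)

original = b"backup data " * 1000
datagrams = frame_stream(original)
random.shuffle(datagrams)           # UDP gives no ordering guarantees
assert reassemble(datagrams) == original
```

The real product obviously also has to handle lost datagrams and congestion, which is where the hard (and patented, I’d guess) part lives.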

This product does not qualify as a full bandwidth optimization product, because it’s missing one feature that isn’t important to those who are doing backup/replication tasks.  But if what you’re doing is remote replication or backups of any kind over a WAN connection, it seems like they’re the low-cost, highly-functional alternative to those products.  They had plenty of stories of customers who already had much more expensive bandwidth optimization products opting to use their product for one project — only to end up using it for several projects.

At one time these guys only worked over leased lines, but now they work just fine over Internet connections.  Also, in case you’re wondering — unlike bandwidth optimization products — they work just fine with any deduplication product.

I’d recommend that any customer doing replication over a WAN look into Netex’s Hyper IP product.

Can Druva succeed at mobile backup?

At Tech Field Day 5, we learned about Druva (formerly Druvaa), which has developed a completely new mobile backup product.  Not sure if the world needed another one, I listened closely to how they were going to differentiate themselves.  First, I can say that they definitely understood the needs of the mobile backup market.  It’s very difficult to back up large amounts of data over very small, very unreliable connections while simultaneously remaining virtually invisible to the end user.  But at least they know that’s what they have to do to be successful.

They claim to do this using application-aware deduplication, where they understand 87 different data types, and dedupe them all in the optimum way for that application.  (The story reminded me very much of Ocarina’s story.)  They don’t start backing up the machine as soon as there is a connection; they wait a user-configurable time (default 15 mins) before starting up.
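
The application-aware part is Druva’s secret sauce, but the underlying dedupe mechanics (fingerprint each chunk, store any given chunk only once, keep a recipe of fingerprints per backup) can be sketched in a few lines of Python.  This is a generic fixed-size-chunk illustration, not Druva’s actual algorithm:

```python
import hashlib

def dedupe(data: bytes, store: dict, chunk_size: int = 4096) -> list[str]:
    """Split data into chunks; store each unique chunk once, keyed by its hash."""
    recipe = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:            # only new chunks consume storage
            store[fp] = chunk
        recipe.append(fp)              # the backup is just a list of fingerprints
    return recipe

def restore(recipe: list[str], store: dict) -> bytes:
    """Rebuild the original data from its fingerprint recipe."""
    return b"".join(store[fp] for fp in recipe)

store = {}
first = dedupe(b"A" * 8192 + b"B" * 4096, store)
second = dedupe(b"A" * 8192, store)    # second backup: every chunk already stored
assert restore(first, store) == b"A" * 8192 + b"B" * 4096
assert len(store) == 2                 # only two unique 4 KB chunks kept
```

Understanding 87 different data types presumably means choosing chunk boundaries and compression per format, which is where their claimed advantage over a generic approach like this would come from.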

They also automatically detect the type of connection you’re using, and alternate between smaller/bigger packet sizes, more/less compression, and fewer/more connections based on how they observe the connection behaving.
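
Here’s a hypothetical sketch of what that kind of adaptive tuning might look like.  The thresholds and parameter values are invented for illustration and are not Druva’s actual numbers:

```python
def tune_connection(rtt_ms: float, loss_rate: float) -> dict:
    """Pick transfer parameters from observed link behavior (illustrative thresholds)."""
    if loss_rate > 0.05 or rtt_ms > 300:
        # flaky hotel/mobile link: small packets, squeeze hard, one connection
        return {"packet_size": 512, "compression_level": 9, "connections": 1}
    if rtt_ms > 100:
        # decent WAN: middle-of-the-road settings
        return {"packet_size": 1400, "compression_level": 6, "connections": 2}
    # LAN-like link: big packets, light compression, parallel connections
    return {"packet_size": 8192, "compression_level": 1, "connections": 4}

# A laptop on bad hotel Wi-Fi vs. the same laptop back in the office:
hotel = tune_connection(rtt_ms=350, loss_rate=0.08)
office = tune_connection(rtt_ms=20, loss_rate=0.0)
assert hotel["connections"] == 1 and office["connections"] == 4
```

The real trick, of course, is measuring RTT and loss continuously and re-tuning on the fly without the user ever noticing.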

The client side of the application is 40 MB, which is a relatively small size these days.

There is definitely a market for this type of product.  There are also a number of “dead soldiers” in that market.  I hope that Druva is able to do exactly what they say and differentiate themselves enough to carve out their own niche.  One interesting fact is that they got 600 customers before they officially launched the product.  Perhaps they will be different!

Data Robotics is Dead: Long Live Drobo

Those who follow my blog may have noticed that I really dinged Drobo a few months ago when I found out that they only offered cross-shipping (AKA advanced replacement) for customers who had purchased a support contract.  This means that customers under warranty (but not under a support contract) were required to ship their units to Data Robotics for repair or replacement.  I also dinged them for not having 24×7 support for customers under contract.

I’m happy to announce that Drobo (the new name of Data Robotics) has changed both policies.  They offer cross-shipment/advanced replacement for anyone under warranty and offer 24×7 phone support for customers who have purchased a DroboCare contract.  In addition, DroboCare customers also get same day shipping and next-day arrival of replacement units.  They also now support 3 TB drives in their units.  In short, they fixed everything I complained about in my previous post.

They’re also coming out with some new products – 12-bay drive units, to be exact, with three GbE iSCSI ports, and they’re rack-mount ready.  This unit still ships with what they call the Drobo manual: the simple diagram behind the face plate that shows what the various lights mean.  It’s almost that simple.

In the very near future, the business line of products will support automatic tiering of frequently accessed blocks to faster storage, and they will support SSDs plugged into the same unit.  That should give them a significant performance advantage.

There is also a completely new Drobo Dashboard that is a significant advancement over current versions.  It manages multiple units, even giving a graphical display of all of their display lights, as if you were standing in front of them.

As cool as I think the Drobo units are, I do feel the need to point out that they don’t have a snapshot system yet.  This means that they still must be backed up using traditional backup systems.  They are very cool in a lot of ways, but I’d still love to see them support this feature at some future time.

Other than that minor complaint, Drobo makes a heck of a nice unit and I wish them luck expanding the line.

Impressions of Symantec from Tech Field Day

Who would’ve thought that I’d need to go to Symantec headquarters to learn something about NetBackup and Backup Exec?  But learn a few things I did when I visited Symantec’s headquarters last week while attending Stephen Foskett’s Tech Field Day.

I knew that the NetBackup team had been putting a lot of R&D resources into VMware backup, so it was no surprise that they had made a number of advancements in this area.  They, of course, fully support the vSphere API for Data Protection that replaces VMware Consolidated Backup (VCB).  Symantec does claim that NetBackup is the only product that offers granular file restores without forcing backups to be on a filesystem-style device.  NetBackup users can back up VMs to a filesystem device, tape device, or virtual tape device, and can still restore individual items from those backups.  Other products must either keep such backups on a filesystem-type device forever or de-stage tape or VTL backups to a filesystem before performing granular restores.

In NetBackup 7.1, they also added the concept of the VMware Intelligent Policy.  This policy allows you to select an ESX server and have all of its VMs automatically backed up.  The VMs that the policy will back up can be programmatically determined via a bunch of parameters, such as the name of the machine, the folder it is in, or the storage pool it is stored in.
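
Conceptually, such a policy is just a saved query against the VM inventory.  Here’s a made-up sketch (not NetBackup’s actual policy engine, and the field names are hypothetical) of filtering VMs by name pattern, folder, and datastore:

```python
import fnmatch

def select_vms(inventory, name_pattern="*", folder=None, datastore=None):
    """Return the VMs matching a policy's selection criteria."""
    matches = []
    for vm in inventory:
        if not fnmatch.fnmatch(vm["name"], name_pattern):
            continue                       # name doesn't match the glob
        if folder is not None and vm["folder"] != folder:
            continue
        if datastore is not None and vm["datastore"] != datastore:
            continue
        matches.append(vm)
    return matches

inventory = [
    {"name": "web01", "folder": "Prod", "datastore": "ds-fast"},
    {"name": "web02", "folder": "Prod", "datastore": "ds-slow"},
    {"name": "dev01", "folder": "Dev",  "datastore": "ds-fast"},
]
# "Back up every web* VM in the Prod folder" picks up new VMs automatically:
prod_web = select_vms(inventory, "web*", folder="Prod")
assert [v["name"] for v in prod_web] == ["web01", "web02"]
```

The appeal is that the query runs at backup time, so a VM created tomorrow that matches the criteria gets protected without anyone touching the policy.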

One other interesting piece of functionality that I had never heard of was something called NetBackup Air, which is the ability (using functionality in OST) to duplicate NetBackup images from one master server to another. Currently, this functionality is only available via the NetBackup appliances, but I wonder if it will eventually be enhanced to provide functionality similar to what TSM users can do. (They can easily export backups from one TSM server and import them to another TSM server.)

The Backup Exec folks also presented their product, and also gave a very detailed explanation of how they support VMware and Hyper-V. While certain parts of their functionality might not be as “cool” as how NetBackup works, it is important to understand the target market for Backup Exec. They are aimed solidly at the SMB market, not the enterprise market (where NetBackup is aimed). They are therefore focused mainly on simplicity and cost — not high-end functionality.  Having said that, they do have a fully integrated solution for VMware and Hyper-V, which should be compared to other products in their space.  It was obvious to me that their main goal was to have us understand that they want to be the one-stop-shop product for SMBs to use to back up both physical and virtual machines, as opposed to products that only do one or the other well.

There is one pricing decision that the Backup Exec team made that I do not agree with.  For years, Symantec customers were forced to buy individual licenses for each VM, and those customers were not given a free upgrade to the one-license-handles-all vSphere product when it came out.  They said they offered such an upgrade for a short period of time when they first came out with their VCB product, but have stopped it now.  My belief is that the previous VCB product was so bad (because of VMware’s design, not Symantec’s implementation) that this new product is really the first viable option Symantec customers have had to buying individual licenses for each VM.  That’s why I say that customers were forced to buy individual licenses, and Symantec made a lot of money on those licenses.  I think they should therefore allow customers to trade in a certain number of VM licenses for a single VMware license.  If they force them to pay the full price of the VMware product, that gives their customers a reason to check out the competition, and that’s the last thing Symantec wants right now.  Just sayin’.

During our time at Symantec headquarters, we also received a visit from Symantec CEO, Enrique Salem. His main message was that Symantec was committed to the backup, recovery, and archive markets–despite any rumors to the contrary. That’s good to hear.

Presenting at Tech Field Day

I attended Stephen Foskett’s Tech Field Day 5 last week in San Jose, CA.  Once again, there was an impressive array of vendors (both new and existing) that told us all about their products.  As I am usually in presenter mode, it was nice to sit back and just listen and watch for a change.  I picked up a number of interesting things to blog about, including one company I wasn’t following at all.  I also got to watch companies succeed and fail in front of such an interesting crowd.  Like The NetWorking Nerd, I thought I would give some suggestions to future presenters.

Unfortunately, this is mostly about what companies did wrong.  Each of the dos and don’ts below was violated by at least one vendor, but I’ll let them remain anonymous.

  1. We’re geeks, not marketing nerds.  Present to that.
    • Don’t fill your slides with a bunch of marketing mumbo jumbo.  If you’ve got 15 slides on what the problem is, present one of them.  Ask us if we’ve got it, then skip the next 14.  Unless we say we don’t understand the problem, of course.
    • We are not the usual techie or management drones you’re used to presenting to.  Part of what we want to do is to have fun while we’re there.  Companies that help us have fun get remembered.  See where I’m going here?
  2. Follow the session when you’re not presenting
    • We have a live video stream of what’s happening in the room before you get in there, as well as a twitter hashtag you can follow.  I’m pretty sure you can even visit it in person if you want to be a fly on the wall.  The point is to be familiar with the attendees and how they behave before you step on stage.  If you make a call back to a joke that happened before your company presented, you get serious brownie points from the audience.  (It’s the complete opposite of what happens if you say the word “Gartner,” BTW.)
    • The best example of this was that in the first session someone made a joke that it would be cool if the vendor was passing out bacon.  Then someone else said “or chocolate!”  The next session the vendor showed up with a plate of bacon and a handful of chocolate!  What a way to start out a presentation by showing that you’re participating in the overall event.
  3. Follow the session while you’re presenting
    • One vendor’s first presenter rambled on for 45 minutes without telling us much of anything about their company’s products.  Not every presenter is equally adept at measuring audience response, but anyone can follow a twitter hashtag or IRC feed.  If this company had been doing that, they would have seen a twitter message about an IRC session that was going on.  They should have then been following both.  Had they done that, they would have yanked the presenter off the stage, because what was being said about him on twitter and IRC was not helping the company one bit.  (For the record, once they actually got to “what our product does,” we were very interested.  But they spent 45 minutes wasting our time.)
  4. Leave some time for people to breathe (and talk about you!)
    • One vendor gave a lot of interesting content, but left no room for questions.  They felt the need to fill every minute of our time with presentations and/or to tell us absolutely everything about their product.  We never got the chance to ask them questions or talk about them to each other with them present.  Give us some time to do that!
    • To harp on the presenter from #3 above, I will say that something he said suggested he felt the need to fill the two hours.  He wasn’t sure how they were going to do that, so he decided to ramble for 45 minutes to fill the time.  Trust me; it would have been a lot better if they had quit 45 minutes early.  No one would have complained!  Instead he gave a completely rambling, unprepared talk that turned everyone off.  OK, enough about that guy.
  5. See if you can do some kind of cool giveaway.
    • Give attendees a discount on your product.  Give away something cool. It doesn’t have to be expensive.  Netex used an ice-chest full of beers to explain network congestion and then gave us the beers when it was over.  I don’t even drink beer and I thought that was cool.  (The people that drink beer probably thought it was cooler.)

I hope that helps.  Now let me blog about the products I found interesting.

EMC changes Mozy pricing with little notice

EMC significantly changed the pricing of their Mozy Home backup service last week.  They eliminated their unlimited offering and replaced it with two metered offerings.  The first offering is $5.99 a month ($1 more than the previous unlimited offering) for 50 GB and one computer, or $9.99 a month for up to 3 computers and 125 GB of data.  Customers that go over their allotted number of gigabytes pay $2 per month for another 20 GB. 
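
For anyone who wants to run the numbers on their own data, here’s a little calculator based on my reading of the announced pricing.  I’m assuming a single computer with more than 50 GB moves to the $9.99/125 GB plan (for small overages the 50 GB plan plus $2 blocks could be cheaper, but this keeps the sketch simple):

```python
import math

def mozy_monthly_cost(gb: float, computers: int = 1) -> float:
    """Monthly cost under the new metered plans (my reading of the pricing)."""
    if computers <= 1 and gb <= 50:
        base, included = 5.99, 50       # $5.99 plan: 1 computer, 50 GB
    else:
        base, included = 9.99, 125      # $9.99 plan: up to 3 computers, 125 GB
    overage_blocks = max(0, math.ceil((gb - included) / 20))
    return base + 2.00 * overage_blocks  # $2 per extra 20 GB block

print(round(mozy_monthly_cost(300, 3), 2))   # 300 GB comes to roughly $30/month
print(round(mozy_monthly_cost(2000), 2))     # 2 TB comes to roughly $200/month
```

As you can see, the pricing is fine for typical users but gets painful very fast for anyone storing terabytes.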

This immediately set off a firestorm of complaints, over 900 of which (as of this writing) appear in this Mozy Community forum thread.

Is it “wrong” for EMC to do this?

EMC has to do what they have to do to make a profit.  Yes, those who are currently backing up terabytes of data to Mozy would have to pay hundreds of dollars a month to stay there, and that seems like a ridiculous price jump.  But it also seems ridiculous to think that someone would store terabytes of your data and keep doing it for only $5/month!  So my first reaction to the news was that I wasn’t surprised by EMC’s actions.

The previous business model of Mozy is similar to several other “unlimited” business models.  Sell unlimited Internet access to thousands of people and hope that all of them don’t want to use it at once.  Sell hosted websites with unlimited bandwidth and hope that most customers don’t get anywhere close to using it.  The people in my office building have unlimited use of the water fountain, but we can’t all use it at the same time.  The “unlimited” concept works when you get the 80/20 rule right.  But sometimes you don’t, and things have to be adjusted.

I believe that Mozy’s new pricing is meant to drive the terabyte customers away.  They couldn’t possibly expect their terabyte customers to pay $200/month to store 2 TB when they could get the “same” for only $5/month.  There will be a mass exodus of those customers, which is exactly what I believe EMC wants.  Many will go to their competitors, and others (based on the posts in the forum) will buy local USB drives and back up to those.

Should they have done it with so little notice?

This is where I think EMC went wrong. When I did my first Mozy backup, it took me months to upload my 300 GB to them.  (That same amount would now cost me roughly $30/month.)  They need to give their larger customers a whole lot more than a few weeks to move.  EMC is forcing these home users to either pay hundreds of dollars per month or cancel their account and have no backups while they upload to their next provider.  This, in my opinion, is very uncool and will earn EMC a lot of negative brand equity.

Was this a smart move on EMC’s part?

Short version: I don’t think so.

Only EMC will know when the dust settles, but I’m not sure they thought about the ramifications of forcing their larger customers to leave.  People who have terabytes of data tend to be geeks like me.  (I’m a movie buff, and I now have over 6 TB of personal data at home.)  Geeks like me tend to have a bunch of people around us who ask us what we think.  EMC says that it’s 5% of their customers forcing them to change their prices.  Suppose each customer in that 5% has 10 friends they recommended Mozy to, and that these angry geeks now call everyone they recommended it to and tell them to move.  That 5% suddenly becomes 55%.  There could be a serious snowball effect and a significant revenue hit for Mozy.

But then again, what do I know?  The company that everyone seems in a hurry to move to (Carbonite) actually lost several thousand customers’ data.  Instead of falling on their sword, they’re actually suing their storage array vendor, as if it’s their fault!  Carbonite, a company whose entire purpose for existence is storing customers’ backups, lost their backups.  And they’re trying to pass the buck to their storage array vendor!  Why would anyone store their backups there?  And people are choosing them over Backblaze or Crashplan because they’ve… been… around… longer…

Like I said.  What do I know.

I still ultimately think this move (and the way it was executed) is not a smart one.
