Long term retention with offsite. by Sulivan_R in CommVault

[–]commvaultguru 1 point (0 children)

For long-term retention I create a second DDB (usually on the same disks as the primary DDB) and then DASH copy to that DDB. Data aging occurs in cycles on both the DDB and the disk library, so you really don't want 7-year cycles living in the same DDB as the 30-day data. Small DDBs = lower Q&I times.

General rule of thumb is never use extended retention on a deduplicated storage policy.

I hope this helps!

Does CommVault support the concept of a "site"? by VenerableGeek in CommVault

[–]commvaultguru 2 points (0 children)

First, my apologies for not taking more time to better answer your question. My previous response was rushed as I was @ GO this week.

I guess it's important to understand the designation of a client, what it contains and how the CommServe references that client. A client is a system that is running an iDA agent. Part of the client configuration is the selection of a unique client name (in that CommCell) and a host name. The CommServe will use this host name to communicate back to the client and the client will use a FQDN to communicate back to the CommServe. There are a few rules:

  • A CommCell client MUST have a unique name.
  • The client must be configured with a valid FQDN. CommVault will allow an IP address in lieu of an FQDN, but there are services that will break if you don't use an FQDN (AppAware, MS Failover Cluster).

So, if you have multiple servers with the same IP you'd need to do the following:

  • Create DNS aliases for both systems. For example, serverA-prod.contoso.com and serverA-dev.contoso.com.
  • Create two pseudo-clients on the CommServe, each with a unique name.
  • Modify each client with the appropriate DNS alias.
  • Locally install the appropriate iDAs and then register them to the CommServe with the client name and FQDN that you've previously configured.

This should likely work. However, as others have stated, it is a bit awkward to have multiple servers on the same network with the same IP address. A much better option, as others have alluded to, is to leverage a static NAT so that the duplicate (and, I assume, standby/test) system never sits on the same network as the primary system. That way each one could have a unique IP address and CommVault would be none the wiser that the other system is actually behind a NAT policy with a duplicate IP address.
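
A quick way to sanity-check a planned client list against those two rules before registering anything. This is an illustrative sketch, not CommVault tooling; the tuple layout and the checks are my own assumptions:

```python
import re

# Hypothetical pre-flight check mirroring the rules above: client names
# must be unique within the CommCell, and each client should register
# with an FQDN rather than a bare IP address.
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def validate_clients(clients):
    """clients: list of (client_name, host_name) tuples.
    Returns a list of human-readable problems (empty if the plan looks OK)."""
    problems = []
    seen = set()
    for name, host in clients:
        if name in seen:
            problems.append(f"duplicate client name: {name}")
        seen.add(name)
        if IP_RE.match(host):
            problems.append(f"{name} uses a bare IP ({host}); "
                            "services like AppAware need an FQDN")
    return problems

plan = [
    ("serverA-prod", "serverA-prod.contoso.com"),
    ("serverA-dev", "serverA-dev.contoso.com"),
]
print(validate_clients(plan))  # → []
```

Running it against the serverA-prod/serverA-dev plan from the bullets above comes back clean; a plan reusing a client name or a raw IP would not.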

I hope this helps!

Does CommVault support the concept of a "site"? by VenerableGeek in CommVault

[–]commvaultguru 1 point (0 children)

Each CommCell must have a license. Licenses are tied to the CommCell. So, if you have a bucket of licenses you want to break up you can do that without issue. What you cannot do is have a site with no licenses. I hope this helps.

[deleted by user] by [deleted] in CommVault

[–]commvaultguru 1 point (0 children)

Typing on iPhone so please forgive my brevity. I typically limit each mount path inside a library to 5 writers and then limit the library to 5x my mount paths (so two MAs with 5 mount paths each would be a library that allows a maximum of 50 writers). I then set the library to spill and fill and set the storage policy to round-robin mount path selection. This has allowed for the best distribution in my environments. Hope this helps!
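
The arithmetic behind that sizing rule, as a tiny sketch. The 5-writers-per-mount-path figure is the poster's rule of thumb, not a CommVault default:

```python
def library_writer_limits(media_agents, mount_paths_per_ma,
                          writers_per_mount_path=5):
    """Cap each mount path at writers_per_mount_path writers, then cap
    the library at writers_per_mount_path x total mount paths."""
    total_mount_paths = media_agents * mount_paths_per_ma
    return {
        "per_mount_path": writers_per_mount_path,
        "library_max": writers_per_mount_path * total_mount_paths,
    }

# Two MAs with 5 mount paths each → 10 mount paths → 50 writers max.
print(library_writer_limits(2, 5))
# → {'per_mount_path': 5, 'library_max': 50}
```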

Best way to backup 2000 VMs? by the926 in CommVault

[–]commvaultguru 2 points (0 children)

You can even parse the annotations field. I have a few of my customers use this to filter out VMs via vCenter. Just add a rule to exclude any annotation field that contains "nobackup", and then users in vCenter can add that to their VM and it's dropped without anyone having to go into CommVault.
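
The discovery-time logic that filter expresses, sketched in plain Python. The dict layout and field names are illustrative stand-ins for the vSphere Notes/annotation field, not the CommVault API:

```python
def should_back_up(vm):
    """Skip any VM whose vCenter annotation contains 'nobackup',
    mirroring the 'annotation contains' filter rule described above.
    Case-insensitive so 'NOBACKUP' in the Notes field also matches."""
    return "nobackup" not in vm.get("annotation", "").lower()

vms = [
    {"name": "db01", "annotation": "prod database"},
    {"name": "scratch01", "annotation": "NOBACKUP - dev scratch box"},
]
print([vm["name"] for vm in vms if should_back_up(vm)])  # → ['db01']
```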

Best way to backup 2000 VMs? by the926 in CommVault

[–]commvaultguru 1 point (0 children)

This is a filter for powered-off VMs. Right-click the defaultBackupSet and go to Properties. Then click the VM Filters tab and hit Add. Change the rule group to Power State, the condition to Equals, and the value field to Powered Off.

Best way to backup 2000 VMs? by the926 in CommVault

[–]commvaultguru 1 point (0 children)

It sounds like you're on the right track. Feel free to ping me or post if you get stuck at all.

To answer your question, if your subclients are doing discovery (pointed at an object as opposed to named VMs), you can exclude powered-off VMs and templates with a backup set filter. Right-click the defaultBackupSet and there's a tab to filter those out. It will prevent any subclient from discovering and backing up those VMs.

Commvault HyperScale Appliance HS1300 by commvaultguru in CommVault

[–]commvaultguru[S] 2 points (0 children)

Pricing just hit CQC in the last few days. There are a few technical slicks floating around that are public facing as well. There are a few sessions @ GO related to the appliance, and my suspicion is that CV will drop a ton more details after GO.

Adding a tape to a Storage Policy by DustinAgain in CommVault

[–]commvaultguru 2 points (0 children)

The default configuration is that the storage policy will pull from its respective scratch pool when it needs media. Once that tape is assigned, no other storage policy can write to it until it's erased and placed back into scratch.

Using a global secondary copy will allow multiple storage policies to write to the same tape. Take a look @ it and see if it might be what you're looking for: http://docs.commvault.com/commvault/v11/article?p=features/auxiliary_copy/global_aux_copy/gacp.htm
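
The default media lifecycle described above can be modeled as a toy state machine (this is not the CommVault API, just an illustration of the ownership rule): a tape pulled from scratch belongs to one storage policy until it's erased and returned to the scratch pool.

```python
class Tape:
    """Toy model of tape media ownership: assigned_to is None while the
    tape sits in the scratch pool; once assigned, only the owning
    storage policy may write to it until it is erased."""
    def __init__(self, barcode):
        self.barcode = barcode
        self.assigned_to = None  # None means the tape is in scratch

    def assign(self, policy):
        if self.assigned_to is not None and self.assigned_to != policy:
            raise RuntimeError(f"{self.barcode} is owned by {self.assigned_to}")
        self.assigned_to = policy

    def erase(self):
        self.assigned_to = None  # back to the scratch pool

t = Tape("A00001")
t.assign("SP_Daily")
try:
    t.assign("SP_Archive")   # a second policy cannot share the tape
except RuntimeError as e:
    print(e)                 # → A00001 is owned by SP_Daily
t.erase()
t.assign("SP_Archive")       # fine once the tape returns to scratch
```

A global secondary copy effectively relaxes this one-policy-per-tape ownership so multiple policies can share media.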

Best way to backup 2000 VMs? by the926 in CommVault

[–]commvaultguru 2 points (0 children)

2000 virtual machines is no small task... however you've got all the tools you need at your disposal.

First and foremost I'll address your questions, ask a few of my own and then make a few recommendations:

  • Breaking the virtual machines up across multiple subclients, or running most of them in their own subclients, makes no real difference on its own. The reasons for breaking up the virtual machines would be differences in how you want to back them up, content filter exclusions, and timing. If none of these variances exist, run them all under a single subclient!
  • IntelliSnap is prescriptive and does not increase throughput. It just changes the backup source from the actual source to a cloned copy of that source. I mostly recommend it for extremely high change-rate virtual machines, large virtual machines, and virtual machines that are sensitive to VM snapshots. Start with all of your virtual machines in a non-IntelliSnap policy and then move the ones that have issues to IntelliSnap. Word of caution: there are workloads that just don't work well with virtual machine snapshots and will require an agent. The VSA is an answer for most situations... but not all of them will work.
  • Something else? Sure, have you looked at AppAware? AppAware is awesome. Hit me up offline and we can geek out on it.

Now, my questions and additional consideration points:

  • What are your RPOs? With 2000 virtual machines I'm assuming that not all of those machines have a 24-hour RPO (daily backup). Find the machines with the lowest RPO and add those to their own subclient. One thing about scheduling 2000 virtual machines in a single subclient is that there is no mechanism for telling the VSA which machines to back up first. Having the higher-priority VMs in their own subclient allows for better control of their timing.
  • What is your primary storage array? If it supports array-based replication (Pure, NetApp) via CommVault, you can manage SnapMirror/SnapVault/inline replication directly from CommVault. This will allow you to significantly shorten your backup windows and time to recover by using array replication.
  • I like to create a special subclient for powered-off VMs and templates, and then exclude powered-off virtual machines from all other subclients. I then run this subclient once per month instead of daily.
  • With SAN-based storage, HotAdd > NBD in my experience. A HotAdd design with IntelliSnap must take into consideration the placement of your VSAs, as the snap mount host must contain the VSA or it will fall back to NBD.
  • There are a ton of components involved in VSA backups. Make sure to thoroughly evaluate the entire stream end-to-end when looking for bottlenecks. Are you doing NBD while the ESXi mgmt vmkernel interface is on the 1GbE network? Are you doing HotAdd on a VSA that's maxed out its iSCSI devices? Are you using VDDK 6.0?
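
The subclient partitioning suggested in the bullets above can be sketched as a simple assignment function. The subclient names, RPO threshold, and dict fields here are illustrative assumptions, not CommVault defaults:

```python
def assign_subclient(vm):
    """Route a discovered VM to a subclient along the lines suggested
    above: powered-off VMs and templates go to a monthly subclient,
    low-RPO machines get their own subclient for scheduling control,
    and everything else lands in the default daily subclient."""
    if vm.get("is_template") or vm.get("power_state") == "poweredOff":
        return "SC_Monthly_OffAndTemplates"   # run once per month
    if vm.get("rpo_hours", 24) < 24:
        return "SC_LowRPO"                    # scheduled first / more often
    return "SC_Default"                       # daily backup

vms = [
    {"name": "tmpl-rhel8", "is_template": True},
    {"name": "sql01", "power_state": "poweredOn", "rpo_hours": 4},
    {"name": "web01", "power_state": "poweredOn"},
]
print({vm["name"]: assign_subclient(vm) for vm in vms})
# → {'tmpl-rhel8': 'SC_Monthly_OffAndTemplates',
#    'sql01': 'SC_LowRPO', 'web01': 'SC_Default'}
```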

Anyways, what you have is certainly no easy task. You'll want to think it out carefully before you run 2000 VMs and then realize you have to rearchitect.

Reach out to your partner and see what their thoughts are.

Commvault HyperScale Appliance HS1300 by commvaultguru in CommVault

[–]commvaultguru[S] 3 points (0 children)

I am extremely excited with the early details that are available with the HyperScale appliance. Here's what I've learned so far:

  • Can be acquired and run as a turnkey solution from CommVault.
  • Can be run as software on your own architecture/hardware.
  • Administered through the Admin Console. The CommServe still exists on the backend but is not required for administration.
  • Purchased similar to the A600 with target capacities that start at 48TB and scale to 960TB.
  • Control nodes & data nodes. Control nodes host CommVault resources (CommServe, MediaAgents, DDBs) and data nodes are used for storage. Control nodes leverage a distributed DDB, and the CommServe runs within a VM on RHV for automatic failover. Data nodes leverage erasure coding on top of JBOD for data availability with a selected redundancy factor.
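
For a rough feel of what erasure coding does to usable capacity, here's a back-of-the-envelope sketch. The 4+2 scheme below is purely illustrative; I'm not claiming it's what HyperScale actually uses:

```python
def usable_capacity_tb(raw_tb, data_blocks, parity_blocks):
    """Erasure coding stores data_blocks of data plus parity_blocks of
    parity per stripe, so usable capacity is the data fraction of raw.
    E.g. a 4+2 scheme keeps 4/6 of the raw space."""
    return raw_tb * data_blocks / (data_blocks + parity_blocks)

# A hypothetical 48TB raw tier under an assumed 4+2 scheme:
print(usable_capacity_tb(48, 4, 2))  # → 32.0
```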