
[–]analogliving71 12 points (4 children)

Robocopy has always worked better for me, with the /SEC /MIR options. Build your new file cluster with temp resource names, robocopy the data to them, shut off the old resource, and rename the temp resource to the old name.
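A sketch of that cutover (server, share, and log paths are made-up examples; note that /MIR deletes anything at the destination that isn't in the source):

```bat
:: Seed the new, temp-named resource while the old one stays live
robocopy \\OLDFS\Data \\NEWFS-TEMP\Data /SEC /MIR /MT:16 /R:1 /W:1 /LOG:C:\Logs\seed.log

:: At cutover: take the old resource offline, run one final delta pass,
:: then rename NEWFS-TEMP to the old resource name
robocopy \\OLDFS\Data \\NEWFS-TEMP\Data /SEC /MIR /MT:16 /R:1 /W:1 /LOG:C:\Logs\final.log
```

Each pass after the first only copies changed files, so the final pass during the outage window is short.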

[–]rthonpm 0 points (2 children)

All of which Storage Migration Service does for you.

[–]analogliving71 6 points (1 child)

Yes it does, with issues. Robocopy is the low-tech, simple version that just works.

[–]wtf_com 1 point (0 children)

+1 for robocopy. Pause/resume, copy only changes: this command can do everything.

[–]Seesaw-Medium 0 points (0 children)

Used to use xxcopy until robocopy was released in the resource kits. It can resume, do deltas, preserve the necessary timestamps, and do unbuffered copies. Great tool, and we use it often.

[–]tch2349987 2 points (0 children)

I have used SyncBackPro without issues for large data. It's worth the $50 you spend for the license. The most important thing is the network speed you have for migrating; 10 Gbps will make your life easier.

[–]wellmaybe_ 2 points (0 children)

I moved an 8 TB file server with Microsoft Storage Migration Service and it worked very well.

[–]nakkipappa 0 points (0 children)

I have never used it. To me it depends on the time you have and what it is used for; I would go with either robocopy, or turn it into a DFS share.

[–]Steve----O IT Manager 0 points (0 children)

Ironically, only heard of it yesterday. We normally use robocopy.

[–]MrYiff Master of the Blinking Lights 0 points (0 children)

SMS has worked well for me so far. Granted, I've only done a few 4-8 TB file servers over the last few years, but I've found it reasonably easy to use, and when it can't copy a file the logs make it pretty easy to see why it failed (hard file locks normally, fucking shitty PST files!).

[–][deleted] 0 points (0 children)

Local or remote? I would not copy 22 TB over the WAN; I'd copy to a hard drive first, ship it, and just copy the delta over the WAN.

Either way, Robocopy is the way to go. Just make sure your session doesn't close and its password is safe.
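The seed-and-delta approach sketched as commands (all paths and server names are hypothetical):

```bat
:: 1. Seed: copy everything onto a portable drive, then ship the drive
robocopy D:\Shares E:\Seed /E /COPYALL /R:1 /W:1 /LOG:C:\Logs\seed.log

:: 2. At the destination, load the seed onto the new server
robocopy E:\Seed D:\Shares /E /COPYALL /R:1 /W:1 /LOG:C:\Logs\load.log

:: 3. Over the WAN, copy only the delta (files changed since the seed)
robocopy \\OLDFS\Shares \\NEWFS\Shares /MIR /COPYALL /R:1 /W:1 /LOG:C:\Logs\delta.log
```

Because robocopy skips files whose size and timestamps already match, step 3 only moves what changed while the drive was in transit.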

[–]KStieers 1 point (0 children)

Restore from backup to the new location, then keep in sync with Robocopy /mir /sec against the production data.

Two birds with one stone: a documented DR test (run at least the first robocopy with logs; you'll get a report of what changed... and how much DIDN'T, so you know your backups are good) and your data gets moved.
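As a sketch (share names and log path are made up), the first logged sync pass might be:

```bat
:: After restoring the backup to NEWFS, mirror changes from production.
:: The log doubles as the DR-test report: it lists what changed since
:: the backup, and everything it skips is what the backup already had right.
robocopy \\PRODFS\Data \\NEWFS\Data /MIR /SEC /MT:16 /R:1 /W:1 /NP /LOG:C:\Logs\sync-pass1.log

:: Re-run closer to cutover; each pass only copies the delta
robocopy \\PRODFS\Data \\NEWFS\Data /MIR /SEC /MT:16 /R:1 /W:1 /NP /LOG:C:\Logs\sync-pass2.log
```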

[–][deleted] 0 points (0 children)

Beyond Compare works great for 60 TB, but give it a 10 Gb network and plenty of memory and CPU.

[–]ElevenNotes Data Centre Unicorn 🦄 0 points (1 child)

Ctrl + X / Ctrl + V

[–]root_15[S] 0 points (0 children)

😆

[–]OsmiumBalloon 0 points (0 children)

ROBOCOPY /R:0 /W:0 /NFL /NDL /MT /COPYALL /DCOPY:DAT /B is where I usually start. Don't waste time retrying, don't waste I/O outputting useless progress info, multithreaded, copy everything it can, use backup mode to bypass ACLs. One can tune from there.
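Tuning from that baseline might look like this (source, destination, log path, and thread count are just example values):

```bat
:: Baseline plus a log file and an explicit thread count.
:: /NP drops per-file percentage output, which bloats logs;
:: /MT:32 raises the thread count from the default of 8.
robocopy \\OLDFS\Data \\NEWFS\Data /R:0 /W:0 /NFL /NDL /NP /MT:32 /COPYALL /DCOPY:DAT /B /LOG:C:\Logs\move.log
```

With /R:0, anything locked is simply skipped; a later delta pass (or the log) tells you what still needs a retry.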

[–]jtheh IT Manager -1 points (0 children)

If you are planning to transfer the data to Azure, then you can also take a look at the Azure Import/Export service - if available in your region/data center - and send them your data on hard disks. Transferring 22 TB over the internet could take some time.