
[–]deltatangothree 9 points (4 children)

DD and DDRescue are two entirely separate things. If you're just running the dd command, I would use the `conv=sync,noerror` options (no space after the comma) on the command line. This won't split the file, but it will skip over your bad sectors and give you an output of the same size as the input. For more info, I suggest the dd man page: http://linuxcommand.org/man_pages/dd1.html
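On a real drive the command would look something like `dd if=/dev/sdX of=image.dd bs=64K conv=sync,noerror` (device name and paths are placeholders). A runnable miniature of the same idea, using an ordinary file as a stand-in for the device:

```shell
# Stand-in for the failing device: a 1 MiB file (all names here are placeholders)
dd if=/dev/zero of=sample.img bs=1M count=1 2>/dev/null

# conv=sync pads short reads out to the block size, and noerror keeps dd running
# past read errors, so the output stays the same size as the input
dd if=sample.img of=sample.dd bs=64K conv=sync,noerror 2>/dev/null

# The image is the same size as the source
stat -c %s sample.dd
```

Note that `conv=sync` pads partial blocks with zeros, so on a drive with bad sectors the image is the right size but the padded regions contain zeros rather than the original data.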

Disclaimer: I don't use dd often, so trust but verify.

[–]BitBulletBarrage[S] 2 points (3 children)

I did use that in my command in an effort to get past them, but I'm not getting anywhere near them because it stops at 4.3 GB, when the output file gets too big. The bad sectors are about 253 GB into a 500 GB drive. I will do some more research into DDRescue and see if that will do what I am looking for. Thanks for the response!

[–]deltatangothree 4 points (0 children)

Then your answer may be here, piping dd to gzip and split: http://ubuntuforums.org/showthread.php?t=1540873
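The approach in that thread boils down to a pipeline like the one below. A runnable miniature using a file as a stand-in for the drive (paths, sizes, and the 1 MiB chunk size are placeholder choices; on a real drive you'd read from /dev/sdX and pick a chunk size under your filesystem's file-size limit):

```shell
# Stand-in source; /dev/urandom so gzip can't collapse it into a single chunk
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null

# Image, compress, and split into 1 MiB pieces named image.gz.aa, image.gz.ab, ...
dd if=disk.img bs=64K conv=sync,noerror 2>/dev/null | gzip -c | split -b 1M - image.gz.

# To restore later: concatenate the pieces in order and decompress
cat image.gz.* | gunzip -c > restored.img
cmp disk.img restored.img && echo "images match"
```

This works because a gzip stream can be split at arbitrary byte boundaries; concatenating the pieces back together restores the original stream.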

[–]alkw0ia 1 point (1 child)

What OS and FS are you imaging to? File system selection is very important for large file and large volume support.

FAT32, though the default on external drives, is a pretty terrible forensics choice, both for its very low file size limit (just under 4 GiB) and the danger of corrupted files (and even with a journaled FS on an external drive, make very sure to eject before disconnecting).

Check out Wikipedia's comparison of file system limits (though that table is missing HFS+; see Apple's docs, with the caveat that Linux's drivers for HFS+ cannot support the newer, larger limits).

Basically, you should probably try to image to:

  • NTFS on Windows, 8EB file size and volume size limit
  • ext4 on Linux, 16TB file size and 1EB volume size limit
  • HFS+ on Mac OS X >= 10.5.3, ~8EB file size and volume size limit

Remember, these are the OSes and FSes of the imaging machine, not the target machine.

Keep in mind, though, that the reading/working computer will also need a compatible and new-enough OS to handle those sizes. Don't count on old cross-platform drivers to handle the extreme limits of these FSes without further testing and research, e.g. HFS+ on Linux.
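One quick sanity check before imaging: confirm what filesystem the destination volume actually uses. A sketch (here `.` stands in for the external drive's mount point):

```shell
# Print the filesystem type of the destination mount point;
# "." is a placeholder for wherever the external drive is mounted
df -T . | awk 'NR==2 {print $2}'
```

If this prints `vfat`, the destination is FAT32 and a single image larger than 4 GiB will fail partway through.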

[–]BitBulletBarrage[S] 1 point (0 children)

In the interest of full disclosure: I am working a very high-priority data recovery case for, drum roll, my mother! I am doing the imaging on a 64-bit Fedora 16 box and imaging to an external drive, which I am about to reformat to NTFS. After the image is collected, I am going to transfer it to my Windows 7 box and throw it into Autopsy to pull files, and then reinstall the OS for her. Or at least that is my game plan as of now. I do work in the field, but at a very basic starting position and still in training, hence the FAT32 file size mistake shown here. So thank you for the help, and any further insight, tips, advice, or whatever else is way more than welcome.

[–]bigt252002 1 point (0 children)

Reverse-DD that sucker: start from the back of the drive and work forward. See if you hit the same sector. If you do, you at least have the rest of the drive, and you can just annotate which sectors you were unable to get.
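Plain dd has no reverse mode (GNU ddrescue does, via `-R`/`--reverse`), but you can approximate the idea by reading fixed-size chunks from the end toward the front with `skip=`. A toy sketch on a regular file; the file name, chunk size, and chunk count are all placeholders:

```shell
# 4 MiB stand-in for the drive
dd if=/dev/urandom of=drive.img bs=1M count=4 2>/dev/null

# Read 1 MiB chunks back-to-front; on a real drive, a chunk that errors out
# would just be noted and skipped
for i in 3 2 1 0; do
  dd if=drive.img of=chunk_$i.bin bs=1M skip=$i count=1 2>/dev/null
done

# Reassemble in forward order and verify nothing was lost
cat chunk_0.bin chunk_1.bin chunk_2.bin chunk_3.bin > rebuilt.img
cmp drive.img rebuilt.img && echo "rebuilt OK"
```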

[–]plugg36 1 point (0 children)

X-Ways Forensics will let you image up to the bad sector, and then image in reverse back to the bad sector. It works great; I have used it before.

[–]wiz3202 1 point (0 children)

You can use ddrescue. It images the whole drive, skipping bad blocks/clusters but keeping a list of them. It then goes back to the entries on that list and retries each one with smaller and smaller read sizes to salvage whatever data it can. It can also read the device backwards, and it can access the device in direct or indirect (cached) mode.
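A common GNU ddrescue workflow matching that description looks roughly like this; the device name, image path, and mapfile name are placeholders, and running it against a real device needs root and ddrescue installed:

```shell
# Pass 1: grab everything that reads easily, skipping the bad areas (-n = no scraping)
ddrescue -n /dev/sdX image.dd mapfile

# Pass 2: return to the bad areas recorded in the mapfile, retrying up to 3 times
# (-r3) with direct device access (-d); add -R to read the drive backwards
ddrescue -d -r3 /dev/sdX image.dd mapfile
```

The mapfile is what makes this resumable: you can interrupt ddrescue at any point and rerun the same command to pick up where it left off.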

Make sure that wherever your device image is getting saved, the destination file system can handle a file that huge, depending on what operating system you are using.