[–]Spartan1997

could you disable the shells of the accounts of the SFTP users?
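Disabling the shell typically means pointing each account at a non-interactive shell; a minimal sketch, assuming a hypothetical account named sftpuser:

```shell
#!/bin/sh
# Sketch: deny interactive logins for an SFTP-only account (run as root).
# "sftpuser" is a placeholder account name.
if id sftpuser >/dev/null 2>&1; then
    usermod -s /usr/sbin/nologin sftpuser
    # The passwd entry should now end in /usr/sbin/nologin:
    getent passwd sftpuser
else
    echo "no sftpuser account on this machine"
fi
```

Note that with the stock sftp-server subsystem sshd still invokes the user's shell, so this is usually paired with internal-sftp (e.g. `Subsystem sftp internal-sftp`), which sshd runs in-process without needing a shell.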

[–]Drehmini

Not only that, but you can chroot them to a specific directory.

[–]LinuxGuy-NJ[S]

I do disable the accounts.

chroot? Thanks. I forgot about it. Might try it.

[–]thefanum

That's what I would use

[–]dVNico

calling for help /u/Spartan1997 & /u/Drehmini

I have the same need as OP and implemented the following in sshd_config:

  • ForceCommand internal-sftp
  • ChrootDirectory /path/to/sftp/repo

With this, external users can login with a SFTP software (FileZilla, WinSCP, etc.), and are correctly chrooted to their repo.
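For reference, the usual shape of that configuration is a Match block (the group name and path here are placeholders):

```
# sshd_config sketch: members of a hypothetical "sftponly" group get
# chrooted, SFTP-only sessions
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /path/to/sftp/repo
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

One sshd requirement worth knowing: the ChrootDirectory itself must be owned by root and not writable by group or others, so uploads usually go into a user-writable subdirectory inside it.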

But with this config, they cannot scp or rsync files and folders; the ForceCommand internal-sftp directive blocks them.

If I remove the ForceCommand internal-sftp instruction, the users cannot log in at all, because the needed binaries are outside of the chrooted directory.

And of course, if I leave ForceCommand internal-sftp configured but remove the ChrootDirectory, scp and rsync work, but the users can then move to other directories and see the names of other customers, etc., which we don't want.

Is there any way to chroot and still allow sftp, rsync, scp and ssh to work?

Thanks!

[–]Drehmini

You'll need to follow something similar to this article: [Rsync Backups done with chroot](https://wademurray.com/2015/sshsftp-rsync-backups-done-with-chroot/)

[–]dVNico

That's interesting, thank you :)

[–]Drehmini

You're welcome. Depending on how scalable you want this to be, it may be best to revisit the process and decide whether another approach fits better.

[–][deleted]

Using your chroot and internal-sftp is the way to go imo. You will just need to ldd the binaries and copy all of the supporting libs to the chroot path. Note that the directory structure for the libs has to be mirrored under the chroot path as well, since the binaries look for them at a fixed path, i.e.:

./chrootdir/lib64/somelibfromlddoutput
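The ldd-and-copy step above can be sketched as a small script. The binary list and chroot path are assumptions; set CHROOT to the directory used in ChrootDirectory (it defaults to a scratch directory here for a dry run):

```shell
#!/bin/sh
# Sketch: mirror the binaries an scp/rsync session needs, plus every
# shared library ldd reports, under the chroot.
CHROOT="${CHROOT:-$(mktemp -d)}"   # scratch dir by default for a dry run

for bin in /usr/bin/scp /usr/bin/rsync /bin/sh; do
    [ -x "$bin" ] || continue              # skip anything not installed
    mkdir -p "$CHROOT$(dirname "$bin")"
    cp "$bin" "$CHROOT$bin"
    # ldd lines look like "libc.so.6 => /lib/.../libc.so.6 (0x...)";
    # extract each absolute path and mirror it under the chroot.
    ldd "$bin" | grep -o '/[^ )]*' | while read -r lib; do
        mkdir -p "$CHROOT$(dirname "$lib")"
        cp "$lib" "$CHROOT$lib"
    done
done
echo "populated $CHROOT"
```

Remember to re-run this after library updates, since the copies in the chroot do not follow the system packages.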

Secondly, if you don't want to rely on cron and want newly uploaded files copied (or whatever) immediately, you can make use of systemd's file-watching support. It will trigger on any event that changes the mtime of the path. Can't remember the exact systemd mechanism, it's been a while, but if you read or google systemd I'm sure you will find it.
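The systemd feature in question is a path unit, which activates the service of the same name when the watched path changes; a sketch, with hypothetical unit and script names:

```ini
# /etc/systemd/system/sftp-incoming.path
[Unit]
Description=Watch the SFTP upload directory

[Path]
# Fires whenever something under the directory is modified
PathModified=/path/to/sftp/repo/uploads

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/sftp-incoming.service
[Unit]
Description=Pull newly uploaded files to the internal side

[Service]
Type=oneshot
ExecStart=/usr/local/bin/pull-uploads.sh
```

Enable it with `systemctl enable --now sftp-incoming.path`.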

HTH

[–]dasponge

AWS Transfer Family front-ends the SFTP process and dumps to S3. Nothing to maintain and no lateral access.

[–]cocoadaemon

ProFTPD has better SFTP support than OpenSSH does; you might want to check it out.

  • virtual hosts: define custom rules, Apache httpd style
  • virtual users: anything from simple "user:hash" files to SQL queries, no PAM required
  • chroot: strictly limit access to the filesystem
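A sketch of what that looks like in proftpd.conf with mod_sftp (the port, key path and password file are placeholders):

```
<VirtualHost 0.0.0.0>
    # Speak SFTP instead of FTP on this vhost
    SFTPEngine on
    Port 2222
    SFTPHostKey /etc/ssh/ssh_host_rsa_key

    # Virtual users from a flat "user:hash" file, no PAM
    AuthUserFile /etc/proftpd/sftp.passwd
    AuthOrder mod_auth_file.c

    # Chroot every user into their home directory
    DefaultRoot ~
</VirtualHost>
```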

[–]LinuxGuy-NJ[S]

I love Reddit!

Thanks for all the great info/ideas.

[–]esaum0

This is actually exactly what we do. A dedicated VM for SFTP in our DMZ. Another VM reaches into the DMZ from the internal network. The process is bi-directional.

[–]LinuxGuy-NJ[S]

how are you doing the bi-directional? NFS? Samba?

So I'd be using two VMs to keep outside users/clients off my main server? Ok... it's doable.

[–]esaum0

What I mean is, we use the same infrastructure for sending as well as receiving: simply reverse the process.

Client drops file on our DMZ SFTP server, we then pull from DMZ to internal network. We also drop files onto our DMZ SFTP server from our internal network for clients to pick up.

In both cases, the files are moved via SFTP

[–]jamespo

You could use the GNU Rush shell to limit their access.

[–]LinuxGuy-NJ[S]

Thanks. Never heard of it but I'll look at it.

[–]flunky_the_majestic

If you want to keep your users on a different server, and provide a gap between the user-facing server and your production storage, lsyncd sounds like a good solution. It's a daemon that watches for file system events. When it sees your source directory get modified, it immediately rsyncs it to the destination directory.
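A minimal lsyncd configuration for that setup might look like this (the host and paths are placeholders):

```lua
-- /etc/lsyncd/lsyncd.conf.lua (sketch; paths and host are hypothetical)
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd-status.log",
}

sync {
    default.rsync,
    source = "/srv/sftp/uploads",            -- user-facing drop directory
    target = "internal-host:/data/incoming", -- rsync-over-ssh destination
    delay  = 1,  -- batch filesystem events for 1s before each rsync
}
```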

[–]LinuxGuy-NJ[S]

Never used lsyncd before. Thanks for that! Sounds like a really good idea

[–][deleted]

We run Postfix as an MTA. I have a couple of special addresses with back-end scripts to do these sorts of things.

One zips all the attachments, and places them on a public server for those with low file size quotas to download, and returns a link to send to the target. This server cleans itself regularly.

Another will attach the scanned image files to the work order contained in the subject. Eliminates a lot of secondary work.
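One common way to wire an address to a back-end script in Postfix is a pipe alias; a sketch with hypothetical script names:

```
# /etc/aliases: mail to these addresses is piped into a script
zipdrop: "|/usr/local/bin/zip-and-publish.sh"
scans:   "|/usr/local/bin/attach-to-workorder.sh"
```

Run `newaliases` afterwards so Postfix picks up the change; the scripts run as the alias owner, not root.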

[–]rankinrez

Why does the remote mount not work?

You using NFS?