Quote:
Originally Posted by jggimi
... so that the file can be encrypted/decrypted in flight instead of being staged through a cipher utility like openssl(1). But that's similar to sftp(1) or scp(1) in network throughput.
It raises an interesting point. It can be frustrating to wait for a file to cross the network more slowly through an encrypted tunnel, but OTOH if you use netcat (or another unencrypted transport) the file has to be encrypted before transit and decrypted afterwards. That increases the total time to do the job, which may be a more important metric than network transit time alone. Which way is "better" will depend on many factors, so I doubt there's a simple answer.
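To make the "in flight" idea concrete, here's a rough sketch of piping openssl(1) into nc so encryption happens as the bytes stream, with no staging file. The host name, port, and passphrase are placeholders I made up, and the cipher round trip is verified locally since the network halves can't run in one shell:

```shell
# Sketch: encrypt in flight instead of staging an encrypted copy on disk.
# HOST, the port, and the passphrase are placeholders, not from the thread.
#
# Receiver (started first), decrypting as the bytes arrive:
#   nc -l 1234 | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:secret > largefile
# Sender:
#   openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret < largefile | nc HOST 1234

# The cipher pipeline itself can be checked locally without the network:
printf 'some file contents\n' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret < plain.txt \
    | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:secret > roundtrip.txt
cmp -s plain.txt roundtrip.txt && roundtrip=ok || roundtrip=fail
rm -f plain.txt roundtrip.txt
```

Note that -pbkdf2 needs a reasonably recent openssl; older versions (and some LibreSSL builds) may want the option dropped.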
I had a couple of other general thoughts for jonsec:
A) Writing largefile.enc.gz suggests that the encryption is done first, followed by compression. I suspect that isn't the best order. The encrypted file should look essentially random and so probably won't compress well. If the compression is done first, there's a chance of getting good compression (provided the material isn't already in a compressed form, like an MPG), which would lead to largefile.gz.enc. A smaller file gives better network transit times (and less disk space usage, etc.).
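You can see the effect of the ordering with a quick experiment. This is just an illustration with a made-up compressible file and a throwaway passphrase; the size variables at the end exist only so the difference can be inspected:

```shell
# Compare the two orderings on compressible input.
# The passphrase is a throwaway placeholder, not a recommendation.
seq 1 20000 > sample.txt                 # highly compressible text

# Order implied by largefile.enc.gz: encrypt first, then compress.
openssl enc -aes-256-cbc -pbkdf2 -pass pass:x < sample.txt | gzip -c > sample.enc.gz

# Suggested order, largefile.gz.enc: compress first, then encrypt.
gzip -c sample.txt | openssl enc -aes-256-cbc -pbkdf2 -pass pass:x > sample.gz.enc

enc_gz_size=$(wc -c < sample.enc.gz)     # ciphertext barely compresses
gz_enc_size=$(wc -c < sample.gz.enc)     # plaintext compressed, then encrypted
echo "enc.gz: $enc_gz_size bytes, gz.enc: $gz_enc_size bytes"
rm -f sample.txt sample.enc.gz sample.gz.enc
```

On a run here the gz.enc file came out a fraction of the size of enc.gz, since gzip can't find any redundancy in the pseudorandom ciphertext.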
B) A given port, such as 1234, might already be in use by another process on the target machine (probably unlikely, depending on the circumstances). If you want to automate this to some extent (e.g., use a shell script to set up the receiving nc process), you might want to use a small range of ports, e.g., 1234-1238, and take the first free one, or use a random port in a larger range. This would affect what you do, or can do, with pf, of course.
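A minimal sketch of the "first free port in a range" idea, probing with nc -z (the port numbers are just the example range from above; a port that nothing answers on is treated as free):

```shell
# Return the first port in 1234-1238 that nothing is listening on.
# Probes 127.0.0.1 with nc -z; adjust the address for a remote check.
first_free_port() {
    for p in 1234 1235 1236 1237 1238; do
        if ! nc -z 127.0.0.1 "$p" 2>/dev/null; then
            echo "$p"
            return 0
        fi
    done
    return 1    # every port in the range was busy
}

PORT=$(first_free_port) || { echo "no free port in range" >&2; exit 1; }
echo "will listen on $PORT"
# nc -l "$PORT" > largefile.gz.enc
```

The sender-side script would then need to learn which port was chosen, which is one more reason a fixed port (and a matching pf rule) is simpler when you can get away with it.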