Routine, scheduled, and automated file transfers between servers occur across a wide range of use cases in just about every organization. In this post, the first of a two-part series, we look at those use cases and the top five factors compelling enhanced server-to-server file transfer.
In the next blog post we will look at the technology requirements for improving server-to-server file transfer.
The use cases for automated server-to-server file transfers include:
- The movement of bulk/batch data from one application to another
- The deployment of web application updates from a staging system to production systems
- The transfer of end-of-day financial and/or inventory data from branch locations into systems hosted in the corporate data center
There are of course many additional uses in any organization. These are simply some of the more common and generic examples.
5 Factors Impacting Server-to-Server File Transfer
Critical business functions depend on these file transfers working consistently. A reliable, reusable framework for moving data between servers therefore supports two key IT objectives: maintaining a dependable data infrastructure and responding quickly to new business initiatives.
When server-to-server file transfers fail to support these objectives, it’s typically due to one or more of the following five factors:
- Size/Volume Growth – File sizes and file transfer volumes have grown to a level that’s challenging for existing systems to handle reliably and efficiently.
- Too many solutions – A variety of disparate file transfer technologies are implemented across the organization’s server-to-server file transfer use cases; there are too many different ways to do the same thing.
- Lack of file transfer governance – Aggregated auditing and reporting needs to be established for all sensitive or regulated data moving within the organization. See this blog post on file transfer governance.
- High manual effort – Certain file transfers are too critical to leave unmonitored, but existing technologies can’t provide the reliability and visibility these transfers demand, so IT staff must babysit them.
- Expensive Automation/Integration – Automating critical server-to-server file transfers yields little ROI because existing technologies fail to do enough of the work needed for reliability, error recovery, and post-transfer activity. That logic instead has to be coded into your scripts and applications, increasing their complexity and making them harder to sustain (see the sketch after this list).
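To make that last point concrete, here is a minimal sketch of the kind of retry and verification logic that ends up hand-coded in scripts when the transfer tooling doesn’t provide it. It uses Python with the paramiko SFTP library; the hostnames, paths, and credentials are hypothetical placeholders.

```python
import os
import time

import paramiko  # third-party: pip install paramiko

# Hypothetical endpoints and paths -- substitute your own.
HOST = "dc-ingest.example.com"
USER = "transfer_svc"
KEY_FILE = "/etc/keys/transfer_svc_rsa"
LOCAL_FILE = "/data/outbound/eod_inventory.csv"
REMOTE_FILE = "/inbound/eod_inventory.csv"

def push_with_retry(max_attempts=5, base_delay=5):
    """Upload one file over SFTP, retrying with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(HOST, username=USER, key_filename=KEY_FILE, timeout=30)
            sftp = client.open_sftp()
            sftp.put(LOCAL_FILE, REMOTE_FILE)
            # Post-transfer verification: confirm remote size matches local.
            if sftp.stat(REMOTE_FILE).st_size != os.path.getsize(LOCAL_FILE):
                raise IOError("size mismatch after upload")
            client.close()
            return
        except Exception as exc:
            print(f"attempt {attempt} failed: {exc}")
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off: 5s, 10s, 20s, ...
    raise RuntimeError(f"gave up after {max_attempts} attempts")

if __name__ == "__main__":
    push_with_retry()
```

Every script like this has to be written, monitored, and maintained separately for each transfer, which is precisely the complexity cost described above.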
3 Requirements for Enhanced Server-to-Server File Transfers
Managed File Transfer (MFT) systems that are optimized for server-to-server file transfer use cases (not all are) overcome these challenges by supporting the following requirements:
- Reliably accommodate large files and scale to large volumes.
- Provide a powerful, easy-to-consume interface to file transfer and administrative functions for both system administrators and application developers (see the sketch after this list).
- Deliver the visibility necessary to quickly resolve outages, audit the movement of sensitive files and generate reports that document compliance in your handling of regulated data.
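As an illustration of the second requirement, the sketch below shows what an easy-to-consume interface might look like: submitting and monitoring a transfer job through a REST API. The endpoints, payload fields, and authentication shown are hypothetical; actual APIs vary by MFT product.

```python
import time

import requests  # third-party: pip install requests

# Hypothetical MFT server API -- endpoint paths and fields vary by product.
BASE = "https://mft.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder token

# Declare the transfer to the server; reliability, retries, and
# post-transfer actions are handled by the platform, not the script.
resp = requests.post(
    f"{BASE}/transfers",
    headers=HEADERS,
    json={
        "source": "staging:/builds/webapp.tar.gz",
        "destination": "prod-web-01:/opt/deploy/",
        "post_transfer": "verify-checksum",
    },
    timeout=30,
)
resp.raise_for_status()
job_id = resp.json()["id"]

# Poll until the job reaches a terminal state.
while True:
    job = requests.get(f"{BASE}/transfers/{job_id}", headers=HEADERS, timeout=30).json()
    if job["state"] in ("completed", "failed"):
        break
    time.sleep(10)

print(f"transfer {job_id}: {job['state']}")
```

Compared with the hand-rolled script earlier, the error recovery and post-transfer verification live in the platform rather than in every script that needs them.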
Supporting these requirements is essential to successful, secure server-to-server file transfers that maximize the value of automating routine but vital data movement.
The next blog post delves into the technical capabilities of advanced MFT solutions that support the above requirements.