This article summarizes best practices on how to manage Windows Server backups, server data and files, and Hyper-V backups.

Keep enough disk space free at all times: at the source as well as the target

Free disk space is of paramount importance. All kinds of strange software errors emerge, even from Microsoft Windows itself, when resources are low. If too little space is available, disk access slows down dramatically. Backups can't work properly when disk space is low, since old backups won't be cleared until the next new backup has completed.

Data usage grows exponentially. If you agree with your team that 10 TB is enough, a year down the road you'll find you really needed over 20 TB. In many businesses it is more expensive to identify which old files could safely be deleted than to simply add more backup disk space: wasting an hour of an engineer's time usually costs several times more than a terabyte of disk space. In addition, the value of older files may not be clear at present, but a catastrophic event may make them very valuable in the future. In almost all cases you're better off holding on to business data files for as long as possible.

Backup systems also require a certain amount of slack space to work properly, usually around 200% of the space required for a full backup. You'll need to keep this amount of space available at all times and monitor its availability.

Eliminate single points of failure: use multiple storage devices and backup rotation

Backing up data to the same disk or physical machine it came from does not eliminate all risks of data loss; in fact, it only protects from accidental deletion. Data corruption, hard drive crashes, server failure, and other common events can quickly lead to a 100% loss of all data. Ideally, you want at least three different backups: a local external drive backup, a network backup, and an offsite backup stored at a remote location. Each of these backups gives you benefits at a certain cost. External drives are very fast and cost very little; however, they are sensitive, provide no redundancy, and reside right next to the server or workstation, where they can be damaged by viruses, electromagnetic interference, and other environmental factors. Offsite backups protect from local disasters but usually take a long time to complete and restore, and they consume expensive internet bandwidth.

Use internal, external, and offsite storage

Using three types of storage media gives you better control over all potential scenarios and the costs involved. Internal backups are amazingly fast and cheap. They ensure the most common data loss scenarios are handled immediately, at very low cost, and without major interventions. External storage, such as NAS and network server backups, uses centralized, redundant storage so that data is automatically protected by multiple copies at a central location, where functionality and reliability are monitored daily. Offsite backups, also known as cloud backups, are the premium solution for protecting against local disasters such as fires, power surges, lightning, floods, and viruses that spread through the organization's LAN. Offsite backups also allow restore operations to be performed from any location in the world.

Backups need to be monitored frequently to ensure everything is running smoothly. It is important to check that backups are indeed running at the scheduled time and frequency, and that enough disk space is available at source and destination.
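The slack-space rule above (keep roughly 200% of a full backup's size free on the target) can be checked automatically. The sketch below is a minimal example, assuming the path and the full-backup size estimate are placeholders you'd replace with your own values; it is not tied to any particular backup product.

```python
import shutil

SLACK_FACTOR = 2.0  # ~200% of a full backup, per the rule of thumb above

def has_enough_slack(target_path: str, full_backup_bytes: int) -> bool:
    """Return True if the backup target still has at least
    SLACK_FACTOR times the size of one full backup free."""
    usage = shutil.disk_usage(target_path)  # (total, used, free) in bytes
    required = int(full_backup_bytes * SLACK_FACTOR)
    return usage.free >= required

# Example: a 1 TiB full backup needs roughly 2 TiB free on the target.
full_backup = 1 * 1024**4                # hypothetical full-backup size
required_free = int(full_backup * SLACK_FACTOR)
print(required_free // 1024**4)          # 2
```

A check like this can run alongside each backup job and raise an alert (email, event log entry) before the target fills up, rather than after a backup has already failed.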
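Checking that backups are "indeed running at the scheduled time and frequency" can also be scripted: if the newest file in the backup target is older than the schedule interval, something has silently stopped. This is a sketch under stated assumptions; the directory name and the 24-hour interval are illustrative, not part of any specific product's API.

```python
import time
from pathlib import Path

MAX_AGE_SECONDS = 24 * 3600  # assumed daily schedule; adjust to your rotation

def newest_backup_age(backup_dir: str) -> float:
    """Age in seconds of the most recently modified file under backup_dir."""
    files = [p for p in Path(backup_dir).rglob("*") if p.is_file()]
    if not files:
        return float("inf")  # no backups at all: treat as overdue
    newest_mtime = max(p.stat().st_mtime for p in files)
    return time.time() - newest_mtime

def backup_is_fresh(backup_dir: str) -> bool:
    """True if a backup completed within the expected schedule interval."""
    return newest_backup_age(backup_dir) <= MAX_AGE_SECONDS
```

On Windows Server such a check could be run from Task Scheduler shortly after each backup window, so a missed run is noticed the same day instead of on restore day.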