This issue was supposed to be "Talking About Troubleshooting, Part 2". Instead, it's about backing up data. Part 2 of "Talking About Troubleshooting" will appear in a future issue.
Step 3 of the Universal Troubleshooting Process is "make a damage control plan", which includes plans for data backup. While data loss isn't as disastrous as a Chernobyl, a Bhopal or an Exxon Valdez, it can cost millions. And the real tragedy is that data loss is so easy and inexpensive to prevent.
Large networks are backed up by fabulously expensive systems beyond the scope of these articles. They employ specialists trained in backup strategies, and have all the resources necessary to do a great job. The following articles discuss backups of moderate-sized systems.
So kick back, relax, and read this issue. And remember, if you're a Troubleshooter, this is your magazine. Enjoy!
Sometimes I still look at it, and still grieve. Born in early 1987, it died May 10 of that year. But in a cruel twist of fate, the almost complete executable is still there -- mocking me. Only the source code died in the disk crash.
It's called bytes2.com. Its file date is May 10, 1987. Bytes2.com would have been pretty good shareware by 1987 standards, if it had been allowed to grow to adulthood. It counted the bytes used by files in a directory. The plan, partially completed, was to have it count the bytes used by a whole tree, as well as the bytes which would be freed up by deleting the tree (clusters used times bytes per cluster). Back in the days of 40MB disks, this was a valuable resource. It was menu driven, with nice, circa-1987 DOS character "windows" displaying information on my company, other products offered, price lists, and contact info.
It wouldn't have made me rich, but it was a great marketing tool that might have brought in a few extra bucks. Gone. Gone because of a disk crash, with no backup. Absolutely, irretrievably gone. Forever gone.
Sometimes I run Bytes2.com. Long obsolete, it's still pretty. It could have grown into such nice shareware. It died in infancy.
But Bytes2.com's death wasn't in vain. It lives on in the backup procedures I've followed ever since. Backup procedures that recovered from crashes which would have put me out of business.
I still look at it sometimes, and give thanks for the lesson it taught me.
"It'll probably work" cuts it in many situations, but not with backups. When a disk crashes, it's essential the backup be trusted. It must do the same thing, make the same files, restore the same way, every time. In most cases this lets out inexpensive tape solutions. It also lets out certain software, which appears to run out of memory or recurse into infinity with certain backup parameters.
The best way to test the predictability and trustworthiness of a backup solution (software and hardware) is to repeatedly back up and restore (to a different drive, obviously) a complex backup. Are the results the same every time?
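Where the backup target looks like a filesystem, the repeat-and-compare test is easy to automate. Here's a minimal sketch (POSIX shell; the directory names are hypothetical, and plain `cp -R` stands in for whatever backup/restore tool you're actually evaluating):

```shell
#!/bin/sh
# Sketch: back up the same source twice, restore each run to its own
# scratch directory, and verify the two restores are byte-identical.
set -e
mkdir -p source restore1 restore2
echo "payroll data" > source/payroll.txt
echo "old notes"    > source/notes.txt

cp -R source/. restore1/     # first backup+restore cycle
cp -R source/. restore2/     # second cycle, identical parameters

# A trustworthy tool gives identical results every time.
if diff -r restore1 restore2 > /dev/null; then
    echo "PASS: restores are identical"
else
    echo "FAIL: restores differ"
fi
```

Run this kind of comparison with your real tool substituted for the `cp` lines; any difference between the two restores is a red flag.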
In addition, any good backup system provides a method to confirm backup accuracy. This comes in two flavors: comparison against original and CRC comparison. Each has its advantages.
Comparison against original: Right after the backup, this is the most reliable test. It verifies that each backed-up file is a byte-for-byte match of its source. However, once disk data changes (usually within an hour after backup), this method can't be used. In particular, it can't be used months later to detect media-deterioration-induced changes in the backup.
CRC comparison: This compares the CRC (cyclic redundancy check, a number reliably verifying the contents of the file) calculated from each backed-up file with the CRC originally read off its source file. Obviously, if the backup program read the source wrong, this wouldn't be detected as it would be in the comparison-against-original method. Also, I believe (not sure, but I believe) that it's remotely possible for a certain type of multiple corruption to produce a corrupt file with a CRC equal to the original's. This possibility is remote.
Other than the above, CRC comparison has a host of advantages.
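The CRC-manifest idea can be sketched with standard command-line tools. The following illustrates the principle only, using the POSIX `cksum` utility (a CRC-style checksum) and hypothetical directory names; it is not part of PKZip or any particular backup product:

```shell
# Sketch: record a CRC for every file at backup time, then verify a
# restored copy later without ever touching the original source.
set -e
mkdir -p work/source
echo "chapter one" > work/source/book.txt
echo "figures"     > work/source/art.txt

# At backup time: one CRC line per file, relative paths only,
# sorted so the manifest is stable across runs.
( cd work/source && find . -type f -exec cksum {} + | sort ) > work/manifest.txt

# Later: restore somewhere else (cp -R stands in for the restore step).
mkdir -p work/restored
cp -R work/source/. work/restored/

# Verification compares CRCs against the manifest, so the original
# files are not needed at all -- even months after they've changed.
( cd work/restored && find . -type f -exec cksum {} + | sort ) > work/check.txt
if cmp -s work/manifest.txt work/check.txt; then
    echo "CRCs match"
else
    echo "CRC mismatch: backup has deteriorated"
fi
```

This is exactly why CRC comparison still works long after the source disk has moved on: the manifest, not the source, is the reference.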
To be predictable and trustworthy, any backup system must provide a method of confirmation on a file-by-file basis.
You have a certain subset of your disk(s) you want backed up. It could be a hundred thousand files. Too many to inspect by eye. Is it backing up what you think, and excluding what you think? If not, data loss is just a crash away.
To be accurate, the backup system must provide the user with a method of choosing, using various criteria, which files and directories are to be backed up.
Set up backup criteria, then repeatedly back up and restore with your present backup setup and the one you're thinking of going to. Are the results the same? If so, you're OK. If not, figure out which one isn't doing its job. Repeat this test with various backup criteria at various times.
Every day millions of write-only backups come into being. Write-only because the backup software doesn't work, or it doesn't work with the hardware, or because the user made a mistake.
Millions more backups were once good, but became unrestorable due to age, or new hardware/software environments.
The entire purpose of backup is to be able to restore, so restorability is essential. There are three categories of restorability:
Can it be restored? Can it be restored to a different drive so it can be explored before overwriting the existing data? Is the media highly reliable (moderate-cost tape backup systems, for instance, are not)? Will little hardware conflicts render restore impossible? Will the restore software work with less-than-perfect backups? Will the tape backup restore with a different drive if necessary?
In general, your best bet in immediate restorability is to back up to a DOS device that looks to the computer like a hard disk (did somebody mention Zip Drive?), using a format that produces a file that can be copied to another drive and worked with there (did somebody mention PKZip?). In general, tape backup is the least reliable, while floppies fall in between. Note that floppy quality varies widely. My experience is that TDK and Maxell floppies have around 3 defects per 50, while another well-known American brand has around 25. This renders the well-known American brand useless for backup, and means that even the best are practical only for a backup smaller than 9 floppies.
I don't have a crystal ball, but I'll bet you five years from now you'll be using a different operating system with a different tape or removable disk drive. I'll also bet you you'll still consider your 1998 tax return vital.
I think it's safe to say that five years from now proprietary hardware/software combinations WILL NOT RESTORE. You might be able to read the tape, but it's unlikely you'll find software to read that proprietary tape on your newer (and from a different manufacturer) drive. No matter what hardware is involved, you're much better off recording the backup in a standard format, like .ZIP or .QIC or .GZ or whatever. That way, once you get that single file off the tape or other media, you can work with it on your hard disk in a hardware-free environment.
Likewise, standard hardware media and formats are essential. Over the years there have been floppies that spun ten times as fast, recording ten times the data. There have been backup programs that put 9 sectors per ring instead of 8, just to gain that 12%. How can you read those media now? You can't.
Here's your best bet. Have the hardware and software for the backup completely separate and independent. Make the backup on the media a simple file created by the backup software. Have the media look like a disk drive which can have files copied to and from it. Keep a copy of the backup software, including the part that restores. Then, even years later, all you need to do is find hardware and drivers to read the DOS device media, copy the file off it, and use the backup software to restore.
Last, but not least, restoring from years ago requires that such backups still exist. Backup media are expensive, so there's a natural inclination to re-use them. Strike a balance between re-use and data conservation. It might be something like this: re-use daily backups each week and weekly backups each month, but retain monthly, quarterly and yearly backups permanently.
Find a friend who's a big-iron computer guy -- he'll know how to implement such a system.
Flying televisions, microwaves, and computers woke us at 4:31 AM on the morning of January 17, 1994. The famed Northridge Earthquake (which was really centered in Reseda, 6 blocks from our home) rendered our place uninhabitable. We gathered a checkbook, a change of clothes, diapers, formula, and medicine for our exodus from Southern California. On the way out the door I grabbed the tape backup, and prayed it would restore.
The backup was restorable, and I didn't need it anyway. My Gateway 486-66 took a licking and kept on ticking (thanks Gateway).
But what if it had been more like the 1906 San Francisco earthquake, with a firestorm driving us out with only the clothes on our backs? Or a flash flood? Or a home-invasion robbery? Both the backups and the computer might have been gone. That's why offsite backup is so vital.
Not just any offsite backup. Backup out of the region -- out of reach of a regional disaster like a large earthquake, a hurricane or a flood. It's expensive to buy and ship media out of state, so you might want to do it only once or twice a year. Balance the expense against how far you're willing to be "set back" in event of a regional disaster-caused data loss. Here's a possibility:
A backup technique that's too hard to use won't be used, or won't be used often enough. An excessively difficult, time-consuming, or costly backup technique is useless.
On the other hand, entirely too many write-only backups have been done in the name of "ease". "One button" backups often don't include the right stuff, and many times don't restore, or restore to the wrong drive, or whatever.
The ideal backup is a fully configurable one whose configuration can be remembered between backups.
A proper system of daily, weekly, monthly, quarterly and yearly backups, with proper use of offsite backup, is essential to guarantee the safety of your data.
Da good old days. Circa 1990. You could back up your whole system, apps, operating system, data, configuration, long term data, everything, on a 40 floppy backup set. If you crashed, you could boot from a floppy, restore, reboot, and you were back in business. And the first part of that year, you could buy your favorite song on 45 rpm vinyl too!
Now phonographs litter the attic along with disco wardrobes, and a full-system backup would take over a hundred dollars' worth of media. And it probably wouldn't install right, either.
The hot tip now is the data-only backup. But what is data?
At its simplest, data is disk content you can't replace through purchase or reinstallation. It might also include disk content that could be replaced, but not in a cost- or time-efficient manner. For instance, space considerations have forced me to throw away most of the manuals and installation media for my old DOS apps. So I keep them in a c:\da (stands for DOS apps) directory tree. I consider that tree data, although long-term, unchanging data with a different backup strategy than the data I create every day.
Another example of my long-term, relatively unchanging data is my \inst directory tree. Every time I download an installation program, it goes in that directory so it can be backed up, and restored after a disk crash. When I stop using a program (for instance, because I'm not willing to pay the shareware price), I delete it. On the subject of licensing, in a different directory I keep all the software-piracy-protection serial numbers for my installation media, so if I lose the CD cover or that little piece of paper in the box I can still reinstall my software.
|While we're on the subject, would you software vendors please stop treating your paid customers like thieves with this "please type in the serial number now" stuff? A real thief would certainly record the serial number -- a paying customer might not -- and be robbed of the software he paid for. Who's the thief, anyway? QUARTERDECK, MICROSOFT, ARE YOU LISTENING!|
So here's my recommendation. Put all your fast-changing data (your modern work-product, in other words) on your D: drive. Easy to remember because D stands for Data. Regular backups of your D: drive capture your most important data. All your installation serial numbers should go in your d:\serialno tree. I'd highly recommend the D: drive be a separate physical drive, and that it not be compressed. This gives the best assurance of a "put the drive in a new computer" fallback option, to enhance your backup strategy.
If your data drive backup can't compress down to the size of your backup media, you can create a d:\classic directory for your old work product and other slower changing or non-changing data, and back that tree up on separate media.
Somewhere on your D: drive, document things like userid's, host names, ip addresses, and any other necessary configuration information. For obvious reasons, do not put your passwords on your D: drive -- some things are better remembered in the mind.
All your DOS apps, and any of the few remaining Windows apps which can be installed by a mere copy into a single directory with no registry mods (you don't know what you got, until you lose it), can be put in your c:\da tree, and backed up every time something's added to the \da tree. Keep 2 backups, alternating between your computer room and the trunk of your car. If anything in c:\da is vital to retrieving important documents, an out-of-region backup might also be appropriate. By doing this, installing all your DOS and simple WIN apps is as simple as an Xcopy or a tree restore.
|Example Directory Structure, C:\da|
All downloaded install programs, and all floppy based install programs (floppies have a terrible shelf life) should go in an \inst directory, probably not on your D: drive. \inst should be backed up every six months, or sooner if a vital program is added. Two backups of \inst should be alternated between your computer room and the trunk of your car. Keep in mind if your \inst directory is lost, it's very inconvenient but not the end of the world.
By carving up your drives and directories appropriately, you can back up what you should, and only what you should, at the proper intervals.
We'd had a swashbuckling adventure in early December, 1984. The adventure had all the elements -- good guys, bad guys, cops and robbers, pursuit, capture, action. On my brand-new Kaypro 2x (8-bit CP/M), I made my buddy a humorous text-art Christmas card to commemorate the event. He cracked up when he saw it. A couple years later he moved a half continent away.
In 1998 a mutual friend put us in contact. In preparation, I began looking for the card on my hard disk. I tried looking by name, by file date, and by searching for a sequence of asterisks (the main component of the text art). Nothing.
I looked in backups. They were sparse -- maybe 1 every 2 years -- and many were unreadable due to incompatibility with operating systems not dreamed of when the backups were created. Others had exceeded the shelf life of the media. I even tried Uniform, my old CP/M-to-DOS conversion program. Nothing.
It would have been nice to have 3 or 4 backups from each year, so if something furtively disappeared from my hard disk I could still bring it back a couple years later. It would have been nice if the backups could have been read without formatting up a DOS computer and installing 10 year old backup software. Those decisions to use proprietary-format backups, and to re-use backups to save media costs had come back to haunt me.
My buddy and I talked on the phone. I asked "hey, remember that Christmas card?". He did. We laughed and laughed. But I sure wish I had been able to email him another copy. One file is worth a thousand words.
By now it's no surprise that I'm recommending PKZip Command Line 2.5 for backing up to an IOMega Zip Drive, and restoring via PKZip Windows 2.5 or some other WYSIWYG ZIP file reader (note most of the other ZIP file products are not suitable for backup -- only for restore). But even this setup isn't perfect.
Zip disks are only 100 meg, and they cost a dime a megabyte. Right now I can't recommend any of the other removable disks either because I've tried them and didn't like them, or because I've heard bad things about them.
However, one area I find particularly intriguing is the possibility of backing up to a write-once CD. These CD's cost a buck for 680 meg, so 52 weekly backups will cost you $52.00 -- probably less than you'd spend per year on rewritable media. And the write-once feature relieves you of the temptation to cannibalize your backups. A complete backup system with complete offsites and weekly, monthly and quarterly backups would be easy and economical, and wouldn't take much space. Of course, backups must be named to sort by backup date. For instance, 19980701D.ZIP might be the D: drive backup from July 1, 1998. It's possible to store more than 1 backup per CD, further reducing media cost.
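The date-sortable naming convention is easy to generate automatically. A minimal sketch (POSIX shell; the drive-letter suffix and the use of `date` are my assumptions, not anything a particular backup product requires):

```shell
# Sketch: build a backup file name like 19980701D.ZIP so that a plain
# alphabetical listing of the media is also a chronological listing.
# Year comes first, then month, then day -- that's what makes it sort.
stamp=$(date +%Y%m%d)       # e.g. 19980701
drive="D"                   # which drive this backup covers
name="${stamp}${drive}.ZIP"
echo "$name"
```

Because the year leads, a directory listing sorted by name shows your backups oldest-first with no extra effort.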
The CD option might not be ready for prime time because reasonably priced CD recorders are 4x, and would take all night or more for a substantial backup. Also, I know of no studies showing the recording reliability or shelf life of one dollar CD's. One thing I'd suggest is that if you do use the CD option, you use a separate 32X or more CD reader to verify the backup. If you already have a CDR, you might try this and see what kind of results you get. Just be sure to keep lots of the CD's, because the shelf life is an unknown.
PKZip Command Line has some gotchas you must know and take seriously. Foremost, it handles next-disk detection very badly. When it's done with the present disk, it prompts you to insert the next disk, then press Enter. If you insert the next disk and quickly hit Enter (in other words, follow the screen's instructions), it's likely to abort. You must insert the next disk, wait for the drive light to go out (at least with a Zip drive), then count to 20, then hit Enter.
Test and restore can be tricky propositions. Above and beyond waiting the requisite time for the new disk to be recognized (no matter what the prompt says), always stick the LAST disk in first, rather than sticking in the first and letting it prompt for the last. I've found this to work better.
I've also seen some evidence, though inconclusive, that your Zip Drive will do a better job with disk spanning if you define it as an Int 13 unit. You can do that with start/settings/system_icon/devicemanager_tab/disk_drives/iomega_zip_100/properties/settings_tab. So far I've found no undesirable side effects caused by making my IOMega Zip Drive an Int 13 unit.
PKZip Command Line's executable, and its only executable, is pkzip25.exe. Put that file in a directory on your path so it's always ready for action. When backing up, copy that file to the last backup disk in your set (after completing the backup) so years from now you can use it to restore (and you can restore on a minimal system).
Before converting lock, stock, and barrel to PKZip Command Line 2.5, work with it a while. Span disks, get used to the arguments, make test restores to an empty drive or a subdirectory. Once you can regularly back up, test, and restore spanning disk sets, working around the little disk-swapping quirks, use it.
From the above paragraphs, it's pretty obvious PKWare has some work left to do on Command Line PKZip for Windows. Even so, it's by far the best backup software existing today.
Here's a typical backup command for Command Line PKZip, and its explanation:
pkzip25 -add -dir=root -excl=*.swp -excl=*.bak -excl=\classic\*.* m:\19980701D.zip d:\*.*
|pkzip25||The PKZip executable|
|-add||Add the zipped files to the zip file|
|-dir=root||Back up the directory structure, and store the directories all the way from the drive root.|
|-excl=*.swp||Do not back up .swp files|
|-excl=*.bak||Do not back up .bak files|
|-excl=\classic\*.*||Do not back up any files in the \classic directory. Interestingly enough, this will still put an empty \classic directory in the backup, but won't put anything below it.|
|m:\19980701D.zip||The backup file is named 19980701D.zip (naming convention for D drive, July 1, 1998), and put it on the M: drive, which in this case is your removable drive.|
|d:\*.*||Backup the root of D:, subject to the information above.|
Given that any number of processes can dink with the archive flag on a file, you might be safer to back up by mod date instead. To do that, just add the following command segment: -after=07021998
This will add ONLY files modified ON or AFTER the given date (maybe they should have named this argument -notbefore).
Here's the entire command:
pkzip25 -add -dir=root -after=07021998 -excl=*.swp -excl=*.bak -excl=\classic\*.* a:\19980701D.zip d:\*.*
Note you're now placing the "much smaller" incremental backup on your A: drive. No reason to consume a $10 Zip Disk. Note also how easily this whole thing could be incorporated in a batch file, complete with increment date arguments.
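As a sketch of that batch-file idea, here's a small POSIX-shell equivalent that assembles the incremental command from date arguments. To stay self-contained it only builds and prints the command string rather than running pkzip25, and the specific dates here are made up for illustration:

```shell
# Sketch: assemble an incremental pkzip25 backup command from a
# cutoff date (MMDDYYYY, pkzip25's -after format) and a backup date
# (YYYYMMDD, for the sortable file name). The command is echoed,
# not executed -- pkzip25 is not invoked here.
after="07021998"    # include files modified on or after this date
stamp="19980708"    # date of this incremental, used in the file name
cmd="pkzip25 -add -dir=root -after=$after -excl=*.swp -excl=*.bak"
cmd="$cmd -excl=\\classic\\*.* a:\\${stamp}D.zip d:\\*.*"
echo "$cmd"
```

In a real DOS batch file the same idea applies: pass the dates as arguments, and the rest of the command line never changes.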
I'd recommend testing the new Zip file with PKZip 2.5 GUI, but if you want to test it with Command Line PKZip, it's certainly possible:
pkzip25 -test m:\19980701D.zip
If there are no errors in the file (and with Zip Drive media there almost never are), it will just go through to a command prompt. If there are errors, it will ask whether you want to fix them. My advice would be to do a format and surface scan on the media and try again. I think it's poor practice to try to "revive" a bad backup.
As I said before, I think GUI PKZip is the way to restore. But if you want to restore using Command Line PKZip, it's easy enough to do.
pkzip25 -extract -dir m:\19980701D.zip
In the above command, -extract tells it to extract the files, while -dir tells it to extract them into their original directory tree (relative to the current directory where you ran the command from).
She approached me in the airport. An ordinary, pretty lady with a casual business demeanor. She had seen me using my laptop to create content for Troubleshooters.Com.
She explained that she "used to" have a website. She had paid a web designer thousands to create it. But after a few months of existence, her website disappeared in a disk crash at her ISP. Gone forever.
I asked why she didn't have a copy on her local hard drive. She replied that nobody had said it was necessary, and she assumed the ISP was backing it up.
I told the lady next time she should be sure to FTP her site down to her local disk and back it up. She replied there wouldn't be a next time. The loss of the website had crippled her business, and she didn't have the money to make another website.
I looked at the lady's face. I'm a pretty good writer, but her look of loss and sadness is beyond my ability to describe.
Your website is some of your most important data. If it isn't a significant revenue source now, it soon will be. It's the future of your business. It's a four, five or six figure asset, either in money you paid a web designer or in time you spent creating it yourself.
You probably pay a sizable insurance premium on your house, your car, and maybe even your computer hardware. How much would you pay to insure your website? Would a couple hours a month and a few dollars for backup media be too much? That's what complete website insurance costs.
The easiest and most reliable backups are those made on a single directory tree. So make your website a single directory tree by making all intra-site links relative. IntRA-site links can have either a same-site or subsite relationship, while intER-site links can have only an independent-site relationship.
This is on-topic no matter how your website is designed and hosted. If someone else is creating your site, make this part of the specification: Make all intra-site links relative. Never make links absolute paths from the root. Otherwise you'll have a site that's married to your present ISP and is difficult to back up. Below is a discussion of this principle as it applies to different kinds of web page relationships.
For instance, a link from the Troubleshooters.Com home page to the Troubleshooters.Com Universal Troubleshooting Process (UTP) page joins two pages with the same owner and theme. The UTP page is a part of Troubleshooters.Com, and has no meaning outside of Troubleshooters.Com. This type of relationship is a "same-site" relationship.
On the other hand, a link from this page to the IPSWITCH home page is an intER-site link. The two pages belong to two distinct websites, owned by two separate entities, espousing two different messages. They have distinct domain names and IP addresses. The only commonality is that this page sings the praises of IPSWITCH and sends visitors to it. This type of relationship is an "independent-site" relationship.
Note that independent sites can exist in the same directory tree on the same ip address. Websites Troubleshooters.Com and ProblemSolving.Com exist in the same directory tree and share a common IP address. But all links between Troubleshooters.Com and ProblemSolving.Com use URL's -- none use relative directory anchors. That way if I ever want to move ProblemSolving.Com to a different ISP, it involves only an Internic Domain Update and an FTP command. The two sites just happen (at present) to share the same directory tree, owner and IP address (I'd like to thank web programmer Steve McCausland for the money-saving domain switcher software that made this possible).
You can further investigate the ProblemSolving.Com/Troubleshooters.Com relationship by accessing ProblemSolving.Com at its own domain name.
The third type of relationship isn't so clear cut. What's the relationship between Troubleshooters.Com and Troubleshooting Professional Magazine? They have distinct but related themes. They might make sense as totally separate entities. Troubleshooting Professional Magazine already has its own disk directory on the host, so it would be quite easy to separate them. But each would lose value being de-coupled from the other. So Troubleshooting Professional Magazine is implemented as a part of Troubleshooters.Com, but with its own directory. This type of relationship is called a "subsite" relationship.
Another example of this third type (subsite) is even blurrier. Consider the relationship between Troubleshooters.Com and Code Corner. Completely different themes. Unique directories. Probably a different readership. Code Corner would make more sense as a separate entity.
But for economic reasons I haven't separated them. I don't want to pay for another ip address to host Code Corner. I don't want to have another site to administer -- another FTP session to pay attention to. And Code Corner's main purpose (besides being lots of fun) is to bring traffic to Troubleshooters.Com. It does that more efficiently as a subsite of Troubleshooters.Com.
When a part of a website starts looking like it has potential for independence, it should be given its own directory under the main directory. Now it's a subsite. That way, if the time comes to split it, it's merely a directory copy and a change of a few links to and from the subsite.
Hopefully, the above paragraphs explain the definition and boundaries of a website, the difference between same-site, subsite, and independent sites, and how to decide, and where the gray areas are. Here's a summary:
|Name||Conceptual relationship||Disk relationship||Backup||Link code|
|Same-site||Same theme, same owner||same directory||Part of main site||<A HREF="tuni.htm">UTP</A>
(Relative Link, same as directory of current page.)
|Subsite||Similar or at least compatible theme, same owner||sub directory, same site||Part of main site||<A HREF="codecorn/index.htm">Code Corner</A>
(Relative link, in directory codecorn below directory of current page)
|Independent-site||Different themes or different owners||No relationship between directories||Its own backup||<A HREF="http://www.ipswitch.com">IPSWITCH</A>
(No directory stated -- just a URL)
|VITAL!!! In no case should an HREF ever point to an absolute directory path such as: <A HREF="/user/steve/public/codecorn/index.htm">Code Corner</A>. To do so would make the website totally non-portable, and extremely hard to back up! All intra-site links must be relative to the directory of the calling page, and therefore ultimately relative to the site's home page.|
To conclude the subject of intRA-site links vs. intER-site links, intER-site links must be done by URL, while intRA-site links must be done by relative directory path. In NO CASE should a directory path be made absolute, as to do so would make the site non-portable and extremely difficult to back up.
If you designed your website yourself and ftp'ed it up to an ISP, you probably have an exact copy on your hard drive. After all, that's where you designed it. Then, as long as that hard-disk image of your website is captured by your regular data backups, you're in the clear. Just remember to FTP down any files on the host which contain user input. The Troubleshooters.Com suggestion box is a good example of this.
But some authors with access to the server itself design the website on the server itself. And that author might not have control over the system backups. That's OK. For small to moderate sized websites, a simple recursive copy to removable media (recommend IOMega Zip Drive) will do the trick. You might want to use PKZip to compress it, or if it's a UNIX system, one of the free-software compression utilities.
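On a UNIX host, that recursive copy-and-compress can be sketched with the standard `tar` and `gzip` tools. The directory names and archive name below are hypothetical, following the dated-name convention from earlier:

```shell
# Sketch: compress an entire website tree into one dated archive,
# then prove it restores intact to a different directory.
set -e
mkdir -p www/site/codecorn
echo "<html>home</html>"   > www/site/index.htm
echo "<html>corner</html>" > www/site/codecorn/index.htm

# One file per backup, named to sort by date.
tar -czf www/19980701site.tar.gz -C www/site .

# Restore to a scratch directory and compare against the original --
# never trust a backup you haven't restored.
mkdir -p www/restored
tar -xzf www/19980701site.tar.gz -C www/restored
diff -r www/site www/restored && echo "site backup verified"
```

Because the whole site lives in one tree of relative links, one archive captures everything, and the restored copy works anywhere you unpack it.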
But what if you hired someone to design the site? You have no knowledge of directories or filenames comprising your site. How do you back it up?
Assuming that your website designer has followed good practices and made all intra-site links relative, it means your entire website is contained in a single directory tree on the server. All that's necessary is to find out from the designer the directory that's the root of that tree, and to back it up the way you would back up any other tree on that server.
Do not depend on the system administrator to back it up for you. Maybe he hasn't backed it up in several months, and really doesn't care because he's interviewing for another job. Remember that the person in trouble if the site is lost is you.
The System Administrator might be one of these possessive types who won't give you read access to the tree and its files. Some system administrators do this because of unfounded paranoia (after all, read access can't corrupt anything -- it just makes backups possible). More likely (and becoming more prevalent every day), the sysadmin locks everyone out for job security reasons. After all, if he's called 10 times a day to provide access or grab some files that someone with rights could do himself, he's useful and essential, right?
Don't take no for an answer. Get the rights, and safely back up your website.
A website is a costly asset requiring the insurance of backup. Website backup is inexpensive and quick, once the procedures are in place. The person responsible for the website should back it up so he or she can sleep at night. An FTP based backup can destroy website data if done wrong, so proper procedures must be followed. If the person responsible for the website doesn't understand the theory and practice of backing up the website, he or she should hire a consultant to teach and document the procedures and theory.
"Losing a website" isn't as bad as Chernobyl, Bhopal or the Exxon Valdez, but it's not something you want on your resume. Backup is the answer.
Windows 98 is brand new, but I'm glad to say you have the same great backup options you had in Windows 95, plus Windows 98's included backup utility now restores with the proper file dates. My parallel Zip drive installed exactly like it would have in Win95, and PKZip Command Line 2.5 and PKZip GUI 2.5 worked the same way they do in Windows 95.
The Win98 backup utility has a more complex user interface than I'd like to see, but it's definitely solid and able to make and restore an accurate backup. I can't help wondering if they've made a "proprietary" QIC format. Remembering the changes to .DOC and .RTF, it's possible.
Windows 98 backup utility or PKZip? Decisions, decisions. Both are excellent products. I base my choice on the vendors.
Bugs aren't always obvious, so either PKZip or Win98 Backup could contain a bug I don't see. A bug that could cause disaster. Which is more likely to contain an unrecognized bug? Brown Deer-based PKWare has a ten-year history of bug-free products that work solidly, with complete backward compatibility. Redmond-based Microsoft's history tells a different story.
Microsoft backup is free, but I'll spend the extra money for PKZip. My data's worth it.
The early users of the IBM PC had so little data to back up that a mere copy of a few directories to floppy would do the trick. DOS had programs called BACKUP and RESTORE, but these had such a bad reputation that few used them.
By 1984, power users had as much as 20 megabytes of data, in a complex directory structure, to back up. Floppies were expensive, so compression was desirable. The old methods were now useless. Luckily, Fifth Generation Systems from Baton Rouge, Louisiana put out a product called Fastback, which could back up a whole or partial drive with a number of include/exclude criteria, and span floppies. It could format floppies if necessary, and could even detect the insertion of the next floppy without pressing a key. It was relatively restorable. Its executable could be copied to a bootable floppy for the utmost in long-term restorability (or as long as the shelf life of a floppy, which seems to be about 3 years).
Fastback committed marketing suicide by bringing out a new, confusing user interface, and never regained its status. Central Point Software (makers of the copy-protection cracker Copy2PC and the PC Tools utilities package) brought out Central Point Backup. It was even easier to use than the original Fastback, and could write backups in either a DOS-compatible format or a special "high density" format. As a special benefit, it could write to tape drives from various manufacturers. Power users were beginning to abandon their fifty-floppy backups for tape backup.
But it crashed frequently, wasting gobs of time. And frustratingly, its "insert next disk" prompt took several seconds to recognize insertion of a disk, encouraging users to mistakenly remove the new diskette and abort the multi-disk backup. Nevertheless, in 1989 Central Point Backup with floppies was the way to go.
As a side note, a file-compression utility from PKWare, a Brown Deer, Wisconsin company, was released and quickly took over the crowded file-compression market.
By 1994 the average user had too much data for practical floppy backups. Although things like Bernoulli disks existed, they were expensive and relatively unknown. Tape drives became the backup medium of choice.
Backup software was a choice between proprietary programs supplied by the tape drive manufacturer, and the long-in-the-tooth Central Point Backup, with its insert-prompt bug and crashing still included. By 1994 Central Point Backup was even included as a utility in the DOS operating system, but with a very confusing user interface reminiscent of the "New Fastback".
Would you rather be boiled in oil, or slide down a banister made of razor blades? Central Point Backup was buggy and hard to use, but the proprietary tape drive software carried the likelihood that it couldn't be restored on any other tape drive. The mid-90's were a bad time for backup, and in 1995 it was destined to get worse.
On August 24, 1995, Microsoft fired the shot heard round the world: Windows 95 was released. Word on the street was that many backups made under DOS/Win31 simply would not restore under Windows 95. I can't say how correct those rumors were, but I can tell you that none of my tape backups made the transition. If I wanted to go back to my 1992-1995 backup tapes, I'd have to configure a computer with DOS, reinstall the tape drive (if I could find it), and hope I could find the backup software. In other words, that data is gone.
Luckily, the DOS utility version of Central Point Backup, loaded onto floppy, did make the transition. I wonder how many people, after installing Windows 95, had to format and reinstall DOS/Win31 just to get their data back, so they could back up to floppy and try again.
In December of 1995 I bought an IOMega Ditto tape drive packaged with their version of Arcada Backup, which was supposed to be Win95 compliant. In fact, it didn't work. Calling the vendor, I discovered I'd need an "update", which could only be bought from Seagate for a substantial charge, with no guarantee that even the latest version would work. My Ditto drive spent its life as shelfware, obsolescing without ever seeing action.
So in 1995 users backed up what they could, how they could, and crossed their fingers.
Word was out. If your data was vital, don't use tape. IOMega's Zip Drive, introduced early in 1995, had proven practical and reliable, and while it wasn't as cheap per megabyte as tape, it was reasonable. It could be written with Windows 95's Backup program, and restored faithfully. Except that Win95 Backup sometimes (which times were a roll of the dice) restored a file with the restore date as its file date, rather than the original file's modify date.
Happy days were here again! As long as you had less than 100 meg (after compression) of data, or could split it into multiple sub-100 meg .ZIP files, you could use PKZip for Windows to create an industry-standard .ZIP file on an industry-standard IOMega Zip Drive. It was PKZip for Windows version 2.5, and it was fantastic. It did everything but remember include/exclude specs and span disks (although it could restore spanned-disk .ZIP backups).
Users with more data than could compress to 100 meg were left manually excluding data in an interface that required re-specification for every backup. A hassle, but at least we now had reliably restorable backups. Later, in 1997, PKWare came out with a disk-spanning version, 2.6. Unfortunately, it took up to 10 times as long to build or read a Zip directory, making it impractical for backup purposes. (PKWare is aware of the problem and is working on a patch.) By year's end the Zip drive with PKZip 2.5 for Windows remained the backup of choice.
By 1998 there was a plethora of removable disks, most more economical than the Zip Drive. But none so reliable and standardized. If restorability was an issue (and of course it is), the Zip Drive remained the winner for the third year running.
1998 was the year Brown Deer, Wisconsin-based PKWare hit one out of the park with PKZIP 2.50 Command Line. It spanned disks. It could do its thing from a batch file. All configuration and include/exclude criteria (and boy, are the possibilities extensive) can be recorded in that batch file. Once the batch file is made, it truly is a "one button" backup.
Its command-line nature gives us advantages we haven't seen in years. The backup executable can be stored on the backup medium, so that as long as you can read the medium, you can decode the backup. And for the first time since August 23, 1995, you can restore your backup on an operating system booted from a floppy.
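PKZip's exact switches vary by version, so rather than guess at them, here's the same "one button" idea sketched with Python's standard zipfile module: the entire backup policy, including the include/exclude criteria, lives in the script, so running it is the whole backup. The patterns below are examples, not recommendations:

```python
import fnmatch
import zipfile
from pathlib import Path

# The whole backup policy lives here -- edit once, then
# the script itself is the "one button".
INCLUDE = ["*.doc", "*.txt", "*.c"]
EXCLUDE = ["*.tmp", "*.bak"]

def wanted(name, include=INCLUDE, exclude=EXCLUDE):
    """A file is backed up if it matches an include pattern
    and no exclude pattern."""
    if not any(fnmatch.fnmatch(name, p) for p in include):
        return False
    return not any(fnmatch.fnmatch(name, p) for p in exclude)

def backup(source_dir, archive_path):
    """Zip every wanted file under source_dir into archive_path."""
    source = Path(source_dir)
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in source.rglob("*"):
            if f.is_file() and wanted(f.name):
                # Store paths relative to the source directory.
                zf.write(f, str(f.relative_to(source)))
    return archive_path
```

Since the result is an industry-standard .ZIP file, any .ZIP reader can restore it, which is the same restorability argument made above for PKZip.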
PKZip Command Line is not the ideal restore or confirmation environment, because its output is hard to capture and interpret. For partial restores, I'd recommend PKZip 2.5 for Windows (it can't span disks, but it can read spanned backups), or any of the other WYSIWYG .ZIP file readers such as WinZip. Note that registered users of PKZip for Windows can obtain the Command Line version for only $19.95 plus shipping -- a tiny price for security.
So that's it. Backup has always been a little squirrelly, but with PKWare improving their product, and new, larger, cheaper removable disks from IOMega and its competitors coming on line every day, there's plenty of room for optimism.
We anticipate two to five articles per issue, with issues coming out monthly. We look for articles that pertain to the Troubleshooting Process, or articles on tools, equipment or systems with a Troubleshooting slant. This can be done as an essay, with humor, with a case study, or some other literary device. A Troubleshooting poem would be nice. Submissions may mention a specific product, but must be useful without the purchase of that product. Content must greatly overpower advertising. Submissions should be between 250 and 2000 words long.
By submitting content, you give Troubleshooters.Com the non-exclusive, perpetual right to publish it on Troubleshooters.Com or any A3B3 website. Other than that, you retain the copyright and sole right to sell or give it away elsewhere. Troubleshooters.Com will acknowledge you as the author and, if you request, will display your copyright notice and/or a "reprinted by permission of author" notice. Obviously, you must be the copyright holder and must be legally able to grant us this perpetual right. We do not currently pay for articles.
Troubleshooters.Com reserves the right to edit any submission for clarity or brevity. Any published article will include a two sentence description of the author, a hypertext link to his or her email, and a phone number if desired. Upon request, we will include a hypertext link, at the end of the magazine issue, to the author's website, providing that website meets the Troubleshooters.Com criteria for links and that the author's website first links to Troubleshooters.Com. Authors: please understand we can't place hyperlinks inside articles. If we did, only the first article would be read, and we can't place every article first.
Submissions should be emailed to Steve Litt's email address, with subject line Article Submission. The first paragraph of your message should read as follows (unless other arrangements are previously made in writing):
After that paragraph, write the title, text of the article, and a two sentence description of the author.