# I don’t like having to double-click to edit a cell. What’s wrong with single click?
# The cell type is not clear before clicking on it. For example, the Book Title column is apparently a text field, since it’s editable in-cell. The Author column, however, opens a small text area, but without a visible cursor.
# The Shipping dropdown does not render as a true drop-down box (one row high, with a drop-down revealing the available options), but rather as a list box (overlaid on top of the other elements). This is a problem I ran into as well, though: apparently there is no DHTML function which will tell a list box to open up; it has to be clicked by the user manually. Maybe it’s possible to simulate a “click” event and send it to the list box element.
# If I open a list box by double-clicking one of the cells under Shipping, and then scroll the whole table from side to side using its scrollbar, the list box showing the shipping options stays in place relative to the browser window, not the frame in which the table is shown. You can end up with a list box not positioned over the element whose choices it’s showing.
# Why does clicking in a cell with a checkbox not check the box? Instead, one needs to click exactly on the checkbox. Clicking in the rest of the cell area just moves the current row highlight, which is not useful.
# It’s not possible to edit the Date of Publication field. I would think that showing off a date-editing widget is an important piece of functionality, since there are so many ways to go about this. For example, I could be given a cursor to edit the text of the date directly, or separate dropdowns for year/month/day, or a small pop-up calendar overlay, and so on.
# Tab functionality doesn’t work properly. Half of the time, Tab will take me to the next element, but often it will instead move the focus to other links on the page or to browser menu options.
# The current row is highlighted, but the focused cell is not.
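On the list-box point above, here’s a rough sketch of what I mean by simulating a “click” on the select element. Whether any given browser actually drops the list open in response to a synthetic click is not guaranteed, and the element id in the usage comment is made up:

```javascript
// Attempt to "open" a <select> by simulating a mouse click on it.
// Covers both event models of the day: IE's fireEvent and the W3C
// createEvent/dispatchEvent model. Browsers are free to ignore a
// synthetic click for the purpose of opening the list, so this is a
// sketch of the idea, not a guarantee.
function simulateClick(el) {
  if (el.fireEvent) {
    // IE event model
    el.fireEvent('onclick');
  } else if (document.createEvent) {
    // W3C DOM event model
    var evt = document.createEvent('MouseEvents');
    evt.initMouseEvent('click', true, true, window, 1,
                       0, 0, 0, 0, false, false, false, false, 0, null);
    el.dispatchEvent(evt);
  }
}

// Usage (the id "shipping" is hypothetical):
// simulateClick(document.getElementById('shipping'));
```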
On the other hand, their [http://scbr.com/docs/products/dhtmlxTree/index.shtml Tree] and [http://scbr.com/docs/products/dhtmlxTabbar/index.shtml Tab-bar] widgets seem pretty nice. It’s just the grid that’s poorly designed.
If I were running a service like Carbonite, I would detect when my users were backing up the same file and not store multiple copies of the same thing. I’m sure people back up their music collections, and there’s probably a great deal of redundancy there: files for which, based on the filename alone, it’s easy to find candidates for an exact match. A byte-by-byte comparison could then determine whether two files are identical. From there, just store one copy and have every user’s backup set that includes that file point to it. Essentially they’d be building a pool of all the songs downloadable or traded online, and for each user storing a reference to their subset of the pool.
Yesterday I did a bunch of reading, research, and experimentation with online backup services.
[http://www.carbonite.com/ Carbonite] is a good online backup service, but the interface is very basic. They advertise “unlimited” data storage for $5 per month. (They do have a so-called “god clause” in their Terms of Service, i.e., “we can terminate your account for any reason at any time,” and presumably storing “too much” data could be perceived as abuse, but there’s no information on what “too much” data is. Users in comments report storing 50 GB in the service with no issue, and one fellow had uploaded 500 GB over a long period of time.) Carbonite’s simplified interface gives the impression that it “just works”, and in fact reports are that it’s reliable. It lets you choose files and folders to back up by right-clicking on them, and it indicates the backup status of each file with an icon overlay, a la [http://tortoisesvn.net/ Tortoise SVN].
As an alternative online backup solution, there’s [http://aws.amazon.com/s3 Amazon's S3 service], which commenters speculate Carbonite is likely using as its back end. Rates are 15 cents per gigabyte per month for storage, and 20 cents per gigabyte transferred. S3 is just a raw Web Service (no program, no GUI, nothing), and clients for it are not yet mature. (But in a way, that’s part of the fun of it.)

One way to use it as a backup system is to combine two free programs, [http://www.jungledisk.com/ Jungle Disk] and [http://www.acs.uwosh.edu/novell/netdrive.htm NetDrive], to get a mapped drive letter for your S3 storage account (under Windows), then use a backup or sync program to move data to the mapped drive. I gave [http://www.2brightsparks.com/syncback/ SyncBack] a try, but the problem is that since SyncBack stores no data about your backup history, it needs to do a full enumeration of all the files in both the source and the destination every time it makes a backup (to determine which files to back up). That’s no problem when sync’ing from one hard drive to another over a LAN or USB or some other fast connection, but with S3, determining information about a batch of files is inefficient: to accurately determine whether something actually changed, SyncBack currently downloads the entire file. A better solution (which I have yet to try) might be [http://allwaysync.com/ AllWaySync], which stores information about past sync operations in an XML file. That way, the files themselves don’t need to be examined, just the XML file(s), which should be fast.

Although Carbonite is easy, by default it only backs up data files, not programs. Still, that’s good data insurance in case of fire or vandalism or an outdated disk image, but keeping a Ghosted disk image around is probably better for the most typical causes of data loss, which would be (1) hard drive failure or (2) viruses.
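The approach AllWaySync takes, consulting stored metadata instead of re-examining the remote files, can be sketched roughly like this (the field names and structure are my own; AllWaySync’s actual XML format is surely different):

```javascript
// Sketch of metadata-based change detection: compare each local file's
// size and modification time against a manifest saved from the previous
// run, instead of downloading or examining the remote copy.

// localFiles: array of { path, size, mtime }
// manifest:   object mapping path -> { size, mtime } from the last run
function filesToUpload(localFiles, manifest) {
  var changed = [];
  for (var i = 0; i < localFiles.length; i++) {
    var f = localFiles[i];
    var prev = manifest[f.path];
    // New file, or size/mtime differs from what we recorded last time.
    if (!prev || prev.size !== f.size || prev.mtime !== f.mtime) {
      changed.push(f.path);
    }
  }
  return changed;
}

// After a successful backup, record the current state for next time.
function updateManifest(localFiles) {
  var m = {};
  for (var i = 0; i < localFiles.length; i++) {
    var f = localFiles[i];
    m[f.path] = { size: f.size, mtime: f.mtime };
  }
  return m;
}
```

The point is that deciding what to upload touches only local metadata and the saved manifest, which is exactly why it should be fast over a slow link like S3.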
In essence, then, the complexity of getting an automated S3 routine working may not buy you anything over imaging your disks every once in a while, combined with if-all-else-fails “data insurance” through Carbonite or a simple service like it.