I've tested it, and it pretty much confirms what the poster said. I created a random 1GB TrueCrypt volume, started uploading it, then deleted it prior to completion, and the upload was still able to continue and finish itself. These transactional uploads seem to be located in C:\Users\%username%\AppData\Local\SpiderOak\SpiderOakONE\tss_external_blocks_pandora_sqliite_database, with the blocks named in the format "00000117", etc. After I deleted that file (the only 1GB one, since I did this on a clean test VM with a new 2GB SpiderOak account) and restarted the SpiderOak client, the upload was just "stuck" at its last status percentage perpetually and could not upload anymore. When I then put new files into the Hive for SpiderOak to upload, it never got to them. There were no errors logged; it just showed the deleted file as "stuck" at whatever percentage it was last at, and no matter what other files I subsequently put into SpiderOak, or how many times I rebooted or restarted the client, nothing ever happened again. On top of that, there is no notification that SpiderOak is in a broken state. That is not robust error or problem recovery, in my opinion.
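For anyone who wants to poke at this themselves, here's a small sketch that lists the numbered block files and their sizes. The directory name and the all-digits naming scheme are just what I observed on my machine, so treat both as assumptions and adjust the path for your own setup:

```python
import os

def list_block_files(block_dir):
    """Return (name, size) pairs for the numbered block files
    (e.g. "00000117") in the transactional-upload directory."""
    entries = []
    for name in sorted(os.listdir(block_dir)):
        path = os.path.join(block_dir, name)
        # Block files I saw were named with digits only.
        if os.path.isfile(path) and name.isdigit():
            entries.append((name, os.path.getsize(path)))
    return entries

if __name__ == "__main__":
    # Path as observed on my Windows test VM; yours may differ.
    block_dir = os.path.expandvars(
        r"C:\Users\%USERNAME%\AppData\Local\SpiderOak"
        r"\SpiderOakONE\tss_external_blocks_pandora_sqliite_database")
    if os.path.isdir(block_dir):
        total = 0
        for name, size in list_block_files(block_dir):
            total += size
            print(f"{name}  {size:>12,} bytes")
        print(f"total: {total:,} bytes")
    else:
        print("block directory not found:", block_dir)
```

Watching that directory while an upload is in progress (and after deleting the source file) makes the "it keeps going anyway" behavior easy to see.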
In any case, SpiderOak doesn't seem to handle its queue well. First, there is no way to change the order of uploads: if you have a huge queue with a gigantic file followed by a bunch of normal-sized or even tiny files, you are stuck if you have a slow upload connection and urgently need the files behind it uploaded. The "pause" and "clear upload queue" actions are all-or-nothing; there doesn't seem to be any way to pause or cancel specific files. And strangely enough, when I was looking into this yesterday evening, I went to the "MANAGE" tab in the SpiderOakONE interface, navigated under "Deleted Items" to the path of the file in question, manually selected it and hit "Remove", refreshed SpiderOak, and it was STILL uploading. You would think there would be a clear, consistent way to delete files or cancel pending or active uploads!
The whole "transactional" approach has its downsides too; consider two common scenarios. Last night I was downloading a sizeable (relative, I know) file from the Internet directly into my SpiderOak Hive and it got stuck about halfway. When the download errored out and I restarted it from Firefox, SpiderOak had apparently already committed the partial download to the queue, so that even once I finally had a working copy, SpiderOak was hellbent on uploading the failed/corrupted version FIRST, before re-uploading the corrected version that actually finished downloading, thereby wasting time, space, effort, and bandwidth. At the very least they should offer users a way to turn this off for files above a certain size threshold.
Dropbox and others do not do this: when I delete a file that was partially uploaded in Dropbox etc., it stops uploading that file immediately. That is the way it should be, or at the very least the user should be told about, or given the option to configure, the way SpiderOak behaves in this regard.
Also, consider someone moving the bulk of his external archives into SpiderOak to take advantage of the "unlimited" storage, or even the generous 1TB/5TB offers (a lot of space by any account). If he copies everything over first (easier to maintain the nested folder structure, etc.) and only then starts pruning his upload collection, he could reasonably think he can quickly delete the large files he never wanted uploaded (files SpiderOak could not possibly have gotten to so quickly), only to find out (or never find out) that SpiderOak did indeed spend time, effort, and bandwidth doing "work for nothing": uploading files the end user never intended to upload in the first place. Especially consider a user working in the Hive who copies and pastes huge files around, merges, archives, moves directories, and runs temporary processes that create lots of large files he very quickly deletes and never intended to be queued or uploaded. That is all nothing but a huge waste of time and bandwidth, all for nothing.
Someone with a slow upload connection would be especially affected by this "transactional" nonsense. The fact that SpiderOak has no error handling in the cases I detailed above, and doesn't even let the user know something has gone wrong, is what concerns me most. The only way I found to recover from this is to clear the upload queue, which is not even an option in the Activities panel but on the Overview panel; yet looking at the Overview panel, one would never know anything had gone wrong in the first place. The UI is not very straightforward or intuitive.
And while we are at it, why is the SpiderOak client still not open source? Are the blocks in tss_external_blocks_pandora_sqliite_database encrypted with our private key prior to upload? Can or will SpiderOak release an open source tool that lets us manually and personally confirm that it really is encrypting our files correctly before upload? Users should be provided with a decryption tool that allows them to manually decrypt the contents of those "00000NNN" files, so they can confirm that they, and only they, can decrypt their own local files that were allegedly encrypted securely prior to upload.
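Short of a real decryption tool from SpiderOak, about the only check a user can do today is a crude sanity test: properly encrypted data should be indistinguishable from random bytes, so the block files should have near-maximal byte entropy. To be clear, this proves nothing about the quality of the encryption or the key handling; it only rules out the obvious failure mode of plaintext being uploaded. A minimal sketch of that heuristic (the 7.9 threshold is my own arbitrary choice):

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy of a byte string, in bits per byte (max 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(path, threshold=7.9):
    """Heuristic check: encrypted data should score near 8.0 bits/byte,
    while plaintext or structured data usually falls well below that.
    High entropy does NOT prove the crypto is sound or that only you
    hold the key; compressed data also scores high."""
    with open(path, "rb") as f:
        data = f.read(1 << 20)  # first 1 MiB is plenty for the estimate
    return shannon_entropy(data) >= threshold
```

Running `looks_encrypted` over each "00000NNN" block at least tells you the client isn't shipping raw plaintext; a proper verification tool would still have to come from SpiderOak itself.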
Also, I don't concur with SpiderOak calling itself a "zero knowledge" system. As far as I know, SpiderOak doesn't take bitcoin or cash as payment, only credit cards. They used to at least give 2GB "free" forever; now it is a 2GB trial for 60 days or something like that. And there is no way to make a payment other than through the web version (you can't do it in the client), effectively defeating the whole "zero knowledge" claim, because you know that at least ONCE all your keys are sent to SpiderOak (if you want to use SpiderOak in any real capacity, you are forced to sign into the insecure web version). So it is never completely "zero knowledge" the way they describe it, no matter how you look at it.
And so the warrant canary doesn't do any good when it is only updated every six months, and based on the above paragraph, SpiderOak definitely, technically, COULD give your keys out (even assuming they are not incompetent, have always acted in good faith, and the whole thing isn't rigged, backdoored, or otherwise compromised to begin with).
I think the offer of "unlimited" storage at 150 USD a year is a great deal (even the 1TB at $12/month is awesome). But until SpiderOak actually open sources the entire client, instead of just paying lip service to the idea, I would not fully trust it. Better to encrypt locally before uploading anything to the cloud and not rely on its alleged encryption. (And I wouldn't want the SpiderOakONE client, with its OS integration, scanning my mounted TC volumes; god knows what it is doing there.)