11:08am EST Status: Back up
Please note that the updates below are based on my understanding of the problem — that only one database table needs to be checked. The estimates are based on the number of records processed, though even in optimal conditions the repair will take a bit longer than these numbers imply.
10:30am EST Update:
Indexes 1-6 (I believe that’s all of them) on the largest table are apparently repaired, and the database tables are being copied. Hold tight folks – we’re close, but I can’t estimate how close.
Assuming, of course, that nothing else goes wrong.
10:02am EST Update: 94% complete
9:59am EST Update: 85% complete
9:51am EST Update: 67% complete
9:44am EST Update: 57% complete
9:41am EST Update: 47% complete
9:34am EST Update: 35% complete
9:29am EST Update: the repair looks like it’s about 25% complete
Our load over the last few days, unsurprisingly, has been about double what we’re used to. Things were slowing down, and back-end processes (like the process that updates search, and the backup routines) were starting to conflict with each other as well, so basic database optimization was required.
I triggered the appropriate maintenance to run during our slowest period this morning, and it failed because the server ran out of disk space. The server then sat idle, waiting on me to free up space, but I couldn't clear any due to a bad interaction between the backup software I'm running, the database software, and the full disk. I've found a patch that should prevent this from happening again, but the immediate problem is getting the database server running.
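For anyone curious how to avoid the same trap: a pre-flight free-space check before kicking off a long repair is cheap insurance. This is just a sketch under assumptions, not our actual setup; the data directory path, the 10 GB threshold, and the idea of gating the repair on it are all hypothetical.

```shell
#!/bin/sh
# Sketch: refuse to start a long-running table repair unless the data
# partition has enough headroom. Path and threshold are assumptions.
DATA_DIR="${DATA_DIR:-/var/lib/mysql}"   # hypothetical data directory
MIN_FREE_KB=$((10 * 1024 * 1024))        # require 10 GB free (assumed)

# df -Pk gives portable, KB-denominated output; field 4 is available space.
free_kb=$(df -Pk "$DATA_DIR" | awk 'NR==2 {print $4}')

if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
    echo "Only ${free_kb} KB free on ${DATA_DIR}; aborting repair." >&2
    exit 1
fi
echo "Disk check passed (${free_kb} KB free); safe to start the repair."
```

Repairs typically need scratch space on the order of the table being rebuilt, so a check like this would have turned today's stuck server into a clean early failure.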
I’ll keep y’all updated, and I’ll have the server up as soon as possible.