We are moving our database from one server (i.e. computer) to another this weekend. In the long run, the new kind of database server will be easier to configure, back up frequently, and extend as more and more people join Memrise.
However, this requires us to shut down the site so that we can write out, transfer and then load in the gigabytes and gigabytes of data from one server to the other. We timed this process in a dry run as taking 21 hours. If we don’t do it now, it’ll only get uglier to do in the future.
Usually, it’s possible to do this kind of thing in the background without anyone ever noticing. Unfortunately, in this particular case, that’s very hard to do. We racked our brains for ways to reduce the downtime, but concluded that it would take a couple of weeks of engineering time, and could very well be an error-prone procedure.
We decided to swallow the bitter pill of 21 hours of downtime now rather than even more later, and to spend those weeks of engineering time on things that will make for a better site, rather than on avoiding the downtime. We think that’s the decision you would want us to make.
This is the last major (more than a few minutes) maintenance period that we have scheduled for the foreseeable future.
P.S. See also: explanation of the data centre failure last night that made all of this a lot more complicated.
P.P.S. For the technically minded amongst you, we are hosted on AWS. We’re moving the database server from a standard EC2 instance to RDS. Amazon makes it very hard to move to RDS without downtime, since we couldn’t find any real way to set an RDS instance up as a replication slave of an existing EC2 MySQL database.
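For anyone curious what the move itself looks like, it is essentially a plain MySQL dump-and-load. The sketch below is purely illustrative — the hostnames, database name and flags are made up, not our actual commands:

```shell
# Hypothetical hosts -- illustration only, not our real endpoints.
SRC_HOST="old-ec2-db.internal"                            # existing EC2 MySQL server
DST_HOST="memrise.abc123xyz.us-east-1.rds.amazonaws.com"  # new RDS endpoint
DB="memrise"

# Step 1: write out a consistent snapshot from the EC2 instance.
# --single-transaction takes an InnoDB snapshot without locking tables.
DUMP_CMD="mysqldump --single-transaction --routines --triggers -h $SRC_HOST $DB"

# Steps 2 and 3: transfer and load in one pipeline, streaming the dump
# straight into the RDS instance instead of via an intermediate file.
LOAD_CMD="mysql -h $DST_HOST $DB"

# In a real run you would execute: $DUMP_CMD | $LOAD_CMD
echo "$DUMP_CMD | $LOAD_CMD"
```

Loading a logical dump like this replays every row as SQL statements on the destination, which is a big part of why moving this much data takes the better part of a day.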