engineering dataops
July 22, 2018

I found a really novel setup for cycling a very common toolset (MySQL) in one of the unlikeliest places. Can we learn anything about data velocity from helping a friend deal with their ‘less than awesome’ deployment of WordPress? As it turns out, yes! 

If you’ve got a WordPress backup and the corresponding MySQL dump, you can pretty easily set up a local environment using a little bit of Docker ingenuity.

If you need some help with a backup script, check this out. To get started, I grabbed a very basic Docker Compose file, straight from Docker's own samples!
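For reference, here's a minimal sketch of that kind of Compose file, in the spirit of Docker's published WordPress sample. The image tags, credentials, and host port are illustrative placeholders, not the repo's exact values:

```yaml
version: "3.3"

services:
  db:
    image: mysql:5.7
    restart: always
    volumes:
      - db_data:/var/lib/mysql        # named volume holding MySQL's data directory
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    image: wordpress:latest
    depends_on:
      - db
    restart: always
    ports:
      - "8000:80"                     # browse the site at http://localhost:8000
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}
```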

For those of you familiar with Docker or WordPress, none of this should be surprising at all. If you also go look at the MySQL Docker image documentation, you'll find a 'hidden gem' related to the /docker-entrypoint-initdb.d path inside the container: any .sql file you place in that directory is executed, in alphabetical order, when the container initializes a fresh (empty) data directory.
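In Compose terms, taking advantage of that is a one-line addition to the db service. A sketch (only the db service shown), assuming your dump lives in a local ./data directory as in the repo layout described below:

```yaml
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
      - ./data:/docker-entrypoint-initdb.d   # any .sql files here are imported on first initialization
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
```

Keep in mind the image only runs these init scripts when it starts against an empty data directory; an existing db_data volume will be left alone.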


Check It Out…


We published the code for this on our GitHub account!


Why is this important? What happens when I REALLY mess something up, or I want to test something locally before I push it out to production? We’ve even got a template for you to use in our introspectdata/wordpress-dev repo.

  • Clone or download the repo
  • Put your .sql mysqldump file in the /data directory
  • Unarchive your file backup into the /wp-root directory
  • Then start it up, as sketched below (read the README.md for more details)
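A rough sketch of those steps on the command line. The clone URL is assumed from the repo name above, and the archive names are illustrative; your backup files will be whatever your backup script produced:

```sh
# Grab the template repo
git clone https://github.com/introspectdata/wordpress-dev.git
cd wordpress-dev

# Drop the mysqldump output where the db container's init hook will find it
cp ~/backups/my-site.sql ./data/

# Unpack the WordPress file backup into the web root
tar -xzf ~/backups/my-site-files.tar.gz -C ./wp-root/

# Bring it all up in the background
docker-compose up -d
```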

Why have I spent so much time talking about a dev setup that isn't terribly interesting or even all that unique? Data velocity.

Using this toolset and the ability to automatically restore a backup into MySQL, I've solved for two things: we have a way to reproduce the environment, and we've made it dead simple to return the local setup to a known-good configuration.
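For concreteness, here's a sketch of what "return to a known-good configuration" looks like with this setup (assuming the Compose file uses a named volume for MySQL data, as above): blow away the containers and the volume, and the next start re-imports the .sql dump from ./data automatically.

```sh
# Tear everything down, including the named db_data volume
docker-compose down -v

# Start fresh; MySQL sees an empty data directory and replays ./data/*.sql
docker-compose up -d
```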

In other words, we have a data-driven environment that's now safe to experiment with. When we're not worried about taking out our production system, there's a lot more room for new ideas.

We love sharing little technical tweaks we come across. Improving data velocity is one such area, but it's not the only one! More importantly, when you can find a novel way to increase experimental safety for engineering, operational, and data-focused teams, your ability to move faster and dig deeper grows dramatically.