I’ve been thinking a lot about GDPR and similar legislation for the last year or so. I think that codifying how companies treat our data (and what they can and can’t do with it) is a huge step in the right direction. Late in 2018 I was happy to find myself working with Redgate on their SQL Data Catalog early access program as we started looking at ways to attach and track metadata about the data in our environment. Last month Washington state (where I currently reside and work) proposed its own GDPR-like legislation. Since we aren’t likely to get any federal-level legislation anytime soon, US DBAs will most likely have to learn about a variety of laws and investigate (along with a legal team) what they actually mean and whether they impact their organization.
In the meantime, we can still get a jump start on collecting the data we need to meet what is sure to be the bare minimum requirement of these laws: getting a handle on what you actually have, in a way that is accessible and reportable. It’s also worth reviewing your organization’s practices for cleaning data in dev and test environments. This means generating test data, or masking/altering production data to remove or replace sensitive information. If digging into market trends to see what other organizations are doing is your thing, Redgate is making the 2018 Data Masking Market Guide available for your perusal here. I’m really enjoying my foray into these questions! Have you started trying to solve these problems in your org? What have you run into that you weren’t expecting when you started?
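As a toy illustration of the masking idea above: the core of it is replacing identifying values with non-reversible stand-ins before production rows ever reach a dev or test box. This is a minimal sketch, not how SQL Data Catalog or any particular masking tool works; the function name, salt, and sample rows are all made up for the example, and real tools also handle things like referential integrity and format preservation.

```python
import hashlib

def mask_email(email: str, salt: str = "dev-refresh") -> str:
    """Replace a real email with a deterministic, non-reversible token.

    Deterministic so the same production value always maps to the same
    masked value (keeping joins intact); hashed so the original can't be
    recovered from the dev/test copy.
    """
    digest = hashlib.sha256((salt + email).encode("utf-8")).hexdigest()[:12]
    return f"user_{digest}@example.com"

# Hypothetical production rows headed for a test environment
rows = [
    {"id": 1, "email": "jane.doe@contoso.com", "plan": "pro"},
    {"id": 2, "email": "jdoe@fabrikam.com", "plan": "free"},
]

# Copy the rows with the sensitive column masked
masked = [{**row, "email": mask_email(row["email"])} for row in rows]
for row in masked:
    print(row)
```

The same value always masks to the same token, which matters if the column is used as a key elsewhere; a per-refresh salt keeps masked values from being comparable across environments.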