This is one of those posts I write so I never have to google it again (one hopes). Here is the PowerShell code to pull back a set of details about every SSRS instance installed on a server, including the SSRS instance name, the first URL port it is running on, the service name, the name of the report server database, etc.
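Something along these lines, using the SSRS WMI provider. The `v13` namespace segment (SSRS 2016) is an assumption you will need to adjust for your version, and the property names come from the `MSReportServer_ConfigurationSetting` class:

```powershell
# Sketch: enumerate SSRS instances via the WMI provider
# (assumes SSRS 2016 / v13; change the version segment for other releases)
$namespaces = Get-CimInstance -Namespace 'root\Microsoft\SqlServer\ReportServer' -ClassName __NAMESPACE

foreach ($ns in $namespaces) {
    $cfg = Get-CimInstance `
        -Namespace "root\Microsoft\SqlServer\ReportServer\$($ns.Name)\v13\Admin" `
        -ClassName MSReportServer_ConfigurationSetting

    [pscustomobject]@{
        InstanceName = $cfg.InstanceName
        ServiceName  = $cfg.ServiceName
        DatabaseName = $cfg.DatabaseName
        # ListReservedUrls returns parallel Application/UrlString arrays
        Urls         = ($cfg | Invoke-CimMethod -MethodName ListReservedUrls).UrlString -join '; '
    }
}
```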
2018 was a challenging year for all sorts of reasons. I set some modest goals for myself and it was a mixed bag. My work goals got kind of demolished: I failed to post much last year (although I am happy I now have a post to return to every time I need to write a pivot), and after multiple attempts I failed to pass the MS 70-463 exam on building a data warehouse (I just couldn’t seem to get the last 50-100 points to push me over the top). I did okay on my personal goals though: I passed my nidan test back in September and I am about a third of the way through learning to fingerpick Loch Lomond (the definitive version is Runrig’s live one) on my tenor uke. I would call the year a failure, except that I did a lot (especially in the first 3 months):
- Coordinated and ran the first SQL Saturday here in Spokane.
- Presented at said SQL Saturday.
- Managed to keep the local PASS chapter meeting every month (even while I was out of the country) with the assistance of my co-leader Elizabeth Hunt (t).
- Attended SQL Sat Oregon, rode the SQL Train to PASS Summit. I finally managed to make the Summit meet-up with other chapter leaders and SQL Saturday organizers: so much learning and networking.
- Started the process of creating a not-for-profit to handle the money for our SQL Saturday, since it was successful enough to warrant another one. We officially incorporated right before Christmas.
- At work I finally got automated backup testing up and running. It’s probably a little over-engineered but it was a solid first effort.
- I also began attending the data governance meetings at work so we could start incorporating our SQL data into their consideration and I got us enrolled in the EAP for Redgate’s Data Catalog.
This last one is by far one of my favorite things from this year. I’ve spent a lot of time thinking about how we treat our data, but beyond being sure it was encrypted and backed up I haven’t been able to do much else concrete. In the last month of 2018, though, I was able to start cataloging data in our internally developed databases and get a start on one of our bigger vendor databases. Adding this metadata will give us a much better view of what kinds of data we are storing and where it is. I’m happy with this initial run at the problem space, and I can already see where it starts to answer some questions we should all be asking about the data in our care.
2019 is looking to be another challenging and busy year: I need to refocus and look at my path to an MCSA/MCSE certification. I’m hoping I can take the 70-764 and 765 exams this year. I was hoping to get my 2012 MCSA and upgrade that, but I still don’t do enough SSIS work to get myself over the last-point hurdle with the 70-463. And being certified on the upcoming versions is probably worth my time.
I’ll finish putting together the Inland Northwest Data Professionals Association in the next couple of months, just in time to run the second SQL Saturday Spokane. I also applied to the Friends of Redgate program and we’ll see what comes of that: that could lead to a whole host of community contributions that I can’t even foresee right now.
I don’t find myself pivoting data often, but when I do I seem to always need the “complicated” pivot. You know, the one where you have a collection of unique values like email addresses, where the value isn’t the column name but you want the four email addresses connected to one person in a single row rather than four rows. I can never remember how to do that, so here is my personal internet bookmark for that code. Thanks to the late Robert Davis, who gave the answer I needed back in 2015. Robert was a wonderful and generous contributor to the SQL Family; his site is full of useful information and is still available here.
The solution is to add the categories you want as eventual columns to your result set before pivoting (it sounds simple enough, but again, I can never remember it). So say you have an account with multiple contacts:
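A hypothetical stand-in for that setup: a throwaway table of accounts, each with multiple contact emails (table and column names are mine, not from any real schema):

```sql
-- Hypothetical example table: one account, many contact emails
CREATE TABLE #AccountContact
    ( AccountId INT
    , Email     VARCHAR(100) );

INSERT INTO #AccountContact ( AccountId, Email )
VALUES ( 1, 'alice@example.com' )
     , ( 1, 'bob@example.com' )
     , ( 2, 'carol@example.com' )
     , ( 2, 'dave@example.com' )
     , ( 2, 'erin@example.com' );

SELECT AccountId, Email
FROM #AccountContact;
```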
This returns a list of results such as:
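Assuming a couple of hypothetical accounts each with a few contact emails, the unpivoted list looks something like:

```
AccountId  Email
---------  -----------------
1          alice@example.com
1          bob@example.com
2          carol@example.com
2          dave@example.com
2          erin@example.com
```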
The one simple trick is to create a value for each row that will be repeated, so it can be used as a column name when you pivot:
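A sketch of that trick with ROW_NUMBER(), numbering each email within its account so the counter can be turned into a column name:

```sql
-- #AccountContact is a hypothetical (AccountId, Email) table
SELECT AccountId
     , Email
     , 'Email' + CAST(ROW_NUMBER() OVER ( PARTITION BY AccountId
                                          ORDER BY Email ) AS VARCHAR(10)) AS EmailCol
FROM #AccountContact;
```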
Simply repeat the structure of creating a valid column name (‘String’ + CAST(counter AS VARCHAR)) and you can include a pivot across as many different columns as need to be included. Our short list above now becomes:
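Putting it together, a sketch of the full pivot against the hypothetical #AccountContact table of (AccountId, Email) pairs. MAX() is just a way to satisfy PIVOT’s aggregate requirement, since each cell holds exactly one value:

```sql
SELECT AccountId, Email1, Email2, Email3, Email4
FROM ( SELECT AccountId
            , Email
            , 'Email' + CAST(ROW_NUMBER() OVER ( PARTITION BY AccountId
                                                 ORDER BY Email ) AS VARCHAR(10)) AS EmailCol
       FROM #AccountContact ) src
PIVOT ( MAX(Email) FOR EmailCol IN ( Email1, Email2, Email3, Email4 )) p;
```

Each account now comes back as a single row, with NULLs filling the email columns beyond that account’s contact count.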
Hopefully now in the future I’ll remember I wrote this and I won’t have to go searching for the solution (again).
So this is a thing: I was nominated as an outstanding volunteer for PASS for April 2018. I really appreciate that someone thought I was doing something outstanding. I am bad at both accepting compliments and self-promotion. I really believe in PASS as a positive extension of the SQL community. I know that I wouldn’t be where I am today without the generous and helpful people who want to help me and others do and be our best in meeting the data needs of our respective employers/clients/organizations. It’s only natural to turn around and help those around you who want to do better for themselves as well. I’m excited for the growth of our users group and the success of our SQL Saturday: I could not have done it alone.
My road to tools has been both short and long. Short because I started out (just a few years ago) thinking I didn’t need much more than a connection to the database and my own brain. Long because I had not yet learned, through misfortune, how much pain good tooling can eliminate. It started one crisp winter day, as a bright-eyed junior DBA, when I was informed I would be supporting an internal deployment that evening: the first since I started about 5 months prior. Our internal databases are relatively small and mostly simple, so none of this was particularly frightening.
“They’ll give you a deployment script to run on the database. Make sure you take a backup so you can roll back if you need to, but it shouldn’t take more than a few minutes of your evening. Good luck,” they told me. With an update like that what could go wrong? Pretty much most of the things.
The developer provided me with a script. I dutifully took the database backup, ran the script, and waited for him to confirm the update had applied successfully.
“Something’s wrong. Hang on a sec. I forgot to include some changes in the script.” Within a few minutes I had a second script to run in addition to the first, which I quickly ran against the database. A pause.
“No, it’s still not working as expected. Um, let me check some things…” they sent me. This went on for 30 minutes, then an hour. I couldn’t make the call to roll back and cancel the upgrade, and the developer was researching what he might have missed in his change script. I asked often if he had any way to tell me the total list of objects he expected to have changed for the deployment to work (“Who needs a staging environment?” past me complained). A “simple” deployment that I had expected to take maybe 30 minutes (including logging in and waiting around for the start time) dragged on to close to 2 hours. Eventually it was completed and we went our separate, frustrated ways for the evening. The next morning I started researching tools for diffing databases and found Redgate’s SQL Compare. Within a week I had approval to purchase their toolbelt, and the next time there was a deployment we scheduled in time for the developers to review the differences between their test environment and production and come up with a deployment script. It took a little longer, but I was finally able to convince the development team they needed their own tooling to manage their database code. After about 18 months we had them using ReadyRoll to manage database migrations, and now, a few years later, we are finally getting to the point where we can use a deployment tool like Octopus Deploy so none of us DBAs have to get up at all.
Besides SQL Compare, the other tool that I really notice when I don’t have it (absence makes my heart so much more sad) is SQL Prompt. Having to work on a machine without it makes me feel like an idiot now that I’m used to having easy access to a library of code snippets (and automatic code formatting: tabs forever!).
It’s not essential (yet), but recently I’ve been trying to use PowerShell more when I can, and I quickly discovered dbatools, which does a lot of the heavy lifting and provides immediate access to a lot of management tasks. My favorite is database migration: we used to manually copy backups from one server to another and restore them, or detach a database, copy the mdf/ldf to the new server, and re-attach. That was a pain: I can never remember the exact syntax, and I end up having to RDP to the server to move files. Ugh. But with a single line of PowerShell I can take a backup (or backups) to a network location and restore the backup(s) without ever leaving my own desktop, and I’ve barely scratched the surface of what I can do with it: highly recommended!
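A sketch of that one-liner using dbatools’ Copy-DbaDatabase; the server names and share path here are placeholders, not real infrastructure:

```powershell
# Back up SalesDb to a network share and restore it on the destination in one step
# (SQLPROD01/SQLPROD02 and the share path are hypothetical)
Copy-DbaDatabase -Source SQLPROD01 -Destination SQLPROD02 -Database SalesDb `
    -BackupRestore -SharedPath '\\fileserver\sqlbackups'
```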
I’ve been taking notes hoping to share some lessons learned from our deployment of Redgate’s SQL Clone, but they kept it so deliciously simple, and the documentation and worked examples for PowerShell are difficult to improve upon (also, we aren’t doing anything terribly fancy yet either). My only gotcha was learning that if you don’t use the Redgate extension (.SQB) for backups, SQL Clone won’t recognize that the backup was compressed and encrypted with SQL Backup (we’ve wrapped an existing backup procedure around the SQL Backup proc to preserve our old naming conventions when we switched to the new backup tool). The workaround was simple: replace the backup file extension prior to creating the image from the backup. If you think you could benefit from cloning your databases to replicate (and mask) production data, I encourage you to check it out. In the meantime I’ll see if I can find a use case or workaround that needs some more documentation as we roll it out for our test group.
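The extension swap is a one-liner; the share path and the original .bak extension here are assumptions about your naming convention:

```powershell
# Rename backups so SQL Clone recognizes them as SQL Backup (.SQB) files
Get-ChildItem '\\backupshare\SalesDb' -Filter '*.bak' |
    Rename-Item -NewName { $_.Name -replace '\.bak$', '.SQB' }
```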
I was thinking more about numbers and wanted to provide some further breakdown on our SQL Saturday. It seemed like a large number of BI-focused people signed up/attended, and that was validated by the numbers: 21% seemed to have a BI focus to their job, with titles like BI Consultant, BI Developer, Business Intelligence Analyst, Business Intelligence Team Lead, Data Analyst, Data Integration Developer, Information Analytics Manager, Information Delivery Manager, Information Systems Analyst, Manager of Predictive Analytics, and Financial Analyst. Application developers were the next biggest group at 14%; this included developers of multiple types, software architects, engineers, and programmers. Close on the developers’ heels were DBAs of different stripes at 12%. The last significant group, at 10%, were folks who were managers (or higher) at their organization, including team leads and even one VP of Professional Services. 80% of our registrants were local (with a rather large circle of “local,” considering we’re the only user group for eastern Washington and northern Idaho), which bodes well for our local PASS chapter. The largest distance traveled (for a non-speaker) was approximately 1200 miles, from Pasadena, CA. Some other numbers: 4 of our 22 speakers were part of our local group (19%), although we drafted a local professor into presenting as well, upping our local speaker count to 5 (24%). 6 of our 22 speakers (27%) were women, and while I’d like to see that percentage higher, for our first time out we didn’t fall completely down in that department. Running a quick informal peek at the attendee names, 25 out of 93 attending (27%) seems to match up with our speaker breakdown as well.
Wow! The weekend was crazy but amazing. In my best dreams we got a little better than 50% buy-in for our first SQL Saturday: I wasn’t sure where all our local DBAs were, and I expected we’d end up rather student-heavy. Instead we had 113 registered as of Saturday morning (including some middle-of-the-night registrations) and 93 bodies through the door on the day: that’s an 82% attendance rate! 72 of those folks (64%) were SQL Saturday first-timers (and in an informal poll on Saturday many of them were new to PASS as well). 23 (20%) of those registered were students.
The event itself went rather smoothly: we were at a university in their classrooms, so the technology aspect was well tested by daily use (we had an issue in one room and in the end just moved to a different open room). It’s been over a year since I was able to make it to a SQL Saturday or Summit, and I’d forgotten how energizing it can be. I know I’ve been struggling to be enthused about my job this winter, but I have a newfound energy that I can attribute directly to the chance to interact with such a great professional community. Even better, we were able to really promote the local users group, and I’ve already seen a better response to our next meeting than we’ve had in the past, so I think we managed to find a lot more of our local DBAs here in Spokane. I’m looking forward to seeing them at our monthly meetings as well as SQL Saturday Spokane 2019. We’re definitely going to be doing this again and improving on our success this year.
We’ve been re-arranging some deck chairs at work (deploying a new backup tool in our SQL environment). As we’ve been working our way through that, we’ve had to update or re-engineer some of our processes for automatic restores to use the new tool. We’re doing one restore, though, where no one could recall who actually used the restored database. After asking around for a month we didn’t have any better idea of whether the database was actually used or not. Being a risk-averse (not to mention polite) group, we wanted to be really sure that no one was using it. Thus the inspiration for this post was born: we needed a way to audit database access so we could confidently say, one way or the other, whether our mystery database was in use.
We needed something quick and lightweight, and after a quick spin through the SQLHelp channel on Slack I’d been pointed in the direction of Extended Events: SQL combined with XML, my forever nemesis. Thanks go to Thomas Schutte (b) for his help, particularly his blog post on getting started with XML in SQL. So here is some useful code if you need to audit database use on a server.
First we need a simple Extended Events session. I picked the sql_batch_completed event so I knew I would catch users as they changed database context on the server and executed code. I kept the data just in the ring buffer, since I’ll be checking it often to pull out relevant data and store it in a table.
```sql
-- The session collects the actions the collector query below depends on:
-- database_name, username, and sql_text
CREATE EVENT SESSION [UserContextAudit]
ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    ( ACTION ( sqlserver.database_name
             , sqlserver.username
             , sqlserver.sql_text )
      WHERE sqlserver.session_id > 100 )
ADD TARGET package0.ring_buffer
WITH ( MAX_MEMORY = 1024KB
     , STARTUP_STATE = ON );

ALTER EVENT SESSION [UserContextAudit]
ON SERVER STATE = START;
```
Generating Extended Events sessions is easy peasy, especially when using the wizard in SSMS. The terrible (for me) part is parsing the results from XE, which are stored as XML. I got about 65% of the way there but struggled to get the data points I needed out of the XML. Thomas’s post above pointed me in the direction of the second CROSS APPLY I needed to get the missing XML nodes into my temp table. Once you have a good query to extract the data from the XE session, you just dump the results into a table. I’m running this script once a minute via a SQL Agent job to preserve the information.
```sql
SET QUOTED_IDENTIFIER ON;

DECLARE @XML AS XML;

SELECT @XML = ( SELECT TOP 1 CAST(xet.target_data AS XML) AS XEData
                FROM sys.dm_xe_session_targets AS xet
                JOIN sys.dm_xe_sessions AS xe ON ( xe.address = xet.event_session_address )
                WHERE xe.name = 'UserContextAudit'
                AND xet.target_name = 'ring_buffer' );

-- Shred each event and its actions into rows of (EventDate, ActionName, ActionValue)
SELECT ce.event_node.value('@timestamp', 'DATETIME2') AS EventDate
     , ca.action_node.value('@name', 'VARCHAR(200)') AS ActionName
     , ca.action_node.value('(value/text())[1]', 'VARCHAR(MAX)') AS ActionValue
INTO #temp
FROM ( VALUES ( @XML )) xx ( XMLData )
CROSS APPLY xx.XMLData.nodes('//RingBufferTarget/event') ce(event_node)
CROSS APPLY ce.event_node.nodes('./action') ca(action_node);

-- Re-join the shredded rows so each event becomes one (username, databasename) pair
SELECT DISTINCT a.EventDate
     , a.ActionValue AS username
     , b.ActionValue AS databasename
INTO #temp2
FROM #temp a
JOIN #temp b ON b.EventDate = a.EventDate
JOIN #temp c ON c.EventDate = b.EventDate
WHERE a.ActionName = 'username'
AND b.ActionName = 'database_name'
AND c.ActionName = 'sql_text';

-- Bump the last-seen date for user/database pairs we already track
UPDATE upd
SET upd.lastdate = d.EventDate
FROM AuditDB.audit.usertracking upd
JOIN #temp2 d ON d.username = upd.username
AND d.databasename = upd.databasename
WHERE upd.lastdate < d.EventDate;

-- Add user/database pairs we have not seen before
INSERT INTO AuditDB.audit.usertracking ( username
                                       , databasename
                                       , lastdate )
SELECT i.username
     , i.databasename
     , MAX(i.EventDate)
FROM #temp2 i
WHERE NOT EXISTS ( SELECT *
                   FROM AuditDB.audit.usertracking a
                   WHERE a.username = i.username
                   AND a.databasename = i.databasename )
GROUP BY i.username
       , i.databasename;

DROP TABLE #temp;
DROP TABLE #temp2;
```
Run that XE session and SQL Agent collector job for a month or so and you should have a solid idea of which databases are actually being used on your server. There is a non-zero chance that a database might only be referenced in a cross-database query, but if you think that’s possible, the sql_text from the XE session can be analyzed to look for and pull apart three-part names for databases referenced this way.
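A rough version of that check, run against the shredded action rows before the temp table is dropped (MysteryDb is a hypothetical database name; a LIKE pattern is a blunt instrument, but it is enough to flag batches worth reading):

```sql
-- Flag captured batches that appear to reference MysteryDb by three-part name
SELECT DISTINCT a.EventDate
     , a.ActionValue AS sql_text
FROM #temp a
WHERE a.ActionName = 'sql_text'
AND a.ActionValue LIKE '%MysteryDb.%';
```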
In 8 days Spokane will play host to its first-ever SQL Saturday event (yay!). Right after I got my first DBA job I quickly discovered the PASS community (via Brent Ozar, to be specific; the senior DBA was quite the fan, and their 6-month JR DBA training plan was invaluable in getting my sea legs). I quickly discovered a truly welcoming and helpful community that at its core is deeply invested in the success and growth of its members. I attended my first SQL Saturday in Portland in 2014 (and again in 2015, and Redmond in 2016). I learned about Summit, which I first attended in 2015 (and again in 2016). After attending I was frustrated that there was no local user group to connect with (boo!), but as soon as I decided to take steps to create a Spokane PASS chapter (or join us on fb), another local DBA beat me to the punch (yay!). However, I found myself running the group in the fall of 2016, and a small group of us have met regularly since then.
Something was still missing though: in a town of a quarter million, surely we had more than 10 DBAs. I reached out to the wonderful Melody Zacharias (b|t) to get her thoughts about a SQL Saturday in our neck of the woods. She was encouraging and graciously agreed to help organize our own event. Today I am excited to report that our event is over 75% booked, with more than half of those registered being new to SQL Saturday (and likely to the PASS community at large). I have high hopes for next weekend: if you are one of the many people who submitted sessions to present: thank you; just receiving all your submissions validated our idea. If you are one of the few who were selected to present and are able to join us next weekend: thank you; I am incredibly excited to meet and learn from you. If you’ve registered to attend: thank you; I am excited to meet you and hopefully get the chance to tell you how great PASS is and invite you to our monthly meetups as well!