Category Archives: Scripts

Creating Wait stats widget on Azure Data Studio for macOS

A couple of weeks ago, Microsoft released a new multi-platform tool called Azure Data Studio; this tool is the general availability release of SQL Operations Studio. If you are familiar with SQLOps, you probably recall that this tool is 100% open source, and because of that you can customize its JSON code to make certain things work the way that is best for you.

In my personal opinion, one of the best features of Azure Data Studio is widgets. They give DBAs and database developers the option to create their own custom widgets to access SQL Server data using simple charts. I personally don’t like the buggy reports from SSMS, which take time to load and are not fully customizable … they are like a black box to me.

I decided to give it a try and create a custom wait stats widget, because I use wait stats every day and they are also one of the most common troubleshooting methodologies among SQL Server data professionals. If you are not familiar with wait stats, you should be … they provide diagnostic information that can help determine the root cause or symptom of a potential bottleneck in multiple areas like:

  • Locking \ blocking issues
  • CPU pressure
  • Memory pressure
  • Disk IO problems

In this post, I will walk you through the creation of a custom widget to display the top 5 wait stats from your system, but first you must have Azure Data Studio installed on your computer. In case you don’t, please do yourself a favor and download the macOS version from this link.

From now on, I will assume you have Azure Data Studio installed, as well as a working connection to a SQL Server instance. All Azure Data Studio widgets use a query file as a source; in this case I created a short version of world-famous Paul Randal’s (b|t) script to determine the wait stats for a SQL Server instance. You can download the modified script here from my GitHub repository.

This short version shows only the top five wait types from your system. I also removed the links from Paul’s version where he explains each ignored wait type, just to make the code easier to read, but I strongly recommend you check Paul Randal’s blog post to learn more about them.
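To give you an idea of what such a shortened query looks like, here is a minimal sketch of a "top 5 wait stats" query against sys.dm_os_wait_stats. This is not the script from my repository, and Paul Randal's full version filters out far more benign wait types than the small sample shown here:

```sql
-- Minimal sketch: top 5 wait types by accumulated wait time.
-- The NOT IN list below is only a small sample of benign waits;
-- the full script excludes many more.
SELECT TOP (5)
    wait_type                                        AS [WaitType],
    wait_time_ms / 1000.0                            AS [WaitSec],
    100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS [Percentage]
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
    N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```

The widget only needs the query to return a label column and a numeric column, which is exactly what the chart will consume.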

Download the script and move it to a known location other than your Downloads folder; this will make things easier while creating the widget. A location like /Users/MyUser/Documents/wait_stats.sql would work.

Open the script with Azure Data Studio and execute it, then go to the results window, look at the top right corner of that section, and click on the Chart icon:

Once the chart is displayed on the screen, look for the chart options in the right panel and make the following changes:

  • Chart Type = horizontalBar
  • Data Direction = Vertical
  • Legend Position = none
  • X Axis Label = PERCENTAGE
  • Y Axis Label = WAIT TYPE

As you may have noticed at this point, a nice horizontal bar chart is displayed on your screen, but this is just halfway; now let’s move ahead and get the widget created. Click on the Create Insight button at the top of the RESULTS \ CHART grid to generate the JSON code for this chart.

Azure Data Studio will open a new window with some JSON code; you have probably noticed that the way it is shown on the screen is not that readable. Don’t you worry, we can address the format problem: just select all the JSON code using the ⌘ A (Command + A) shortcut, then run the Format Document function of Azure Data Studio using the ⇧⌥ F (Shift + Alt + F) shortcut, and voilà, the code now looks perfectly formatted:

Notice that I have highlighted the name key, because the auto-generated code puts My-Widget as its value by default; in our case we want to call this widget something like Wait stats. Also notice the queryFile key: its value points to the location where you saved the wait stats script at the beginning of this post.
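For reference, the formatted widget JSON looks roughly like the following. The exact keys are generated by Azure Data Studio and may vary by version; name and queryFile are the two values you will edit, and the path shown assumes the location suggested earlier:

```json
{
    "name": "Wait stats",
    "gridItemConfig": {
        "sizex": 2,
        "sizey": 1
    },
    "widget": {
        "insights-widget": {
            "type": {
                "horizontalBar": {
                    "dataDirection": "vertical",
                    "dataType": "number",
                    "legendPosition": "none",
                    "labelFirstColumn": false,
                    "columnsAsLabels": false
                }
            },
            "queryFile": "/Users/MyUser/Documents/wait_stats.sql"
        }
    }
}
```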

Finally, it’s time to add our custom Wait stats widget to Azure Data Studio. Open the Azure Data Studio command palette using ⌘⇧P (Command + Shift + P), then type Preferences; you will see the options displayed as a list. Choose Preferences: Open User Settings from the list.

In the Search Settings text box, type dashboard.server.widgets, then click on dashboard.server.widgets and select the Edit option:

The left panel will turn yellow in Edit mode. Now look for the closing bracket of the last widget in that panel, in my case the all-database-size-server-insight. Carefully add a comma after that bracket, then copy the JSON code from the other window and paste it. It should look something like this:
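As a sketch of what the edited setting ends up looking like (abbreviated here, and your existing widgets may differ), the new widget object is simply appended to the dashboard.server.widgets array after the comma:

```json
"dashboard.server.widgets": [
    {
        "gridItemConfig": { "sizex": 2, "sizey": 1 },
        "widget": { "all-database-size-server-insight": null }
    },
    {
        "name": "Wait stats",
        "gridItemConfig": { "sizex": 2, "sizey": 1 },
        "widget": {
            "insights-widget": {
                "queryFile": "/Users/MyUser/Documents/wait_stats.sql"
            }
        }
    }
]
```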

Then simply save the changes to the user settings using the ⌘S (Command + S) shortcut and close all the open windows. Re-connect to one of your SQL Server instances, right-click it, then choose Manage; it will display the server dashboard, where you can now look for the widget we just created.

I hope you find this tip useful. In case you want to grab a copy of the SQL and JSON files used in this post, you can find them here.

Stay tuned for more DBA mastery tips!

I’m a very experienced multi-platform Database Administrator (Oracle, SQL Server, MySQL), working for large-scale US-based companies and some other companies around the world.

I have multiple Microsoft SQL Server certifications under my belt (MCP, MCTS, MCSA, MCSE), and I am currently taking the Microsoft Data Science program.

Reviewing the SQL Server ERROR log

 

The SQL Server ERROR log is the best source to understand what’s wrong with your SQL Server instance; think of this log as your flight recorder.

In case you are not too familiar with the SQL Server ERROR log, the xp_readerrorlog extended stored procedure is your friend, and as with a good friend, you must get to know it. In this post, you will find some examples of how to take advantage of this undocumented stored procedure and make the most of your time, avoiding slow and painful tools like the Windows Event Viewer.

First things first, let’s understand which parameters we need to provide to the stored procedure and, most importantly, how we have to use them.

Archive ID:
The first parameter picks which log, by age, we want to query:

  • Current log = 0
  • Archived error log #1 = 1
  • Archived error log #2 = 2

Log type:
Then, the second parameter is the SQL Server component we are focusing on:

  • SQL Server Instance Log = 1
  • SQL Agent Log = 2

So, combining these two parameters, we can check the current log (0 in the first parameter) for the SQL Server instance (1 in the second parameter) or the SQL Server Agent (2 in the second parameter).

Here is how it looks in T-SQL code:

-- Checking the current SQL Server instance and SQL Agent logs:
xp_readerrorlog 0,1 -- Remember 1 for SQL Server instance
xp_readerrorlog 0,2 -- Remember 2 for SQL Server Agent

We can also include an optional parameter: a simple search string. This parameter is really handy when you are looking for a specific event.

Here are some T-SQL examples of how we can combine the first two parameters with a specific search string:

-- Checking for failed logins in our SQL Server instance:
xp_readerrorlog 0,1,'Login failed' --For SQL Server 2012 and below
xp_readerrorlog 0,1,"Login failed" --For SQL Server 2014 onwards

-- Checking for TempDB related entries in our SQL Server instance:
xp_readerrorlog 0,1,'TempDB' --For SQL Server 2012 and below
xp_readerrorlog 0,1,"TempDB" --For SQL Server 2014 onwards
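Beyond the search string, xp_readerrorlog accepts a few more undocumented parameters: a second search string (combined as an AND condition), a start time, an end time, and a sort order. As a sketch (the dates below are just example values, and since the procedure is undocumented the behavior may vary slightly by version):

```sql
-- Failed logins mentioning 'sa' during October 2018, newest first:
EXEC xp_readerrorlog
    0,                -- current log
    1,                -- SQL Server instance log
    N'Login failed',  -- first search string
    N'sa',            -- second search string (AND condition)
    '2018-10-01',     -- start time
    '2018-10-31',     -- end time
    N'desc';          -- sort order: newest entries first
```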

Hopefully this tip helps you save some time when checking your SQL Server instance for recent errors, or for events from the past that you might need to collect for further analysis.

Stay tuned for more DBA mastery tips.


MSDB Tuning by purging backup and restore history

Have you ever had problems with your backup jobs taking more time than usual and overlapping your maintenance window? Is the disk IO throughput fast enough? Or is your SQL Server not performing well? What could be hurting performance and preventing your backup job from completing in a timely fashion?

The answer to all these questions is not that difficult to find; in most cases there is a common point of failure for backup jobs that take more and more time as time goes by: your databases are bigger, and your backup and restore history has been retained forever (since the SQL instance was created).

Most DBAs in the wild don’t know about this very important performance tuning aspect; they completely forget to pay attention to the MSDB database. Probably because it is a system database, there is an assumption that it does not require maintenance like any other regular database.

According to Microsoft docs, you simply have to run the sp_delete_backuphistory stored procedure in the MSDB database, providing a date parameter to be used by this process as the oldest date for which to retain data … but guess what, it does not work well and is not that simple.
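The documented call is a single statement; a 90-day retention cutoff is just an example value here, and on a large MSDB this one call can run for a long time:

```sql
-- Purge all backup and restore history older than the cutoff date.
-- 90 days is an example retention target; pick your own.
DECLARE @oldest_date datetime = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest_date;
```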

Before you go for it, you must consider the following aspects:

  • Purging this backup and restore history leads to blocking in MSDB
  • Running the sp_delete_backuphistory stored procedure can cause odd problems for your ongoing or scheduled SQL Agent jobs
  • The sp_delete_backuphistory procedure is not optimal and takes a long time to run

So what are your options?

  • Understand what the sp_delete_backuphistory stored procedure is doing and write your own

Probably not the best option, because it will require you to invest time doing research, analysis, design and testing.

  • Look for missing indexes in all the MSDB tables purged by the sp_delete_backuphistory stored procedure (the best option, considering the amount of time you would otherwise have to invest creating a new process)
  • Schedule a job to purge the backup history in short batches after all your daily jobs are completed

This is a good option, because we have identified those indexes beforehand for you, and there is no development effort to re-create a process that already exists. You can probably also schedule a job to run every weekend that purges two weeks of data at a time, so MSDB is not overwhelmed by the purging effort.
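The batched approach can be sketched as a simple loop that walks forward from the oldest backup on record, two weeks per pass, until only the desired retention remains. The 90-day retention and 14-day batch size are example values; schedule something like this as a SQL Agent job during a quiet window:

```sql
-- Incremental purge sketch: delete history 2 weeks at a time,
-- starting from the oldest backupset row, until the retention
-- cutoff is reached.
DECLARE @oldest datetime, @cutoff datetime;

SELECT @oldest = MIN(backup_start_date) FROM msdb.dbo.backupset;
SET @cutoff = DATEADD(DAY, -90, GETDATE());  -- retention target

WHILE @oldest IS NOT NULL AND @oldest < @cutoff
BEGIN
    SET @oldest = DATEADD(DAY, 14, @oldest);     -- purge 2 weeks per pass
    IF @oldest > @cutoff SET @oldest = @cutoff;  -- don't overshoot retention
    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest;
END
```

Because each pass deletes a small slice, locks are held briefly and ongoing SQL Agent jobs are far less likely to be blocked.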

Based on our experience, after adding the missing indexes to all the tables involved in the purge process, we noticed a 60% improvement in the runtime of the sp_delete_backuphistory stored procedure. So go ahead and try it, and let us know how everything works for you 🙂
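For illustration only, the indexes in question are of this general shape: nonclustered indexes on the foreign-key columns that sp_delete_backuphistory joins on. The index names below are hypothetical examples, and the full list lives in the script linked in the note that follows:

```sql
-- Hypothetical examples of supporting indexes for the purge;
-- see the linked MSDB Missing indexes script for the full set.
USE msdb;
GO
CREATE NONCLUSTERED INDEX IX_backupset_media_set_id
    ON dbo.backupset (media_set_id);
GO
CREATE NONCLUSTERED INDEX IX_restorehistory_backup_set_id
    ON dbo.restorehistory (backup_set_id);
GO
```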

Happy purging, stay tuned for more DBA mastery tips.

Note:

  • These indexes work starting from SQL Server 2008 R2 SP3
  • You can find the MSDB Missing indexes script here
