Wednesday, 31 October 2012

Failed to open the explicitly specified database - but what database was it?

A situation where SQL is helpful, but not quite helpful enough, is when it reports that a user is unable to access a certain database.

SQL will now helpfully tell you the user - it will even give you the IP address of the machine they tried to connect from - leaving you short of just one minor detail ... which database it was they were trying to access.

This will show up in the error logs as something like:

Error: 18456, Severity: 14, State: 38.

Login failed for user 'Grumpy\DBA'. Reason: Failed to open the explicitly specified database. [CLIENT:]

So what was the specified database??

I'm not sure if there is a way to retrospectively find this out, but you can certainly use Profiler if it is something that is still happening.

In Profiler, the event class that gives you the magic information is under "Errors and Warnings", and is the "User Error Messages" event.  That's right - SQL tells the user what database they can't open, but won't tell you!

You should also enable the "Security Audit" > "Audit Login Failed" event, and add a filter against "LoginName" for the user name in the original message.

What you should see are "Audit Login Failed" messages for the login you care about, that give the same message as in the Error Log, but you should also see a "User Error Message" entry just before it, and in here you get:

Cannot open database "GrumpyDatabase" requested by the login. The login failed.

The ApplicationName field may also help identify what the user is trying to use to access this database, but at least now you are armed with all the relevant details when you challenge them about trying to access your databases.
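As an aside, if Profiler isn't convenient, a similar capture can be done server-side with an Extended Events session. This is just a sketch of my own, not something from the post above - it assumes error 4060 (the "Cannot open database" message the user sees), and note the predicate field is named "error" on SQL 2008 / 2008 R2 but was renamed "error_number" in SQL 2012:

```sql
-- Sketch only: capture "Cannot open database" (error 4060) server-side.
-- On SQL 2012 and later, change "error" to "error_number" in the predicate.
CREATE EVENT SESSION CaptureDbOpenFailures ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.username)
    WHERE error = 4060
)
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION CaptureDbOpenFailures ON SERVER STATE = START;
```

The ring_buffer target can then be queried (or browsed) at your leisure, rather than needing Profiler running at the moment the user hits the problem.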

Tuesday, 23 October 2012

What reports are being read in SSRS

If you want to check which reports are being run (or not) in SSRS, or details about which users are running reports, you can pull this information from the ReportServer database.

If you need to check you are on the right database, you can find the SSRS machine name and instance that a ReportServer database relates to by querying its Keys table:

SELECT MachineName, InstanceName
FROM dbo.Keys
WHERE Client = 1

Now to find out who's been doing what ... and SQL keeps it all nicely for us in the table "ExecutionLog".

You need to join this back to Catalog to get the report names / paths, but you can then easily start to get information on the most frequently run reports, which users are running certain reports, or even which reports are slow to render.

E.g. to get details on how often a report is run:

SELECT c.Name, c.Path, COUNT(*) AS ReportRun
FROM ExecutionLog e
JOIN Catalog c ON e.ReportID = c.ItemID
GROUP BY c.Path, c.Name

To find out which users are running a certain report:

SELECT e.UserName, COUNT(*) AS TimesRun
FROM ExecutionLog e
JOIN Catalog c ON e.ReportID = c.ItemID
WHERE c.Name = 'ReportName'
GROUP BY e.UserName
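And since ExecutionLog also holds timing columns, a query along the same lines (a sketch - the Time* columns are in milliseconds) can show which reports are slow to render:

```sql
-- Average timings per report - highest average render time first.
SELECT c.Name, c.Path,
       COUNT(*) AS ReportRun,
       AVG(e.TimeDataRetrieval) AS AvgDataRetrievalMs,
       AVG(e.TimeProcessing) AS AvgProcessingMs,
       AVG(e.TimeRendering) AS AvgRenderingMs
FROM ExecutionLog e
JOIN Catalog c ON e.ReportID = c.ItemID
GROUP BY c.Path, c.Name
ORDER BY AvgRenderingMs DESC
```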

I'm sure you can find other details to pull out of this table that may prove useful.


By default the retention period for the ExecutionLog table is 60 days.  However this is adjustable by updating the ExecutionLogDaysKept row in the ConfigurationInfo table.  Set the number of days to keep, or use "-1" to keep data indefinitely.

UPDATE ConfigurationInfo 
SET VALUE = '-1' 
WHERE Name = 'ExecutionLogDaysKept'

NOTE - Ignore Books Online, which says this should be set to 0 - if you look at the clean-up proc in the ReportServer database, namely ExpireExecutionLogEntries, you can see it actually only leaves the data alone if the value is set to -1, not 0.

I would also recommend you download the SSRS Reports Dashboard.  This plugs into SSMS and once installed, you just right click on a ReportServer database, and it gives you access to all sorts of info relating to the relevant SSRS instance.

Wednesday, 19 September 2012

SQL 2008 R2 Reporting Services won't restart after SP2

Seems a few people have encountered a problem with SSRS 2008 R2 after installing SP2.

In my case the SSRS instance would not restart, so I checked Event Viewer.
There were no messages at all under the Application log, and all the system log stated was:

Timeout (30000 milliseconds) waiting for the SQL Server Reporting Services (Grumpy) service to connect.

I tried changing the account that SSRS was running under, but it would still not restart, and I was considering un-installing SP2 (KB 948109).

A bit more investigating though, and there didn't seem to actually be any error from SSRS, so I tried changing the service startup timeout value to 60 seconds.

This time SSRS started successfully (albeit slowly).  It did require me to backup the encryption key file, but I think that is more likely to have been related to playing around with the account it was running under rather than the timeout change.

The post I found that details changing the timeout is as follows:

  • To increase the timeout value in the registry, follow these steps:
    1. Start Registry Editor (Regedit.exe).
    2. To change the value data for the ServicesPipeTimeout DWORD value to 60000, follow these steps:
      1. Locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet
      2. Click the Control subkey.
      3. Right-click the ServicesPipeTimeout DWORD value, and then click Modify.
      4. Click Decimal.
      5. Type 60000, and then click OK.
    3. If the ServicesPipeTimeout value is not available, add the new DWORD value, and then set its value data to 60000. To do so, follow these steps:
      1. Locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet
      2. Click the Control subkey.
      3. On the Edit menu, point to New, and then click DWORD Value.
      4. Type ServicesPipeTimeout, and then press ENTER.
      5. Right-click the ServicesPipeTimeout DWORD value, and then click Modify.
      6. Click Decimal.
      7. Type 60000, and then click OK.

        The value 60000 milliseconds is equivalent to 60 seconds, or one minute.
      Note: This change does not take effect until the computer is restarted.

Thursday, 1 March 2012

Finding recent SQL deadlocks

Identifying deadlocks is often a pain, and previously required turning on various trace flags, which then filled up your SQL error log with deadlock output.
Great if you have time to go through the error log - not so helpful if you just want a quick overview of when deadlocks happened, and what objects were involved.

The trace flag method also wasn't much use when someone asked about a deadlock that had just happened, and the trace flags weren't already running, and your time-machine is at the menders.

Dashing to the rescue in SQL 2008 R2 is "Extended Events".
This works a bit like a built-in Profiler trace for certain events.  And as luck would have it, SQL has a system event session that already captures deadlock information for you, without you even needing to turn anything on.
Also, as the data is stored in XML, you can read it directly into a useful format.

I wasn't very familiar with the SQL XML parsing, and I've seen various complicated ways of retrieving the useful data.
However I found that some of the methods really hammered server CPU, so I played around until I found a relatively straightforward and low-impact way of reading the XML.


DECLARE @XMLData XML

SELECT TOP 1 @XMLData = CAST(target_data AS XML)
FROM sys.dm_xe_session_targets st
JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE s.name = 'system_health'

SELECT  col.value('(./@timestamp)', 'DATETIME') AS Deadlock_TimeStamp , 
col.value('(data/value)[1]', 'VARCHAR(MAX)') AS DeadLock_XML
FROM @XMLData.nodes ('//event') AS X(Col)
WHERE col.value('(data/value)[1]', 'VARCHAR(MAX)') LIKE '<deadlock%'

OK, so the deadlock info is still XML, but we immediately have the date/time of deadlocks, and the details of each event.

Diving further into XML coding, I found I could still pull out some more useful info without needing to manually start looking anything up yet.
For example, run the previous select into a temp table, such as #DATA, and you can now run this:

SELECT y.Deadlock_TimeStamp,
DB_NAME(y.currentdb) AS DBName,
y.spid,
y.hostname,
y.waitresource,
CASE WHEN y.inputbuf LIKE '%Object Id%'
THEN OBJECT_NAME(SUBSTRING(y.inputbuf, CHARINDEX('Object Id', y.inputbuf) + 12, LEN(y.inputbuf) - CHARINDEX('Object Id', y.inputbuf) - 12), y.currentdb)
ELSE '' END AS ObjectName,
y.inputbuf
FROM (
SELECT x.Deadlock_TimeStamp,
x.a.value('(/process[1]/@spid)[1]', 'INT') AS spid,
x.a.value('(/process[1]/@waitresource)[1]', 'VARCHAR(100)') AS waitresource,
x.a.value('(/process[1]/@hostname)[1]', 'VARCHAR(100)') AS hostname,
x.a.value('(/process[1]/@currentdb)[1]', 'INT') AS currentdb,
x.inputbuf
FROM (
SELECT d.Deadlock_TimeStamp, Data.Block.query('.') AS a, Data.Block.value('(inputbuf)[1]', 'VARCHAR(MAX)') AS inputbuf
FROM #DATA d
CROSS APPLY (SELECT CAST(d.DeadLock_XML AS XML) AS DeadlockXml) c
CROSS APPLY c.DeadlockXml.nodes ('//process-list/process') AS Data (Block)
) x
) y

Now for each deadlock we know the database, what was running (the inputbuf field) or the object being referenced, and the waitresource details.

I've done a couple of posts on how to identify what the waitresource field refers to, so you could go further and automate that as well if you want, but for now this gives a quick overview of what was going on in the deadlocks.

Wrap all this up in a proc, add a date parameter or two, and you have a quick way of returning details on yesterday's deadlocks the next time your time machine is broken!

Word of warning - the data in sys.dm_xe_session_targets is purged when the event session is restarted, or the ring_buffer is full - if you want to keep a history of deadlocks I'd recommend you write the information into a table on a regular basis.
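A sketch of doing that - the DeadlockHistory table here is my own invention, and you would schedule the INSERT from an Agent job:

```sql
-- Assumed table to hold the history (not part of the post above).
CREATE TABLE dbo.DeadlockHistory (
    Deadlock_TimeStamp DATETIME,
    DeadLock_XML VARCHAR(MAX)
);
GO

-- Run on a schedule: append any deadlocks newer than what we already hold.
DECLARE @XMLData XML;

SELECT TOP 1 @XMLData = CAST(target_data AS XML)
FROM sys.dm_xe_session_targets st
JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE s.name = 'system_health';

INSERT dbo.DeadlockHistory (Deadlock_TimeStamp, DeadLock_XML)
SELECT col.value('(./@timestamp)', 'DATETIME'),
       col.value('(data/value)[1]', 'VARCHAR(MAX)')
FROM @XMLData.nodes('//event') AS X(Col)
WHERE col.value('(data/value)[1]', 'VARCHAR(MAX)') LIKE '<deadlock%'
  AND col.value('(./@timestamp)', 'DATETIME') >
      (SELECT ISNULL(MAX(Deadlock_TimeStamp), '19000101') FROM dbo.DeadlockHistory);
```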

I would also highly recommend the Extended Events SSMS Addin available here: Extended Events Addin - it makes creating event sessions much more straightforward.

Friday, 17 February 2012

Change default backup path for SQL Server

Fed up having to change the path every time you want to manually take a database backup?

Have you found places to change the default database and log file path, but are wondering why there doesn't seem to be anywhere you can change the default backup path?

If so, then this is the article for you (if not, then sorry - nothing to see here!)

To change the default backup path for a SQL instance, you need to open the Registry Editor (run "RegEdit").

Now navigate to:
> HKEY_LOCAL_MACHINE
> SOFTWARE
> Microsoft
> Microsoft SQL Server
> SQL Instance (see below for naming style)
> MSSQLServer

Note the SQL Instance folder may be in an odd format - it seems to show MSSQL10_50.instancename for SQL 2008 R2, and just MSSQL.1 for SQL 2005 - but you want the one that has another 10 or so directories inside it, one of which is MSSQLServer.

In this MSSQLServer key you should see an entry for "BackupDirectory".

Simply open this and change the value to your backup directory path of choice, and next time you try to take a backup, you will find it defaults to that path instead.
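If you'd rather script the change than use RegEdit, the xp_instance_regread / xp_instance_regwrite procedures resolve the instance-specific part of the key for you (they are undocumented, so the usual caveats apply). A sketch, with an example path of my own choosing:

```sql
-- Read the current default backup directory for this instance.
DECLARE @BackupDir NVARCHAR(512);
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    @BackupDir OUTPUT;
SELECT @BackupDir AS CurrentBackupDirectory;

-- Set a new default backup directory (example path - use your own).
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    REG_SZ,
    N'D:\SQLBackups';
```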

Wednesday, 15 February 2012

Refresh SQL Intellisense cache

It may not be as good as Red Gate's SQL Prompt, but I have to admit I'm getting used to SQL 2008's IntelliSense.

What I was starting to get fed up with though was creating new objects or fields, and finding them underlined by IntelliSense, which refused to acknowledge that I was correct.

I figured it must have an internal cache, but I was always too lazy to hunt around to find out how to refresh it - until now.
To be honest, it wasn't that tricky and there are mouse click or keyboard shortcut methods.

Keyboard method
In the query window with the offending underlined object - press Ctrl + Shift + R.

Mouse method
Take your hand off the mouse, put a finger on a shift key, a Ctrl key, and the R key, and press down!!!
In fact you could keep your right hand on the mouse, and press the keys with your left hand.
However, if you really want to increase your chances of RSI, and insist on doing everything through slow and cumbersome graphical interface menus, far be it from me to get grumpy (too late).  Go up to the Edit menu in SSMS, and under "Intellisense" you will find "Refresh Local Cache" for you to click away on to your heart's content.

It even tells you there though just to press Ctrl + Shift + R, so just do that OK?!!

Wednesday, 8 February 2012

SSRS 2008 R2 - Reports are blank on Chrome

There is a frustrating issue with SQL 2008 R2 Reporting Services when rendering on Chrome (and Safari) - in that they simply display a completely blank screen.

After eventually convincing the users in question that it wasn't SQL's fault, or a problem with the report definition - since the reports displayed fine in Internet Explorer - I set about tracking down a fix.

There doesn't seem to be a lot of documentation out there on this problem at the moment, and the only place I found anything helpful was here: StackOverflow

Basically though, you need to update the ReportingServices.js file on the report server, as Chrome & Safari render a certain type of element in a slightly different way to IE.


Connect to your Reporting Services server, and find the relevant ReportingServices.js file - this will probably be somewhere like "C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportManager\js\ReportingServices.js", or on a 64 bit machine, "C:\Program Files (x86)\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportManager\js\ReportingServices.js"

First up, take a copy of this file - let's not be silly here and assume this is a 100% foolproof solution!

Next, edit this file in Notepad. Do a quick search for pageLoad, to ensure no-one else has already applied this fix, then jump to the bottom of the existing text.

Simply append the following code:
function pageLoad() {
    var element = document.getElementById("ctl31_ctl10");
    if (element) {
        element.style.overflow = "visible";
    }
}

Save the file, and restart Reporting Services - you should now find that Chrome displays your reports again.

ADDITIONAL - if you have a problem getting reports to show up on IE 10 on Windows 8, it may be this fix you need instead:  SSRS Reports blank on Windows 8

UPDATE: the following combination of ctl codes seems to work on later versions:

function pageLoad() {
    var element = document.getElementById("ctl32_ctl09");
    if (element) {
        element.style.overflow = "visible";
    }
}

Thursday, 2 February 2012

Changing SSRS subscription owner

When someone creates a subscription in reporting services, that user is given ownership of the subscription, even though they may have nothing to do with that subscription.

This can cause a problem if that person leaves and their account is disabled, as the recipients of the subscription stop receiving their e-mail report, despite the SQL agent job completing successfully.  Looking at the subscription details of the relevant reports, you may see a message that:

Failure sending mail: The permissions granted to user 'Grumpydomain\Old.DBA' are insufficient for performing this operation. Mail will not be resent.

To fix this, we need to change the owner of the subscription.  Easy, you think - let's just select "change owner" on the Subscriptions tab and ... hmm ... well let's just edit the subscription and over-type the owner field ... ahh ... OK, well it must be in the reporting services config app under the email settings ... Rats!

Give in?

Well, there are probably other ways of doing it, but I found the easiest fix was simply to change the data directly in the ReportServer database.

The table you need to change is Subscriptions; its OwnerID column matches the UserID column in the Users table.
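To see which subscriptions a departed user still owns (and pick up the OwnerID you need), something like this works - a sketch that also joins back to Catalog for the report path:

```sql
-- Subscriptions owned by a given user, with the report each one belongs to.
SELECT s.SubscriptionID, u.UserName AS Owner, c.Path, s.Description, s.LastStatus
FROM dbo.Subscriptions s
JOIN dbo.Users u ON s.OwnerID = u.UserID
JOIN dbo.Catalog c ON s.Report_OID = c.ItemID
WHERE u.UserName = 'Grumpydomain\Old.DBA';
```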

Once you've identified an alternate UserID to use, just update the Subscriptions table:

UPDATE dbo.Subscriptions 
SET OwnerID = '12345678-ABCD-1234-5678-0123456789AB'  -- new owner ID
WHERE OwnerID = 'ABCDEFGH-4321-WXYZ-9876-ABCDEFGHIJKL'  -- old owner ID

You should now find the subscription runs (see this other post for working out which job you need to run to test this!)

Wednesday, 25 January 2012

More deciphering of waitresource

While continuing to look at what is causing blocks, you come across another format of information in the waitresource column in sysprocesses.  This time you see:

PAG: 7:1:11223344

Again - thanks for the help SQL - object_id too much trouble for you I guess??  Anyway, it gives a DBA something to do I suppose.
This time it's not a hobt_id, but a page ID, and is read as "db_id : file_id : page_id".

To get this one into text, we need to do a couple of things.
First Run:
DBCC TraceOn (3604)

 - this just tells DBCC commands to output to the screen rather than the error log, and only lasts for your current connection; otherwise the next command just tells you "DBCC execution completed", which is even less helpful than "PAG 7:1:11223344"!

Then run
DBCC PAGE (7, 1, 11223344)

(Note you could run this adding WITH TABLERESULTS for an even tidier output)

From the result set returned, you should be able to spot a few useful IDs, such as:

Metadata: IndexId
Metadata: ObjectId

From here, with OBJECT_NAME() or just scanning sys.indexes, you should be able to ascertain what table and index you are being blocked by.
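For example, with hypothetical values of 12345 for ObjectId and 2 for IndexId pulled from the DBCC PAGE output, and running in the database from the waitresource (db_id 7 here):

```sql
-- Hypothetical IDs taken from the DBCC PAGE output - substitute your own.
SELECT OBJECT_NAME(12345) AS TableName;

SELECT name AS IndexName
FROM sys.indexes
WHERE object_id = 12345
  AND index_id = 2;
```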