Category Archives: MS SQL Server

Prepopulate a table with data in SSDT

The new database project in SSDT is truly a great tool. It is not perfect though, and one of its shortcomings is that there is no out-of-the-box way to define static/initial data in a project when you deploy it.

In all the projects I do, I always want to have some static data in the database. That could be metadata about the solution, customer types, product types, etc.

Of course there is a workaround. One is to define the data in a post-deployment script. To do that, right-click your project and click “Add script”. Select the post-deployment script.

To maintain the data in a table each time we deploy, we use a MERGE statement. I’ll use the Stock table that I created in a previous post; an assumed minimal definition is shown below, followed by the MERGE.
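This sketch assumes only the two columns the MERGE uses; the real table from the earlier post may contain more:

-- Assumed minimal definition of the Stock table (illustrative only)
CREATE TABLE [dbo].[Stock]
(
    [StockSymbol] NVARCHAR(10)  NOT NULL PRIMARY KEY,
    [StockName]   NVARCHAR(100) NOT NULL
);

With the table in place, the post-deployment SQL looks like this: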

MERGE INTO [Stock] AS Target
USING (VALUES
    ('VWS', 'Vestas Wind System'),
    ('DANSKE', 'Danske Bank')
) AS Source (StockSymbol, StockName)
ON Target.StockSymbol = Source.StockSymbol
-- update matched rows
-- (if we don't want the application to update the rows)
WHEN MATCHED THEN
    UPDATE SET StockName = Source.StockName
-- insert new rows
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StockSymbol, StockName)
    VALUES (Source.StockSymbol, Source.StockName)
-- delete rows that are in the target but not the source
-- (if we don't want the application to insert data in our table)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

Now, just right-click your project and click Publish. And badabing, now you have data in your table.

A prettier way could be to put the MERGE statements into stored procedures and then call those SPs from the post-deployment script, as sketched below. The application could then also reset the data with a call to an SP.
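A minimal sketch of that idea; the procedure name is just an assumption:

-- Wrap the merge in a stored procedure (part of the database project)
CREATE PROCEDURE [dbo].[ResetStockData]
AS
BEGIN
    MERGE INTO [Stock] AS Target
    USING (VALUES
        ('VWS', 'Vestas Wind System'),
        ('DANSKE', 'Danske Bank')
    ) AS Source (StockSymbol, StockName)
    ON Target.StockSymbol = Source.StockSymbol
    WHEN MATCHED THEN
        UPDATE SET StockName = Source.StockName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StockSymbol, StockName)
        VALUES (Source.StockSymbol, Source.StockName)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;
END
GO

-- The post-deployment script then only needs to call the procedure
EXEC [dbo].[ResetStockData];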

The conclusion: to prepopulate your solution with data, put the data into a MERGE statement and put it into the post-deployment script. This is a fine solution as long as you don’t have too many or too big datasets. Hopefully MS will come up with a better solution in the next version of SSDT.

Analysis Services -> Clear Cache

Just a short note about Analysis Services and its cache 🙂

One of the reasons for Analysis Services’ fast response time is its cache. But what if you want to test a query without using the cache? Well, just use this XMLA script:

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ClearCache>
    <Object>
      <DatabaseID>Your database ID goes here</DatabaseID>
      <CubeID>And/or cube ID goes here</CubeID>
    </Object>
  </ClearCache>
</Batch>

Also, if you do partial processing, the cache can in some cases become invalid. Therefore, after a partial processing, you also want to run the above XMLA script.

Change “select top 1000” to “select top XXX”

In SQL Server Management Studio, when you right-click on a table, there are shortcuts to select the top 1000 rows or edit the top 200 rows. See pic below.

It is an easy way to see some of the data in a table, or to edit some values in a small table. If you are using the two shortcuts, you can change 1000 and 200 to values that make sense to you.

Click Tools -> Options -> SQL Server Object Explorer. Then you get the options as shown in the pic below.

Change the values to your needs, and voila, now you have select top yyyy 🙂

Unboxing Azure part 1: The new Portal & how to create an Azure SQL database

Not so long ago, Microsoft (MS) announced a lot of new and cool features for their Azure platform at their event “Meet Windows Azure”.

There is no doubt that MS is expanding fast in the cloud, and with this blog series I will try to give a short overview of the business intelligence (BI) capabilities that are currently available in Azure.

The new Portal

MS has updated their portal so that it now primarily uses HTML5. It looks like this.

The new portal is intuitive and easy to use. One of the new exciting things is the virtual machines, where you can for instance create a Windows server and do the stuff that is not currently supported by other services in Azure. In the BI world, that could be installing a SQL Server to use the SQL Server Agent to run SSIS packages, or to host Analysis Services, since these services are not supported elsewhere in Azure right now.

The new portal is still in “Preview”. I don’t know precisely what that means, but if there is something you are missing, you can go to the old Silverlight portal by clicking on the portal link in the top right corner.

SQL Azure

But first things first. When we are talking BI, we have to start with a database. Click the SQL databases item, click create database, and follow the instructions. For information regarding edition and pricing, read here.

When you create a SQL Azure database, it comes with a firewall that blocks all traffic. To get access to it, you need to open it for the IP addresses that need to access it. In the new portal that is hard to find: click “SQL databases” -> “Servers” -> click on the server name -> click configure, and add the IP addresses. While you’re developing, you can start by adding your own IP. A firewall rule can also be added with T-SQL, as sketched below.
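If you prefer not to click through the portal, a server-level firewall rule can be created by connecting to the server’s master database and running sp_set_firewall_rule; a minimal sketch, where the rule name and IP addresses are just placeholders. (You will of course need at least one rule, added through the portal, before you can connect and run this.)

-- Run against the master database of the Azure SQL server
-- (rule name and IP range below are placeholders)
EXECUTE sp_set_firewall_rule
    @name = N'DevWorkstation',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.10';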

When you have done that, you can start designing your database. There is a management tool in Azure you can use. Just click the manage database link in the portal.

From the management tool you can design tables (see below), views and SPs.

You can also create new databases and deploy databases from your database projects in Visual Studio, which is the correct way to develop SQL databases. (I’ll cover that in a later blog post.)

You can of course also log on to the database from Management Studio. Get the connection string from the portal and type it in. See the example below.

As before, it is very limited what you can do from Management Studio. You can’t design anything with the designer; you have to do everything in SQL, and there are also some things you can’t do in SQL either. (Again, the correct way to develop database projects is to use the database project in Visual Studio.)

But to conclude on this first look: we have created a SQL database using the new portal. From what we have seen up until now, not much has changed since the last version. How we do SQL development has however changed, and I’ll cover that in the next blog post in the series.

Monitor the progress of long-running SQL queries like DBCC SHRINKDATABASE

When a query runs a little longer than I would expect, I sometimes start to get nervous, especially in cases where there is no status telling me when it will finish, like for instance when I run a shrink database command.

You can get information about the running queries like this:


SELECT 
T.text, 
R.Status, 
R.Command, 
DatabaseName = db_name(R.database_id) , 
R.cpu_time, 
R.total_elapsed_time, 
R.percent_complete
FROM sys.dm_exec_requests R 
CROSS APPLY sys.dm_exec_sql_text(R.sql_handle) T


The percent_complete column is especially nice.
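If you only want to watch the shrink itself, you can narrow the result down to requests that actually report progress; a small sketch (the filter on percent_complete is just one way to do it):

-- Only show requests that report progress, e.g. a running DBCC SHRINKDATABASE
SELECT
    T.text,
    R.command,
    R.percent_complete,
    R.estimated_completion_time / 1000 AS estimated_seconds_left
FROM sys.dm_exec_requests R
CROSS APPLY sys.dm_exec_sql_text(R.sql_handle) T
WHERE R.percent_complete > 0;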

Search text in SP

*Just as a note to myself*

I’ve just joined a big project with thousands of stored procedures. To get an overview of it, it would be nice if I could get the SPs which insert data into table a, or update column b in table c, etc.

That is easily done, like so:

SELECT Name
FROM sys.procedures
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%table a%'

and

SELECT Name
FROM sys.procedures
WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%column b%'
AND OBJECT_DEFINITION(OBJECT_ID) LIKE '%table c %'
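A small alternative sketch, if you also want to search other module types such as views, functions and triggers, is to query sys.sql_modules instead:

-- Search the definition of all SQL modules, not just stored procedures
SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
       OBJECT_NAME(object_id) AS ObjectName
FROM sys.sql_modules
WHERE definition LIKE '%table a%';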

MS, seriously!!! SSIS and System.Runtime.InteropServices.COMException (0x800A03EC)

Let me describe my problem. Maybe you have the same one, if you have googled the error code and found this page.

So, I’m downloading an old Excel file from a German webpage. The Excel file is in the SpreadsheetML format, which is impossible to read from SSIS.

No worries, with a script task I can easily convert the file to CSV using C# and the Excel interop API. Like so:

….
using MS.Internal.Xml;
using Microsoft.Office.Interop.Excel;
…….

// Open the workbook without showing any Excel prompts
Microsoft.Office.Interop.Excel.Workbook wb1;
Microsoft.Office.Interop.Excel.Application wb2 =
    new Microsoft.Office.Interop.Excel.Application();
wb2.DisplayAlerts = false;
wb1 = (Microsoft.Office.Interop.Excel.Workbook)wb2.Workbooks._Open(filename, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);

// If the file is SpreadsheetML, save it as a CSV file instead
if (wb1.FileFormat == Microsoft.Office.Interop.Excel.XlFileFormat.xlXMLSpreadsheet)
{
    wb1.SaveAs(filename.Replace(".xls", ".csv"), Microsoft.Office.Interop.Excel.XlFileFormat.xlCSVMSDOS, Type.Missing, Type.Missing, false, false, Microsoft.Office.Interop.Excel.XlSaveAsAccessMode.xlNoChange, false, false, false, false, false);
}
else
{
    wb2.Workbooks.Close();
}
wb2.Workbooks.Close();

Now I can read the data with an SSIS CSV data source. This works fine when I run my SSIS package from BIDS. When I put it into production and run the package with the SQL Server Agent, I get the following error:

Service cannot be started. System.Runtime.InteropServices.COMException (0x800A03EC): Microsoft Office Excel cannot access the file ‘c:\XXXX’. There are several possible reasons:
• The file name or path does not exist.
• The file is being used by another program.
• The workbook you are trying to save has the same name as a currently open workbook.

None of the above was my problem. I think I tried everything to find a solution. It turns out that it is a bug in Windows Server 2008. There are different solutions on the web, which include changing registry values on the server.

Since I’m a consultant, and hence an expensive resource, I would rather not fuck anything up playing around with registry values.

I found an easier fix, however. Apparently, the folder C:\Windows\SysWOW64\config\systemprofile\Desktop has been removed from Windows Server 2008. It was present in Windows Server 2003.

When I created the folder, everything worked fine. Apparently the Desktop folder is necessary if you want to manipulate Excel files with code from the SQL Server Agent. Scary.

So, the answer: if you have problems manipulating Office files from code executed by the SQL Server Agent on Windows Server 2008, create the folder C:\Windows\SysWOW64\config\systemprofile\Desktop (or C:\Windows\System32\config\systemprofile\Desktop if it is a 32-bit server).

That was it for now. Bye bye:-)

OLE DB error: OLE DB or ODBC error: Invalid column name ‘x’.; 42S22.

After spending time debugging my Analysis Services cube with this error for the second time, I guess it is a good idea to blog about it, so I can remember it next time 🙂

I got the above error after I had modified a dimension so that it used three fields for the key instead of one. I made the relationship from my facts based on the three keys, and then I deployed and processed the cube. And then I got the above error.

The reason for the error is that the partitions for the fact table were based on some SQL statements, and I had forgotten to modify these partitions after I had made the new relationship.

So the conclusion: when you get the error “Invalid column name ‘x’.; 42S22” in Analysis Services, it is most likely a problem related to the partition design. See the sketch below.
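Just to illustrate the idea (the table and column names here are made up), every partition query has to return all the columns the measure group now binds to, including all three new key columns:

-- Hypothetical partition query: it must select every column the measure
-- group binds to, including the three columns that now make up the dimension key
SELECT
    F.KeyPart1,
    F.KeyPart2,
    F.KeyPart3,
    F.Amount
FROM dbo.FactSales AS F
WHERE F.DateKey >= 20120101
  AND F.DateKey <  20130101;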

Warning: Null value is eliminated by an aggregate or other SET operation

I just had an SSIS package that had been running every day for the last couple of years without any problems, which failed on me with the following error message:

Warning: Null value is eliminated by an aggregate or other SET operation

The SQL source had a GROUP BY where I was taking the minimum of some datetime fields. I needed it to return NULL values, because I had some additional logic that took these NULL values into account.

What was even stranger was that when I ran the package by double-clicking it on the server, it completed successfully.

So apparently a warning from SQL Server causes an error in SSIS, but only sometimes….. Nice….

The solution is to either change your SQL so you don’t get NULL values, by for instance using COALESCE or ISNULL, or, if you can’t do that, put this statement in front of your SQL: SET ANSI_WARNINGS OFF. Both options are sketched below.
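A small sketch of the two options; the table and column names are made up:

-- Option 1: avoid the NULLs altogether with ISNULL/COALESCE
SELECT CustomerId,
       MIN(ISNULL(OrderDate, '1900-01-01')) AS FirstOrderDate
FROM dbo.Orders
GROUP BY CustomerId;

-- Option 2: keep the NULLs but suppress the ANSI warning
SET ANSI_WARNINGS OFF;
SELECT CustomerId,
       MIN(OrderDate) AS FirstOrderDate
FROM dbo.Orders
GROUP BY CustomerId;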

Read more about the problem/bug here:
https://connect.microsoft.com/SQLServer/feedback/details/483175/failure-because-of-warning-about-null-aggregation

[rsAggregateOfMixedDataTypes] The Value expression for ‘x’ uses an aggregate function on data of varying data types. Aggregate functions other than First, Last, Previous, Count, and CountDistinct can only aggregate data of a single data type

Yet another irritating error message from Reporting Services 🙂

You will typically get this error when you try to create a calculated member in the visual designer, like so:

IIF( [Measures].[x]=0 , [Measures].[y] , 0,00)

And then use the SUM or another aggregate function on the new measure.

If you create the same calculated member in pure MDX and run it in, for instance, Management Studio, it will not yield any errors. Like so:

WITH MEMBER [Measures].[z] AS
IIF( [Measures].[x]=0 ,
[Measures].[y] , 0,00)

Just to describe the calculated member in words: if the measure x equals 0, then the new measure z will take the value of the measure y, and if the measure x does not equal 0, then the new measure z will take the value of 0,00.
The problem is that when Reporting Services sums the new measure z, it sums data from y and 0,00, and Reporting Services will in some cases see the values as different data types. In my example the measure x was a decimal, and Reporting Services interpreted the 0,00 as an integer.

To avoid this problem, you have to tell Reporting Services that 0,00 is a decimal, like this:

IIF( [Measures].[x]=0 , [Measures].[y] , CDec(0,00))