SQL Server 2005: publishing views to SQL Server 2000 subscribers

If you are migrating your SQL Server 2000 scenario to SQL Server 2005 and you have replication configured, here is a consideration to bear in mind before making a SQL Server 2005 instance the publisher: SQL Server 2005 has changed the syntax of SELECT clauses (when ORDER BY is used) in the following way:
SQL 2000: SELECT TOP 100 PERCENT Field1, Field2 FROM Table1 ORDER BY Field1
SQL 2005: SELECT TOP (100) PERCENT Field1, Field2 FROM Table1 ORDER BY Field1
Note the parentheses. This newer syntax is not recognised by the SQL Server 2000 engine, which results in errors when the initial snapshot is applied. If this is your case and you need to set up a SQL Server 2005 publisher with SQL Server 2000 subscriber(s), you will need to change the views to be published so that the ORDER BY is removed and no TOP [n] PERCENT is needed (remove it as well), and modify your application accordingly. Besides, according to ORDER BY Clause (Transact-SQL) in BOL:
When ORDER BY is used in the definition of a view, inline function, derived table, or subquery, the clause is used only to determine the rows returned by the TOP clause. The ORDER BY clause does not guarantee ordered results when these constructs are queried, unless ORDER BY is also specified in the query itself.
In other words, it is useless to do a SELECT TOP 100 PERCENT... ORDER BY FieldX in a view (let's say View1), since all records will be returned but are not guaranteed to be ordered as expected, unless we do a SELECT * FROM View1 ORDER BY FieldX against the view itself. See also: SQL Server 2005 Ordered View and Inline Function Problems from OakLeaf Systems' blog.
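A minimal sketch (reusing the hypothetical names Table1, View1, Field1 and FieldX from above) of where the ORDER BY belongs:

```sql
-- No TOP ... ORDER BY inside the view: it would not guarantee order anyway
CREATE VIEW View1 AS
SELECT Field1, Field2 FROM Table1

-- Order is only guaranteed when ORDER BY is specified in the query itself
SELECT * FROM View1 ORDER BY Field1
```

This form also replicates cleanly to SQL Server 2000 subscribers, since no TOP (100) PERCENT syntax is involved.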


Microsoft SQL Server Search Engine using Google co-op

Google has recently launched a beta service called Google co-op. According to their overview:
Using it you can create a highly specialized Custom Search Engine that reflects your knowledge and interests.
Since it seemed to be a good idea, I gave it a try and created a customised search engine for specialised content on Microsoft SQL Server. This search engine allows you to use the power and knowledge of Google while refining the results of your queries according to additional keywords, giving a higher weight to the sites you consider more interesting, in order to return more precise results. In summary, all I have done is customise the search engine with websites related to SQL Server (Microsoft, blogs, Usenet groups and so on). Feel free to give it a try and suggest more sites to include to refine the searches further. It seems to work fine.


INNER JOIN with a comma-separated values (CSV) field

Scenario: Sometimes we use varchar fields to store little pieces of data that, strictly speaking, should be normalized and stored in another table referencing the main table. Think, for instance, of a list of items referenced by a bigger entity through a 1..n relationship. In certain scenarios that list might be as short as 4 or 5 items, and surely fewer than 10. In that case, and if you expect that you will never need to join that data with another table, you are tempted to store the data as a list of items (varchar) and simplify things by not adding another table to your already huge schema, despite the normalization rules. Then time goes by, needs change, and you need to do the JOIN you planned you would never need, and all you have is a list of integers such as:

K1  List
1  '1, 5, 22, 31'
2  '4, 9, 48'
3  '5, 13, 22'

And you need to generate something like:

K1  Item
1  1
1  5
1  22
1  31
2  4
2  9
2  48
3  5
3  13
3  22

Solution: I must confess that I was not able to find a good solution by myself and asked for help on microsoft.public.es.sqlserver. Miguel Egea showed me a solution based on Itzik Ben-Gan's work that was worth this blog entry. Pure Transact-SQL, and efficient:

IF OBJECT_ID('tempdb..#t') IS NOT NULL
  DROP TABLE #t

SELECT 1 K1, '1, 5, 22, 31' List INTO #t
UNION ALL SELECT 2, '4, 9, 48'
UNION ALL SELECT 3, '5, 13, 22'

/* Common table expression to generate the first 1000 numbers */
;WITH cte AS
( SELECT 1 id
  UNION ALL
  SELECT id+1 FROM cte WHERE id<1000
)
SELECT K1,
  -- id points at a comma; the item runs from id+1 up to the next comma
  CONVERT(int, SUBSTRING(List, id+1, PATINDEX('%,%', SUBSTRING(List, id+1, LEN(List)))-1)) Item
FROM cte INNER JOIN (SELECT K1, ',' + List + ',' List FROM #t) t
  ON SUBSTRING(t.List, id, 1) = ',' AND LEN(List)>id
OPTION (MAXRECURSION 1000) -- the default limit of 100 levels would stop the CTE early


MS Word tries to connect to server hosting templates

We had set up our computers running MS Office 2003 to use group templates from a corporate server, so that everyone uses the same templates with our company logo, font styles, etc. (Tools -> Options -> File locations -> Group templates). Some years ago the templates were located on a shared resource (read-only access for everyone) on one of our servers (let's call it \\OLDSERVER\OfficeTemplates$\Word). Years have passed and, with Windows 2003 R2's new features, we have moved on to using replication, DFS and domain-based namespaces to unify all the shares into a single entry point for everyone. The new group templates location was set to H:\Common files\Office templates\Word. Everything seemed to be perfectly set up and working. Eventually, the server that used to host the templates was removed and the problem showed up: every time you tried to open a .doc file that was created using a template from the original path (based on the shared resource), MS Word took 2 to 3 minutes to show the file. It finally opens, but most users think there is a problem with the servers, the file, MS Word or their computers and rush to call IT support. We started to investigate the issue and found that Adam Leinss was experiencing the same problem. From Adam Leinss' Tech Tips: Remove Novell and Microsoft Word Goes Ape:

Our company is moving from Novell to Microsoft for our file and directory services. We removed the Novell client from everyone’s workstation and that seemed to work just fine. Then we removed everyone’s rights to said Novell server and everything is still fine. Then shut down said Novell server and bam: opening some Word documents takes 2 to 3 minutes!
We tried to use the macro/script he suggests. You need to download dsofile.dll - Microsoft Developer Support OLE File Property Reader 2.0 Sample (KB 224351) in order to make it run, and you still need to make some modifications to the code, since there seem to be two different dsofile.dll versions and minor changes are needed to run it with the new one. Extracted from the March 2005 Tales from the Script column:
Old dsofile.dll version:
Set objPropertyReader = CreateObject("DSOleFile.PropertyReader")
Set objDocument = objPropertyReader.GetDocumentProperties("C:\Scripts\New_users.xls")
Wscript.Echo "Author: " & objDocument.Author

Code modified to make it run with the new version:

Set objFile = CreateObject("DSOFile.OleDocumentProperties")
objFile.Open "C:\Scripts\New_users.xls"
Wscript.Echo "Author: " & objFile.SummaryProperties.Author

Summing up, here is the modified version of the script that we used:

Sub TemplateBatchChangeModified()
    Dim objPropertyReader
    Dim strFolder As String
    Dim strFileName As String
    Dim objThisDoc As Word.Document
    Dim strFindTemplate As String
    Dim strReplaceTemplate As String
    Dim strAffectedDocs As String

    On Error Resume Next

    'Create the PropertyReader object

    Set objPropertyReader = CreateObject("DSOFile.OleDocumentProperties")
    If Err.Number <> 0 Then
        MsgBox "You must install the DSOFile component (see KB 224351)."
        GoTo FinishUp
    End If

    'Get the template names
    strFindTemplate = UCase(InputBox("Name of template to find (exclude the .dot)") & ".dot")

    strReplaceTemplate = InputBox("Name of replacement template (exclude the .dot)") & ".dot"

    'Make sure it's a valid template. Try to create a new document based on it.
    Set objThisDoc = Word.Documents.Add(strReplaceTemplate, Visible:=False)
    If Err.Number <> 0 Then
        'No such template
        MsgBox "There is no accessible template named " & strReplaceTemplate
        GoTo FinishUp
    End If
    'Close the test document
    objThisDoc.Close wdDoNotSaveChanges

    On Error GoTo ErrorHandler
    'Get the current documents path
    strFolder = "drive:\directory in which you want to replace templates\"

    'Examine all Word documents in the directory

    'Get the first document name
    strFileName = Dir(strFolder & "*.doc")

    While strFileName <> ""
        'Look at the template name
        objPropertyReader.Open (strFolder & strFileName)
        If UCase(objPropertyReader.SummaryProperties.Template) = strFindTemplate Then

            'It matches. Open the document
            Set objThisDoc = Word.Documents.Open(FileName:=strFolder & strFileName, _
                Visible:=False) 'continuation line was lost; Visible:=False assumed
            'Change the attached template
            objThisDoc.AttachedTemplate = strReplaceTemplate

            'Save the change
            objThisDoc.Close wdSaveChanges

            'Note the document
            strAffectedDocs = strAffectedDocs & strFileName & ", "
        End If
        'Get the next document
        strFileName = Dir
    Wend

    'Report the results
    If strAffectedDocs = "" Then
        MsgBox "No documents were changed.", , "Template Batch Change"
    Else
        'Remove the trailing comma and space
        strAffectedDocs = Left(strAffectedDocs, Len(strAffectedDocs) - 2)
        MsgBox "These documents were changed: " & strAffectedDocs, , "Template Batch Change"
    End If
    GoTo FinishUp

ErrorHandler:
    Err.Raise vbError + 1001, "TemplateBatchChange", _
        "TemplateBatchChange encountered an error: " & Err.Description

FinishUp:

    'Release object references
    Set objThisDoc = Nothing
    Set objPropertyReader = Nothing
End Sub

There is, however, a problem with this script (or with the MS Word API, to be more precise): in our case the old and the new templates have the same name. The only change is WHERE they are located (the no-longer-existing shared resource versus the new location). With that in mind, if you run:

objThisDoc.AttachedTemplate = strReplaceTemplate

no change is actually made, and the file is not modified at all (even though the script tells you it has been changed). The only way to effectively change the files showing this delay when being opened is to change the name of the template. For instance, for a template called Corporate+TÜV Envelope English.dot, we would need to create a copy of it named, say, Corporate+TÜV Envelope EN.dot, and then run the script telling it to change Corporate+TÜV Envelope English.dot into Corporate+TÜV Envelope EN.dot FOR EVERY .DOC FILE created during the last 3 years that resides on the server, and repeat the script FOR EVERY template we use. If you have set your group templates on a shared resource, I advise you to stop doing so: you will run into problems whenever you need to replace your server. You can check how the full path to the template is saved with your documents by saving one as a .xml file and then opening it with Notepad. You will find the following key:

<w:attachedTemplate w:val="\\OLDSERVER\OfficeTemplates$\Word\Corporate+TÜV Envelope English.dot"/>

We have finally decided not to change a single file on our servers. Instead of making so many changes, we just created an A record in our DNS servers to point OLDSERVER to the IP address of one of the live servers. Having done that, MS Word 2003 opens the files that point to \\OLDSERVER\OfficeTemplates$\Word without the 3-minute delay. There is no need to create the share on the server pointed to by OLDSERVER; the server only needs to be there, alive and 'pingable'. It is not a solution, just a workaround. The real solution would be for MS Word to be updated (maybe in the next SP or in Office 2007) so that it no longer looks for the path of the original template.


Rename files to match (or include) their date/time stamp

This article is quite old but I find it really useful. I'm sure you have had such a need at some time and either managed without it, circumvented the problem, or solved it yourself (with a little in-house application). The article is about digital cameras and renaming their files in a more convenient way than the useless schemes they use. However, this can also be applied to rename a backup file just after it has finished, using the date of the file so that the next backup does not overwrite the old one, or to any other scenario you can think of. Rename Files to Match Date/Time Stamp
My digital camera generates filenames like IMG001.jpg, IMG002.jpg, and so on. This isn't useful, because the filenames don't describe anything, and I often run into problems with duplicate filenames when I transfer images to my computer. Is there a simple way in Windows XP Professional to rename a group of files, with each filename made up of the date and time of file creation? For example, I'd like to see filenames such as 20030730_143222.
Here is a sample that just echoes the names of the files in the current directory, including the date/time just before each file's extension.
FOR %%V IN (%1) DO ECHO %%V [%%~tV] %%~xV
And here is the final command. In this case the original file name is completely discarded.
FOR %%V IN (%1) DO FOR /F "tokens=1-6 delims=/: " %%J IN ("%%~tV") DO IF EXIST %%L%%J%%K_%%M%%N%%O%%~xV (ECHO Cannot rename %%V) ELSE (RENAME "%%V" %%L%%J%%K_%%M%%N%%O%%~xV)
If you want to add the date/time information before the extension and keep the original name in place, you can use:
RENAME "%%V" "%%V-%%L%%J%%K_%%M%%N%%O%%~xV"
in place of the former RENAME.


Error 0x80070002 applying updates

We kept receiving the following error message in the System event log of a particular server:
Event Type: Error
Event Source: Windows Update Agent
Event Category: Installation
Event ID: 20
Date:  03/11/2006
Time:  12:36:11
User:  Not available
Installation Failure: Windows failed to install the following update with error 0x80070002: MSXML 6.0 RTM Security Update (925673).

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

The rest of the servers had this update (MSXML 6.0 RTM Security Update 925673) applied without any problem. The problem persisted through restarts and the update was permanently shown in the system tray as pending. We deliver updates using Windows Server Update Services (WSUS) 2.0.

After some research we found KB922582 - Error message when you try to update a Microsoft Windows-based computer: "0x80070002" (last reviewed September 19, 2006).

SYMPTOMS When you try to update a Microsoft Windows-based computer by using any of the following Microsoft tools, you may receive error code 0x80070002:
  • The Windows Update Web site
  • The Microsoft Update Web site
  • The Inventory Tool for Microsoft Updates (ITMU) for Microsoft Systems Management Server (SMS) 2003
  • Software Update Services (SUS) 1.0
  • Windows Server Update Services (WSUS) 2.0

Nov 7, 2006 update: Even though the server experiencing this problem was patched with the aforementioned KB922582 and rebooted several times, and even re-patched and rebooted once again, the pending MSXML 6.0 RTM Security Update was still there and the error appeared again and again. In the end we decided to manually download the pending update and install it interactively instead of letting it be applied through WSUS 2.0: Vulnerabilities in Microsoft XML Core Services 6.0 RTM Could Allow Remote Code Execution (925673). The server experiencing this behaviour was a Dell PowerEdge 2950 (dual Xeon with 4 GB RAM) running Windows Server 2003 R2 x64 Standard, hosting SQL Server 2005 Standard and acting as a domain controller.

2007-01-12 update: Too late for me, but if you run into this problem, please see KB910336: You receive a "0x80070002" or "0x80070003" error code after you download an update from Windows Update, from Microsoft Update, or from Windows Server Update Services for more information and a resolution for this issue. The article was last reviewed on June 30, 2006, but I couldn't find it when I needed it. Maybe you will be luckier if you reach this point...


#N/A displayed in cells in Analysis Services 2005

Some days ago I realised that, after having created a Cube reader role and granted it read access on the cubes, some data was still not available and the cells showed a #N/A message under particular conditions. The problem arose when the user dragged a measure (any) to the centre of the table, then a dimension (any) to the filter area, and then used the filter to select everything but a single dimension value.

If the filter was not shown or not used (everything selected), the measure was displayed correctly, but if a single dimension value was excluded from the calculation, the #N/A message appeared.

I immediately thought this was a lack of rights on the cube, so I started to browse through the properties of the Cube reader role. In particular, I changed the cube to allow Drillthrough access and then Drillthrough and local cube access, but neither solved the problem.

After doing some research I found Analysis Services / Cell Level Security - Null Measures and Read Access, where Dan Meyers pointed me in the right direction: on the Cell Data tab, we had set (enabled) the Enable read permissions checkbox for every cube, and the Allow reading of cube content field was set to:

Following Dan's information, we simply disabled the Enable read permissions checkbox for every cube and everything worked as expected.


SQL Server transactional replication issues regarding NOT FOR REPLICATION triggers. Part II

Scenario: Transactional replication with immediate updates between a SQL Server 2005 publisher and subscribers. Articles/tables have user-defined triggers marked as NOT FOR REPLICATION (NFR), and those triggers are deployed to subscribers for load balancing.

Problem description: Even though the triggers are marked as NOT FOR REPLICATION (NFR), when the origin of the triggering action is a subscriber, the trigger is fired twice: at the subscriber (expected behaviour) and at the publisher (changed behaviour from the SQL Server 2000 point of view).

The solution: This page explains the solution to the problem exposed in SQL Server transactional replication issues regarding NOT FOR REPLICATION triggers. The main idea is using SET CONTEXT_INFO so that triggers fired at the server can distinguish whether the call was made directly by a user (either at the publisher or the subscriber) or was made in order to replicate a command.

In the aforementioned URL we saw an example of a trigger that should not be executed at all when fired at the publisher by means of an insert/update done at the subscriber. The procedure explained here avoids the execution of those unwanted double-fired triggers.

You must ensure that your application executes:

SET CONTEXT_INFO 0x80

every time it makes a connection to the server. We will use 0x80 (10000000 in binary). In fact, we will only use the first bit of the first byte of CONTEXT_INFO, which is 128 bytes long, just in case you might need the rest of the variable for other purposes.

If you already use CONTEXT_INFO in your scenario, select another bit position for this purpose and change the following code accordingly.

And now, the other side of the change: you have to include the following code at the very beginning of every NOT FOR REPLICATION trigger deployed to subscribers that might give you problems when fired more than once:

  -- In order to workaround the double triggering issue in both subscribers 
  -- and publisher when the action is executed on the subscriber, we need 
  -- to check here if the trigger has been fired by a user action.
  -- For this reason, client applications MUST execute SET CONTEXT_INFO 0x80
  -- in order to identify themselves. Only first bit of first byte of the 
  -- 128bytes of CONTEXT_INFO is used here. The rest of CONTEXT_INFO could 
  -- still be used for other purposes. We only check that first bit of first
  -- byte is set to 1.
  IF NOT EXISTS(SELECT * FROM master.dbo.sysprocesses
                WHERE spid = @@SPID AND 
                      CONVERT(tinyint, SUBSTRING(context_info, 1, 1)) & 0x80 = 0x80)
       RETURN -- If no context info exists, return
  -- If we reach this point, the trigger has correctly identified a user
  -- action, and execution will continue either on the publisher or the
  -- subscriber, but not on both.

  -- The rest of the trigger follows...


How to download more than 2 files simultaneously using Internet Explorer

As stated in 8.1.4 Practical Considerations from RFC2616:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.
IE follows that recommendation and will not open more than two connections to the same server, preventing you from downloading more than 2 files simultaneously (again, from the same server). There is, however, a way to override this behaviour (after all, it is just a recommendation). It's a classic, but it always takes me some time to find it whenever I need to apply this registry hack. Here it is:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings
Taking into account that RFC2616 is dated June 1999, and that clients' bandwidth (and servers' too) has grown a lot since then, I think it should not be a problem to increase that limit from 2 to 6 or 8 files at a time.
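For reference, the two DWORD values that control the limit under that key are MaxConnectionsPerServer (HTTP/1.1) and MaxConnectionsPer1_0Server (HTTP/1.0); the value of 8 below is just an example:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"MaxConnectionsPerServer"=dword:00000008
"MaxConnectionsPer1_0Server"=dword:00000008
```

Save this as a .reg file and import it, then restart Internet Explorer for the new limit to take effect.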


Identity columns in a view treated as an ordinary int column

I was experiencing the behavior described in KB920974: FIX: SQL Server 2005 treats an identity column in a view as an ordinary int column when the compatibility level of the database is set to 80, and asked Microsoft support for the hotfix by opening a new support incident. After some hours, technical staff sent me the links to the files and I installed them in a virtualised environment (testing is always a good thing).

After having checked that the problem was solved when the database compatibility level was 80, I changed it back to 90 and the problem appeared again. I did not accept the MS incident as resolved and complained again. They told me that the behaviour shown was BY DESIGN. In other words, if you are using a database with compatibility level 80, your identity fields will still behave as identity when viewed through a view (see the following example on how to reproduce the problem), but if you want to use compatibility level 90, the identity behaviour is lost by design when the table is viewed through a view. I could not believe it! Why did they do that?

As stated in SQL Server 2005 BOL regarding Modifying Data Through a View:

You can modify the data of an underlying base table through a view, in the same manner as you modify data in a table by using UPDATE, INSERT and DELETE statements or by using the bcp utility and BULK INSERT statement. However, the following restrictions apply to updating views, but do not apply to tables:
  • Any modifications, including UPDATE, INSERT, and DELETE statements, must reference columns from only one base table.
  • The columns that are being modified in the view must reference the underlying data in the table columns directly. They cannot be derived in any other way, such as through:
    • An aggregate function (AVG, COUNT, SUM, MIN, MAX, GROUPING, STDEV, STDEVP, VAR and VARP).
    • A computation; the column cannot be computed from an expression using other columns. Columns formed using set operators (UNION, UNION ALL, CROSSJOIN, EXCEPT, and INTERSECT) amount to a computation and are also not updatable.
  • The columns that are being modified cannot be affected by GROUP BY, HAVING, or DISTINCT clauses.
  • TOP cannot be used anywhere in the select_statement of the view when WITH CHECK OPTION is also specified.
[...] The following additional guidelines apply:[...]
  • INSERT statements must specify values for any columns in the underlying table that do not allow null values and have no DEFAULT definitions.

Steps to reproduce the problem: create a new [test] database in compatibility mode 90 (2005). Define the two tables, TMeasureUnits and TMaterials, with the script:

CREATE TABLE [dbo].[TMeasureUnits](
 [MeasureUnitId] [int] NOT NULL,
 [Description] [varchar](50) COLLATE Modern_Spanish_CI_AS NOT NULL,
 CONSTRAINT [PK_TMeasureUnits] PRIMARY KEY CLUSTERED (
  [MeasureUnitId] ASC
 )
)
CREATE TABLE [dbo].[TMaterials](
 [MaterialId] [int] IDENTITY(1,1) NOT NULL,
 [Description] [varchar](50) COLLATE Modern_Spanish_CI_AS NULL,
 [MeasureUnitId] [int] NOT NULL,
 CONSTRAINT [PK_TMaterials] PRIMARY KEY CLUSTERED (
  [MaterialId] ASC
 )
)
-- Note: the PRIMARY KEY constraint names were lost in the original listing
-- and have been reconstructed here.
ALTER TABLE [dbo].[TMaterials] WITH CHECK
  ADD CONSTRAINT [FK_TMaterials_TMeasureUnits] FOREIGN KEY([MeasureUnitId])
  REFERENCES [dbo].[TMeasureUnits] ([MeasureUnitId])
ALTER TABLE [dbo].[TMaterials] CHECK CONSTRAINT [FK_TMaterials_TMeasureUnits]
INSERT INTO [dbo].[TMeasureUnits] (MeasureUnitId, Description) VALUES (1, 'Units')
INSERT INTO [dbo].[TMeasureUnits] (MeasureUnitId, Description) VALUES (2, 'Boxes')
INSERT INTO [dbo].[TMeasureUnits] (MeasureUnitId, Description) VALUES (3, 'm.')
INSERT INTO [dbo].[TMeasureUnits] (MeasureUnitId, Description) VALUES (4, 'Kg.')

Now we will create the offending view:

-- Note: the AS keyword was lost in the original listing; Description and
-- MeasureUnitId are included because the MS Access test below inserts through them.
CREATE VIEW [dbo].[VMaterials]
AS
SELECT dbo.TMaterials.MaterialId,
       dbo.TMaterials.Description,
       dbo.TMaterials.MeasureUnitId,
       dbo.TMeasureUnits.Description AS MeasureUnit
FROM dbo.TMaterials INNER JOIN
     dbo.TMeasureUnits ON dbo.TMaterials.MeasureUnitId = dbo.TMeasureUnits.MeasureUnitId

Now execute:

SELECT name, is_identity
FROM sys.columns
WHERE object_id = object_id('VMaterials')

And you will see that NONE of the fields has is_identity = 1. This behaviour is said to be BY DESIGN when the database compatibility level is 90. The fix is only intended to restore the original behaviour when the database is at level 80 (2000).
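To compare both behaviours side by side, you can toggle the compatibility level (SQL Server 2005 uses sp_dbcmptlevel for this); a quick sketch, assuming the database is named [test]:

```sql
-- Level 80 (SQL Server 2000): with the KB920974 hotfix applied, MaterialId
-- reports is_identity = 1 again when queried through the view
EXEC sp_dbcmptlevel 'test', 80
SELECT name, is_identity FROM sys.columns
WHERE object_id = OBJECT_ID('VMaterials')

-- Level 90 (SQL Server 2005): is_identity is 0 for every column, BY DESIGN
EXEC sp_dbcmptlevel 'test', 90
SELECT name, is_identity FROM sys.columns
WHERE object_id = OBJECT_ID('VMaterials')
```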

The real problem: despite this different behaviour, the view is still insertable using T-SQL. Our real problem, then, is with MS Access.

We are using MS Access 2003 as the front-end for our SQL Server 2005 corporate database. With this new behaviour, MS Access 2003 is not able to do INSERTs in such views. When MS Access asks SQL Server for the view's metadata (to know field types, lengths, primary keys, nullable fields, and identities), SQL Server incorrectly returns that the identity field is no longer an identity field:

  1. Create a new MS Access 2003 database project and connect to your server and database.
  2. Switch to the Queries objects and open VMaterials (note that the first field would be expected to be marked as (AutoNumber), and it is not).
  3. Try to insert a new record:
    • If you try to leave MaterialId blank and enter Description='Sand', MeasureId=2, you will get the error: Invalid input parameter values. Check the status values for details.
    • If you give any value to MaterialId with Description='Sand', MeasureId=2 as before, you get the following error message: Cannot insert explicit value for identity column in table 'TMaterials' when IDENTITY_INSERT is set to OFF

As of now, our incident is still open and no solution or workaround has been given yet (apart from the obvious fallback to compatibility level 80).


Numbered stored procedures will be deprecated

I was just browsing the BOL regarding CREATE PROCEDURE and found some words about numbered stored procedures:

This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
I have some of them in my corporate application. The purpose is to get something like overloaded functions and methods in OOP (object-oriented programming), i.e.:
void foo(int x) {
   printf("x is %d\n", x);
}
void foo(char *x) {
   printf("x is '%s'\n", x);
}

In T-SQL we would have:

CREATE PROCEDURE [dbo].[foo];1 @x int AS
  PRINT 'x is ' + CONVERT(varchar(8), @x)
CREATE PROCEDURE [dbo].[foo];2 @x char AS
  PRINT 'x is ' + @x

The fact is that, in order to call these stored procedures, you need to know the 'suffix' to apply, such as:

EXEC foo;1 33
-- x is 33
EXEC foo;2 'a'
-- x is a
EXEC foo;1 'a'
--Msg 8114, Level 16, Status 1, Procedure foo, Line 0
--Error converting data type varchar to int.

This makes the feature useless from the point of view of behaving like OOP overloaded functions. For that reason, and since Microsoft plans to deprecate numbered stored procedures in the next releases of SQL Server, I will remove them from my application (and so should you). The easiest way to achieve the same functionality with minimal effort is:

CREATE PROCEDURE [dbo].[foo_1] @x int AS
  PRINT 'x is ' + CONVERT(varchar(8), @x)
CREATE PROCEDURE [dbo].[foo_2] @x char AS
  PRINT 'x is ' + @x

If you want to know whether your database uses numbered stored procedures somewhere, you can execute this query (note that the LIKE patterns can also match unrelated text, so review the results):

SELECT DISTINCT sys.objects.name, sys.sql_modules.definition
FROM sys.sql_modules
  INNER JOIN sys.objects on sys.sql_modules.object_id = sys.objects.object_id
WHERE sys.sql_modules.definition LIKE '%;1%'
   OR sys.sql_modules.definition LIKE '%;2%'
   OR sys.sql_modules.definition LIKE '%;3%'
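A tighter alternative, avoiding false positives from the LIKE patterns (';1' can appear anywhere inside a definition), is the sys.numbered_procedures catalog view, which lists every numbered procedure whose number is greater than 1:

```sql
SELECT OBJECT_NAME(object_id) AS procedure_name, procedure_number
FROM sys.numbered_procedures
ORDER BY procedure_name, procedure_number
```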


How to deploy foreign keys to subscribers that are using queued updates

Scenario: Transactional replication with queued updates and a 'subscriber wins' conflict resolution policy (exec sp_addpublication ... @conflict_policy = N'sub wins' ...). Several publications are used to deploy the database instead of a single big publication. Since updates are queued, subscribers must know about foreign key (FK) constraints in order to prevent users from breaking the rules, so referential integrity must be replicated to subscribers. The command to create articles matching these requirements is:

exec sp_addarticle ... @schema_option = 0x0000000008035FDF ...

and a sparse explanation of @schema_option bitmask is:

         0x8000000 Create any schemas not already present on the subscriber.
           0x10000 Replicates CHECK constraints as NOT FOR REPLICATION ...
           0x20000 Replicates FOREIGN KEY constraints as NOT FOR REPLICATION...
            0x4000 Replicates UNIQUE constraints. Any indexes related to the...
            0x1000 Replicates column-level collation.
             0x100 Replicates user triggers on a table article, if defined...
             0x200 Replicates foreign key constraints. If the referenced table ...
             0x400 Replicates check constraints. Not supported for Oracle...
             0x800 Replicates defaults. Not supported for Oracle Publishers.

You can read more about sp_addarticle and @schema_option in BOL. If you use Management Studio rather than T-SQL you don't have to worry about this bitmask mess: the GUI does everything for you.

Problem: Even though SQL Server 2005 has the option to publish referential integrity among tables in a publication, this option only works when both tables are inside the same publication. If any referential integrity goes TO a table outside the publication or comes FROM a table outside the publication, that foreign key (FK) is not replicated to subscribers.

Besides, if you are using SQL Server 2000, this option is not available at all. If you want to use queued updates, the subscribers must have all foreign key constraints applied in order to avoid problems. The stored procedures and the semi-automated way of publishing foreign keys described in this document can also be used if you are still on SQL Server 2000.

Subscribers are unaware of those restrictions, so a subscriber may receive a command that would violate a foreign key on the publisher and still execute and queue it without any warning/error to the user/client application. Afterwards, when the queued command is replicated to the publisher, it is rolled back because it goes against a foreign key that only the publisher knows about.

In the following picture you can see a sample of the problem under discussion. There we have two publications, QUOTATIONS and ORDERS. Some foreign keys lie completely inside one publication, but there is another one (FK_TOrders_TQuotations) that goes beyond the limits of a single publication: it starts in ORDERS and finishes in QUOTATIONS.

Orders and Quotations
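For reference, the crossing constraint in the picture could be defined like the following sketch (the column names are hypothetical; only the constraint and table names come from the example):

```sql
-- FK_TOrders_TQuotations starts in ORDERS (TOrders) and ends in
-- QUOTATIONS (TQuotations), crossing the publication boundary
ALTER TABLE [dbo].[TOrders] WITH CHECK
  ADD CONSTRAINT [FK_TOrders_TQuotations]
  FOREIGN KEY ([IdQuotation])           -- hypothetical column
  REFERENCES [dbo].[TQuotations] ([Id]) -- hypothetical column
```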

This kind of foreign key is not replicated to subscribers even though the option to replicate FKs is enabled. If you are using the 'publisher wins' policy, the offending command is rolled back and the original record is restored at the subscriber (seconds or minutes later). The user there is not warned about this fact; he might still think that his changes were committed (but they were not).

In the worst possible scenario, if you are using the 'subscriber wins' policy, when the offending command is sent to the publisher for replication, it cannot be committed because it breaks referential integrity; but since 'subscriber wins' is set, the command is not sent back to the subscriber to be undone. In this case, the subscriber will hold the new (but inconsistent) record, and the publisher will hold the old (but consistent) record.

After some hours, or even days, if the publication is validated on a scheduled basis using sp_publication_validation, the publication will appear as having validation errors. If the publication is then reinitialised, the subscribers will receive the publisher's version of the data (foreign key consistent).
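Such a scheduled validation can be requested with a call similar to the following sketch (the publication name is hypothetical):

```sql
-- Ask the distribution agents to validate all subscriptions of the
-- publication; mismatches show up later in the agent history/alerts
EXEC sp_publication_validation
    @publication   = N'ORDERS',  -- hypothetical publication name
    @rowcount_only = 1           -- 1 = rowcount check only
```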

In any case, the changes committed initially at the subscriber are undone without any error or warning by the time the statements are executed at the subscriber.

My feedback to Microsoft on this issue: Subscriptions get desynched easily when subscriber uses queued updates and there are foreign keys that point outside the articles in the publication

Solution: The main idea behind this solution is to use post-snapshot scripts to create, at the subscribers, the foreign keys that are not replicated automatically by SQL Server.

We must also create pre-snapshot scripts to drop the FKs that might prevent the tables from being dropped and recreated by the initial snapshot at the subscribers.

In our example, in order to successfully publish the QUOTATIONS publication, we first need to try dropping FK_TOrders_TQuotations because it points to TQuotations; if that FK is already in place, TQuotations cannot be dropped in order to be recreated and populated by the initial snapshot. (This only applies if the action for an in-use object is drop and recreate: exec sp_addarticle ... @pre_creation_cmd = N'drop' ....)

So, we will need pre-snapshot scripts to drop those FKs that might be already in place and post-snapshot scripts to re-create them.
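If the publications already exist, those scripts can be attached with sp_changepublication; a sketch (the paths below are hypothetical and must be reachable by the agents):

```sql
EXEC sp_changepublication
    @publication = N'QUOTATIONS',
    @property    = N'pre_snapshot_script',
    @value       = N'\\DISTRIBUTOR\Scripts\QUOTATIONS_pre.sql'

EXEC sp_changepublication
    @publication = N'QUOTATIONS',
    @property    = N'post_snapshot_script',
    @value       = N'\\DISTRIBUTOR\Scripts\QUOTATIONS_post.sql'
```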

Is there any way to automate the process of creating those scripts? If the database you are trying to replicate is somewhat big, you will surely have hundreds of tables with an average of at least 3 or 4 foreign keys each. Having to manually find and script the creation (and deletion) of those FKs that cross the limits of a publication is a headache (at least).

Besides, you will need to maintain those scripts whenever the schema changes or new tables are created. Manually managing pre-snapshot and post-snapshot scripts is not a solution for real environments.

Prerequisites: As commented in the feedback to Microsoft above, there are some conditions that your database must meet in order to be able to use the scripts.

  • The tables must be published using their original names.
  • The scripts will try to create/delete both incoming and outgoing foreign keys for every single article/table. This ensures that a FK is still created if, in the first instance, one of the tables involved is not yet published: it will be created during the post-snapshot script of the second publication's initialization.
  • FKs will be created at the subscribers with the same name they have at the publisher; if a FK with that name already exists, the scripts assume it is already in place, and it will not be created twice or under another name.
  • If a FK references tables that are both already at the subscriber, the scripts will try to create the FK. The schema will not be checked; this is why the tables must be deployed with their original names.
  • If horizontal or vertical partitions (row filters) are applied to any of the articles, you must make sure not to remove rows/columns that would violate any of the existing FKs at the publisher when applied at the subscriber (the scripts will not check this).

Scripting: We will discuss here a procedure to automate the process of creating those scripts. We will use an auxiliary publication SyncForeignKeysForSubscribers that will contain 2 articles (tables) called TSyncForeignKeysForSubscribersCREATE and TSyncForeignKeysForSubscribersDROP. The scripts to create those 2 tables are:

CREATE TABLE [dbo].[TSyncForeignKeysForSubscribersCREATE](
  [PublicationName] [varchar](256) COLLATE Modern_Spanish_CI_AS NOT NULL,
  [FKTableName] [varchar](1024) COLLATE Modern_Spanish_CI_AS NOT NULL,
  [PKTableName] [varchar](1024) COLLATE Modern_Spanish_CI_AS NOT NULL,
  [FKName] [varchar](1024) COLLATE Modern_Spanish_CI_AS NOT NULL,
  [TheRest] [varchar](1024) COLLATE Modern_Spanish_CI_AS NOT NULL
)

CREATE TABLE [dbo].[TSyncForeignKeysForSubscribersDROP](
  [PublicationName] [varchar](256) COLLATE Modern_Spanish_CI_AS NOT NULL,
  [TableName] [varchar](1024) COLLATE Modern_Spanish_CI_AS NOT NULL,
  [FKName] [varchar](1024) COLLATE Modern_Spanish_CI_AS NOT NULL
)

Those two tables will be filled at the publisher with the information about the FKs and replicated to subscribers. Once that information reaches the subscribers, it is ready to be used by the pre- and post-snapshot scripts to drop and recreate the FKs.

UpdateForeignKeysInformationForReplicationOnAllPublications

This stored procedure is to be called at the publisher. It is a wrapper that calls UpdateForeignKeysInformationForReplicationOnPublication for every publication on the current database.

CREATE PROCEDURE [dbo].[UpdateForeignKeysInformationForReplicationOnAllPublications]
AS
BEGIN
  DECLARE @PublicationName varchar(256)

  -- SET NOCOUNT ON added to prevent extra result sets from
  -- interfering with SELECT statements.
  SET NOCOUNT ON

  -- We retrieve the list of publications that are published 
  -- inside the database we are running into.
  DECLARE ListOfPublications CURSOR FOR
    SELECT publication FROM distribution.dbo.MSpublications
    WHERE publisher_db = (SELECT db_name(dbid) FROM master..sysprocesses WHERE spid=@@SPID)
    ORDER BY publication

  -- Just iterate on them calling to the other SP that we will discuss later
  OPEN ListOfPublications
  FETCH NEXT FROM ListOfPublications INTO @PublicationName
  WHILE @@FETCH_STATUS = 0
  BEGIN
      PRINT 'Processing publication ' + @PublicationName
      EXEC dbo.UpdateForeignKeysInformationForReplicationOnPublication @PublicationName
      FETCH NEXT FROM ListOfPublications INTO @PublicationName
  END
  CLOSE ListOfPublications
  DEALLOCATE ListOfPublications
END

UpdateForeignKeysInformationForReplicationOnPublication @PublicationName varchar(256)

This stored procedure is to be called at the publisher too, and does all the work of retrieving the FK information in order to publish it to subscribers. It should be called every time your schema changes or new FKs are created. It iterates over the FKs defined in the database that @PublicationName belongs to, looking for those that go TO a table outside the articles of the publication or come FROM a table that is not included in it. Once a matching FK is found, its information is saved in the replicated tables so that subscribers can use this info later.

CREATE PROCEDURE [dbo].[UpdateForeignKeysInformationForReplicationOnPublication] 
  @PublicationName varchar(256)
AS
BEGIN
  DECLARE
    @TableName varchar(128),
    @PKTABLE_NAME varchar(128),
    @PKCOLUMN_NAME varchar(128),
    @FKTABLE_NAME varchar(128),
    @FKCOLUMN_NAME varchar(128),
    @FK_NAME varchar(128),
    @sql varchar(2048)

  -- SET NOCOUNT ON added to prevent extra result sets from
  -- interfering with SELECT statements.
  SET NOCOUNT ON

  -- Create a temporary table to store middle calculations
  -- (the columns match the result set of sp_fkeys)
  CREATE TABLE #tmpFKs (
    PKTABLE_QUALIFIER varchar(1024),
    PKTABLE_OWNER varchar(1024),
    PKTABLE_NAME varchar(1024),
    PKCOLUMN_NAME varchar(1024),
    FKTABLE_QUALIFIER varchar(1024),
    FKTABLE_OWNER varchar(1024),
    FKTABLE_NAME varchar(1024),
    FKCOLUMN_NAME varchar(1024),
    KEY_SEQ int,
    UPDATE_RULE int,
    DELETE_RULE int,
    FK_NAME varchar(1024),
    PK_NAME varchar(1024),
    DEFERRABILITY int
  )

  -- 1st step: Retrieve all outgoing FKs: from articles within the publication to other tables 
  -- outside this publication. We will use a cursor to iterate through all those articles
  -- inside the publication and retrieve any FK pointing out to any other table. Afterwards, we 
  -- will delete those FKs that fall inside the publication articles.
  PRINT 'Step 1: Inside -> Outside FKs from ' + @PublicationName
  DECLARE ListOfArticles CURSOR FOR
    SELECT DISTINCT A.destination_object
    FROM distribution.dbo.MSpublications P 
      INNER JOIN distribution.dbo.MSarticles A ON P.publication_id = A.publication_id
    WHERE P.publication = @PublicationName
    ORDER BY A.destination_object

  OPEN ListOfArticles
  FETCH NEXT FROM ListOfArticles INTO @TableName
  WHILE @@FETCH_STATUS = 0
  BEGIN
      SELECT @sql = 'sp_fkeys @fktable_name = N''' + @TableName + ''''
      INSERT INTO #tmpFKs EXEC(@sql)
      -- From those just inserted, delete those FKs that are internal (among articles inside the
      -- publication). At the end, we are just interested in those FKs pointing out to tables 
      -- NOT in this publication.
      DELETE FROM #tmpFKs
      WHERE PKTABLE_NAME IN (  -- Subselect with publication's own articles.
        SELECT A.destination_object
        FROM distribution.dbo.MSpublications P
          INNER JOIN distribution.dbo.MSarticles A ON P.publication_id = A.publication_id
        WHERE P.publication = @PublicationName)
      FETCH NEXT FROM ListOfArticles INTO @TableName
  END
  CLOSE ListOfArticles
  DEALLOCATE ListOfArticles

  -- 2nd step: We need to retrieve all incoming FKs: from tables NOT included in the publication
  -- to articles inside it. We will retrieve all incoming FKs for the tables inside the publication
  -- and then remove those that fall again within tables inside it.
  PRINT 'Step 2: Inside <- Outside FKs from ' + @PublicationName
  DECLARE ListOfOutterArticles CURSOR FOR
    SELECT [name] TableName FROM sys.objects T
    WHERE T.type='U' AND 
          T.is_ms_shipped = 0 AND
          T.[name] NOT LIKE 'conflict_%' /* AND
          T.is_published = 1 */ AND
          T.[name] NOT IN (SELECT A.destination_object
                           FROM distribution.dbo.MSpublications P 
                             INNER JOIN distribution.dbo.MSarticles A 
                               ON P.publication_id = A.publication_id
                           WHERE P.publication = @PublicationName)

  OPEN ListOfOutterArticles
  FETCH NEXT FROM ListOfOutterArticles INTO @TableName
  WHILE @@FETCH_STATUS = 0
  BEGIN
      SELECT @sql = 'sp_fkeys @fktable_name = N''' + @TableName + ''''
      INSERT INTO #tmpFKs EXEC(@sql)
      -- From those just inserted, delete FKs that are completely out of the publication
      -- (from tables outside to tables outside)
      -- We are just interested in those pointing to articles inside this publication
      DELETE FROM #tmpFKs
      WHERE
        FKTABLE_NAME = @TableName AND
        PKTABLE_NAME NOT IN (  -- Subselect with publication's own articles.
          SELECT A.destination_object
          FROM distribution.dbo.MSpublications P
            INNER JOIN distribution.dbo.MSarticles A ON P.publication_id = A.publication_id
          WHERE P.publication = @PublicationName)
      FETCH NEXT FROM ListOfOutterArticles INTO @TableName
  END
  CLOSE ListOfOutterArticles
  DEALLOCATE ListOfOutterArticles

  -- 3rd step: Some format and cleaning of the auxiliary table.
  -- If there are FKs with more than one key column, KEY_SEQ indicates the 
  -- order we need to follow when considering FKCOLUMN_NAME and PKCOLUMN_NAME.
  -- We are going to combine those records where KEY_SEQ > 1, flattening 
  -- FKCOLUMN_NAME and PKCOLUMN_NAME into the single record where KEY_SEQ=1
  PRINT 'Step 3: Reorganize #tmpFKs    from ' + @PublicationName

  -- First of all, we use QUOTENAME for not having problems afterwards
  UPDATE #tmpFKs 
  SET FKCOLUMN_NAME = QUOTENAME(FKCOLUMN_NAME),
      PKCOLUMN_NAME = QUOTENAME(PKCOLUMN_NAME)

  -- Cursor with records belonging to FKs defined with more than one key column
  -- except this very first column ( > 1 )
  DECLARE ListOfKeys CURSOR FOR
    SELECT PKTABLE_NAME, PKCOLUMN_NAME, FKTABLE_NAME, FKCOLUMN_NAME, FK_NAME
    FROM #tmpFKs
    WHERE KEY_SEQ > 1
    ORDER BY FK_NAME, KEY_SEQ

  OPEN ListOfKeys
  FETCH NEXT FROM ListOfKeys INTO @PKTABLE_NAME, @PKCOLUMN_NAME,
                                  @FKTABLE_NAME, @FKCOLUMN_NAME, @FK_NAME
  WHILE @@FETCH_STATUS = 0
  BEGIN
      -- Update the 'main' record of this FK (the one with KEY_SEQ=1)
      -- and matching the rest of variables...
      UPDATE #tmpFKs
      SET FKCOLUMN_NAME = FKCOLUMN_NAME + ', ' + @FKCOLUMN_NAME,
          PKCOLUMN_NAME = PKCOLUMN_NAME + ', ' + @PKCOLUMN_NAME
      WHERE FKTABLE_NAME = @FKTABLE_NAME AND
            PKTABLE_NAME = @PKTABLE_NAME AND
            FK_NAME      = @FK_NAME AND
            KEY_SEQ      = 1
      FETCH NEXT FROM ListOfKeys INTO @PKTABLE_NAME, @PKCOLUMN_NAME,
                                      @FKTABLE_NAME, @FKCOLUMN_NAME, @FK_NAME
  END
  CLOSE ListOfKeys
  DEALLOCATE ListOfKeys

  -- 4th step: Clean and final inserts
  PRINT 'Step 4: Clean & final inserts from ' + @PublicationName

  -- Remove records with KEY_SEQ>1 (we don't need them anymore)
  DELETE FROM #tmpFKs WHERE KEY_SEQ > 1

  -- Now we have all the information we need. We just need to do the inserts
  -- into TSyncForeignKeysForSubscribersDROP formatting according to the information 
  -- in this auxiliary table. This table is to be read by the stored procedure
  -- UseForeignKeysInformationAndDoDropsOnPublication thrown by a pre-snapshot script
  -- at the subscriber.
  DELETE TSyncForeignKeysForSubscribersDROP WHERE PublicationName = @PublicationName
  INSERT INTO TSyncForeignKeysForSubscribersDROP (PublicationName, TableName, FKName)
    SELECT @PublicationName, FKTABLE_NAME, FK_NAME FROM #tmpFKs
  -- Now the commands to recreate those FKs after the initialization of the
  -- publication is done. This table is to be read by the stored procedure
  -- UseForeignKeysInformationAndDoCreatesOnPublication included in every 
  -- post-snapshot script.
  DELETE TSyncForeignKeysForSubscribersCREATE WHERE PublicationName = @PublicationName
  INSERT INTO
    TSyncForeignKeysForSubscribersCREATE (PublicationName, FKTableName, PKTableName, FKName, TheRest)
      SELECT @PublicationName, FKTABLE_NAME, PKTABLE_NAME, FK_NAME,
             'FOREIGN KEY (' + FKCOLUMN_NAME + ') REFERENCES ' +
             QUOTENAME(PKTABLE_NAME) + ' (' + PKCOLUMN_NAME + ')'
      FROM #tmpFKs

  -- Finally drop the temporary table.
  DROP TABLE #tmpFKs

  -- Once executed, the tables TSyncForeignKeysForSubscribersCREATE and 
  -- TSyncForeignKeysForSubscribersDROP contain the updated instructions that 
  -- subscribers need to apply to replicate the foreign keys.
  PRINT ''
END

UseForeignKeysInformationAndDoDropsOnPublication @PublicationName varchar(256)

This stored procedure should be called only on subscribers, since it DROPS all the FKs recorded in the table TSyncForeignKeysForSubscribersDROP for @PublicationName. The idea is to include this call in a pre-snapshot script for every publication, so that FKs that would conflict with dropping and recreating the tables are no longer in place when the actual DROP TABLE command arrives (that is why it must run in a pre-snapshot script).

CREATE PROCEDURE [dbo].[UseForeignKeysInformationAndDoDropsOnPublication]
  @PublicationName varchar(256)
AS
BEGIN
  DECLARE
    @i int,
    @TableName varchar(1024),
    @FKName varchar(1024),
    @sql nvarchar(2048),
    @IsSQL2000 BINARY

  -- SET NOCOUNT ON added to prevent extra result sets from
  -- interfering with SELECT statements.
  SET NOCOUNT ON

  -- Are we running on a SQL 2000 version?
  -- There must be 2 spaces before 2000%, this is not a typo
  SELECT @IsSQL2000 = COUNT(*) FROM (SELECT @@version AS V) AUX
    WHERE AUX.V LIKE 'Microsoft SQL Server  2000%'

  DECLARE FKsToDROP CURSOR FOR
    SELECT TableName, FKName FROM TSyncForeignKeysForSubscribersDROP
      WHERE PublicationName = @PublicationName

  OPEN FKsToDROP
  FETCH NEXT FROM FKsToDROP INTO @TableName, @FKName
  WHILE @@FETCH_STATUS = 0
  BEGIN
      SET @i = 0
      IF @IsSQL2000<>0
        SELECT @i = COUNT(*) FROM dbo.sysobjects
          WHERE xtype='F' AND type='F' AND Name = @FKName
      ELSE
        SELECT @i = COUNT(*) FROM sys.foreign_keys
          WHERE Name = @FKName

      IF (@i = 1) BEGIN -- The FK is in place, we need to remove it
        SELECT @i = COUNT(*) FROM sysobjects WHERE TYPE = 'U' AND Name = @TableName
        IF @i > 0 BEGIN -- If the table exists, we can go on
          SELECT @sql = N'ALTER TABLE ' + QUOTENAME(@TableName) +
                        ' DROP CONSTRAINT ' + QUOTENAME(@FKName)
          EXEC sp_executesql @sql
          PRINT 'Successfully dropped: ' + QUOTENAME(@FKName)
        END ELSE BEGIN -- The table is not there
          PRINT 'No action done:       ' + QUOTENAME(@TableName) + ' doesn''t exist.'
        END
      END ELSE BEGIN -- The FK is not there, do nothing
        PRINT 'No action done:       ' + QUOTENAME(@FKName)
      END
      FETCH NEXT FROM FKsToDROP INTO @TableName, @FKName
  END
  CLOSE FKsToDROP
  DEALLOCATE FKsToDROP
END

UseForeignKeysInformationAndDoCreatesOnPublication @PublicationName varchar(256)

This stored procedure is to be called on subscribers. It uses the information stored in TSyncForeignKeysForSubscribersCREATE to issue the CREATE commands for the FKs related to @PublicationName. It should be included in every post-snapshot script so that the FKs are recreated after the initial snapshot is sent.

CREATE PROCEDURE [dbo].[UseForeignKeysInformationAndDoCreatesOnPublication]
  @PublicationName varchar(256)
AS
BEGIN
  DECLARE
    @i INT,
    @FKTableName varchar(1024),
    @PKTableName varchar(1024),
    @FKName varchar(1024),
    @TheRest varchar(1024),
    @sql nvarchar(2048),
    @IsSQL2000 BINARY

  -- SET NOCOUNT ON added to prevent extra result sets from
  -- interfering with SELECT statements.
  SET NOCOUNT ON

  -- Are we running on a SQL 2000 version?
  -- There must be 2 spaces before 2000%, this is not a typo
  SELECT @IsSQL2000 = COUNT(*) FROM (SELECT @@version AS V) AUX
    WHERE AUX.V LIKE 'Microsoft SQL Server  2000%'

  DECLARE FKsToCREATE CURSOR FOR
    SELECT FKTableName, PKTableName, FKName, TheRest FROM TSyncForeignKeysForSubscribersCREATE
      WHERE PublicationName = @PublicationName

  OPEN FKsToCREATE
  FETCH NEXT FROM FKsToCREATE INTO @FKTableName, @PKTableName, @FKName, @TheRest
  WHILE @@FETCH_STATUS = 0
  BEGIN
      SET @i = 0
      IF @IsSQL2000<>0
        SELECT @i = COUNT(*) FROM dbo.sysobjects
          WHERE xtype='F' AND type='F' AND Name = @FKName
      ELSE
        SELECT @i = COUNT(*) FROM sys.foreign_keys
          WHERE Name = @FKName
      IF (@i = 0) BEGIN -- If the FK is not in place, we will try to create it

        SELECT @i = COUNT(*) FROM sysobjects
          WHERE TYPE = 'U' AND (Name = @FKTableName OR Name = @PKTableName)
        IF @i = 2 BEGIN -- If both tables exist, we can go on
          SELECT @sql = N'ALTER TABLE ' + QUOTENAME(@FKTableName) + 
                        ' WITH CHECK ADD CONSTRAINT ' + QUOTENAME(@FKName) + ' ' + @TheRest
          EXEC sp_executesql @sql
          SELECT @sql = N'ALTER TABLE ' + QUOTENAME(@FKTableName) +
                        ' CHECK CONSTRAINT ' + QUOTENAME(@FKName)
          EXEC sp_executesql @sql
          PRINT 'Successfully created and applied:  ' + QUOTENAME(@FKName)
        END ELSE BEGIN -- One or both tables are missing
          PRINT 'Nothing done, tables do not exist: ' + QUOTENAME(@FKTableName) +
                                               ' or ' + QUOTENAME(@PKTableName)
        END
      END ELSE BEGIN -- If it is already there, do nothing
        PRINT 'No action done, already in place:  ' + QUOTENAME(@FKName)
      END
      FETCH NEXT FROM FKsToCREATE INTO @FKTableName, @PKTableName, @FKName, @TheRest
  END
  CLOSE FKsToCREATE
  DEALLOCATE FKsToCREATE
END
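With the four procedures in place, the pre- and post-snapshot script of each publication reduces to a single call each; a minimal sketch (file names and publication name are hypothetical):

```sql
-- QUOTATIONS_pre.sql: executed at the subscriber before the snapshot,
-- so conflicting FKs are gone when the DROP TABLE commands arrive
EXEC dbo.UseForeignKeysInformationAndDoDropsOnPublication N'QUOTATIONS'

-- QUOTATIONS_post.sql: executed after the snapshot has recreated and
-- populated the tables, so the FKs can be recreated safely
EXEC dbo.UseForeignKeysInformationAndDoCreatesOnPublication N'QUOTATIONS'
```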

Download: You can also download the scripts described in this article.

Keywords: transactional replication, queued updates, foreign keys, SQL Server 2005, Microsoft, bug, error, workaround, resolution, transact-sql, stored procedure, replicate foreign keys to subscribers


Grant VIEW DEFINITION permission to tables and/or views

SQL Server 2005 still supports all of the fixed server and database roles introduced in SQL Server 2000, along with their default permissions. However, custom database roles, application roles, as well as server logins and database users, can be granted (or denied) more granular permissions on a much wider range of objects. Some of the most commonly used permission types present in earlier versions of SQL Server, such as EXECUTE, SELECT, or TAKE OWNERSHIP, offer new possibilities in their new implementation (since they can be applied on different levels and offer inheritance capabilities). There are also newly introduced ones, including, for example:

  • CONTROL - functionally equivalent to all permissions granted to the object's owner and inherited by all subentities within its scope,
  • ALTER - provides the ability to alter properties of an object. Depending on the scope, you can limit its inheritance to objects of a specific type (for example, its variation in the form ALTER ANY 'object_type' grants permissions to modify every instance of 'object_type' within server or database scope). ALTER TRACE allows running Profiler without membership in SysAdmin fixed server role.
  • IMPERSONATE - as mentioned earlier, permits impersonating another user (without requiring SysAdmin or dbo privileges, as was the case in SQL Server 2000),

According to BOL about VIEW DEFINITION Permission:

The VIEW DEFINITION permission lets a user see the metadata of the securable on which the permission is granted. However, VIEW DEFINITION permission does not confer access to the securable itself. For example, a user that is granted only VIEW DEFINITION permission on a table can see metadata related to the table in the sys.objects catalog view. However, without additional permissions such as SELECT or CONTROL, the user cannot read data from the table.

And now the reason for this blog entry: if you are using MS Access as your front end for SQL Server 2005, and you have tables with non-nullable columns with default values, for instance:

CREATE TABLE [dbo].[MyTable] (
  [Id] [int] NOT NULL,
  ... more columns...
  [ColumnName] [int] NOT NULL CONSTRAINT [DF_TMyTable_ColumnName] DEFAULT ((0))
)
your MS Access application will not be aware of the default values for the columns in the table, and your insertions will fail unless the client manually supplies a value for those columns. According to BOL on Viewing Object Definitions, this is by design:

If the user is not the table owner of myTable and has been granted only SELECT permission on the table, when the user accesses the following catalog views, the definition columns are NULL.
Catalog view name        Transact-SQL definitions for
sys.sql_modules          All triggers in myTable
sys.computed_columns     All computed columns in myTable
sys.check_constraints    All CHECK constraints in myTable
sys.default_constraints  All DEFAULT constraints in myTable

If you, your application, or your users were used to working with SQL Server 2000, this will appear as new behaviour, since granting SELECT access to a table in SQL Server 2000 also granted the right to read the structure of the table itself. In SQL Server 2005 this is a separate right and has to be granted explicitly. In fact it is the least privilege: you can only read the structure of the object (securable) but nothing else; you cannot even read data (SELECT). To solve the problem with MS Access not inserting the default values for tables and views, you simply have to grant users the VIEW DEFINITION right on the tables/views.

Keywords: SQL Server 2005, identity, not null, not nullable, default, default value, constraint, view definition, ms access, problem, issue, insert, select, permission.
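For instance, a minimal sketch of the fix (user name hypothetical, table from the example above):

```sql
-- Without this, the Access front end sees NULL definitions in
-- sys.default_constraints and ignores the DEFAULT values
GRANT VIEW DEFINITION ON OBJECT::[dbo].[MyTable] TO [AccessUser]
GRANT SELECT, INSERT  ON OBJECT::[dbo].[MyTable] TO [AccessUser]
```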


Offline files errors during synchronization

If you ever find the following error message when synching your Offline Files (Client Side Caching - CSC):
Unable to merge offline changes on \\server_name\share_name. The parameter is incorrect.
Or any of the following:
Using last known good configuration
File not found
More Details Available

You can apply the following procedure: How to re-initialize the offline files cache and database

Links: How to re-initialize the offline files cache and database Clint's post on Debugging Problems With Offline Files dated Tuesday, November 08, 2005 8:56 PM.

More errors: But there are other things that can happen (and really happen after all) sometimes. You might also get an Access is Denied error and it is not always because of wrong permissions. If you are sure that your NTFS permissions are correctly configured (because it works perfectly for other users, for instance) and still get the access denied error, you can try this:

  1. Disable the NIC
  2. Create a new file (anything will do) in the directory you are trying to synchronise
  3. Re-enable the NIC
  4. Synchronise
More links: An anonymous comment on Power Up Offline Files, dated Tuesday 1/13/04, says:
I know it is easy to complain but administrators need less fluff more substance. Your article gives a 5000 foot view. I can get all the 5000 foot view information from MS documentation... Offline files is great feature not just for mobile users but as a way to have “My Documents” be available to users when network connectivity is down and have them on a central file server for easy backup. Unfortunately, it either works or it does not. There is no middle ground and virtually nothing in the way of troubleshooting. For instance, it is not uncommon to receive an "Access is Denied" error when first configured. MS states that it is a permissions issue. However, this is not always the case. In a recent migration of 2000 users, we ran into this error 5% of the time. The fix is to disable the NIC, create a new file, enable the NIC and synchronize. Why? I do not know but it fixes it. This is the type of information that is truly useful.
Simply the kind of things that NOBODY would ever think about, but DO solve the problem.


Using TOP 100 PERCENT in a view (SQL 2000 vs 2005 differences)

I'm still in the middle of the process of migrating our SQL Server 2000 servers to 2005. By now, all I have is a SQL 2005 publisher, but the subscriber is still a 2000 version. I ran into problems when I started publishing all my views because of the different syntax between the 2000 and 2005 versions regarding the TOP clause.

In SQL Server 2000, the syntax is TOP n [PERCENT], while in SQL Server 2005 the syntax is TOP (expression) [PERCENT]. Please note the parentheses.
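Using a sample query to show the difference:

```sql
-- Accepted by both SQL Server 2000 and 2005:
SELECT TOP 100 PERCENT Field1, Field2 FROM Table1 ORDER BY Field1
-- Generated by the SQL Server 2005 query designer; rejected by 2000:
SELECT TOP (100) PERCENT Field1, Field2 FROM Table1 ORDER BY Field1
```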

Those parentheses are the reason why the publications fail when the initial snapshot is sent to the subscriber (2000 version). I have been trying to find a way for SQL 2005 not to generate the extra parentheses, but if you use Management Studio and the query designer to create the view, it seems there is no way to get rid of them. I also tried setting the database compatibility mode to 80 (2000), but it makes no difference. Whether the database mode is 90 (2005) or 80 (2000), the extra parentheses are still there, making it impossible to publish a view that has an ORDER BY clause (for instance) to SQL Server 2000 subscribers.

Solution: I had to remove all TOP (n) PERCENT and ORDER BY clauses to be able to publish my views correctly, and modify my application accordingly. Of course, if you do not mind hand-writing T-SQL instead of using the visual query designer, you could still create your views with the 2000 syntax (without the parentheses), which is still valid in 2005; but in my case it was not worth the extra work, and I decided to keep using the query designer and remove the TOP clauses.

Just for your information, I found this document, SQL 2000 v. 2005 - Using Top 100 Percent in a View, which threw some light on the drawbacks of having TOP clauses in a view:

The obvious Pro is simplicity in access. While adding the ORDER BY to the query against the view really isn't all that difficult, it does make it a bit easier for quick/simple query access. BUT - there's a HUGE con here too. If the view starts getting used for other purposes (like in joins to other tables), then the being ordered before the joins, etc. can cause you an additional step that is NOT necessary. As a result, performance was compromised.

Because of those performance problems, I decided to solve my problem the hard way: removing them completely. Now I have to apply the ORDER BY clauses elsewhere (in the client application), but not in the views.
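The end result looks like this sketch (View1/Table1 as in the earlier example):

```sql
-- The published view carries no TOP / ORDER BY at all...
CREATE VIEW [dbo].[View1] AS
  SELECT Field1, Field2 FROM Table1
GO
-- ...and the ordering is requested by the client application instead:
SELECT Field1, Field2 FROM View1 ORDER BY Field1
```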


Account Lockout and Management Tools (Additional Account Info)

This one is quite old and I used to have it already installed on my servers, but now we are moving them to new hardware and I needed to google to find it again: Account Lockout and Management Tools. If you did not know it already, this is a suite of (free) tools, released by Microsoft, that can help you troubleshoot password and account information for your Active Directory users. One of the gems of this suite is AcctInfo.dll, which installs an extra tab in the Active Directory Users and Computers snap-in. This new tab (called Additional Account Info) shows you:
  • Password last set (date & time)
  • Password expires (date & time)
  • Locked (Yes / No)
  • Last-Logon-Timestamp (date & time)
  • SID
  • GUID
  • Last Logon (date & time)
  • Last Logoff (date & time)
  • Last Bad Logon (date & time)
  • Logon Count (integer)
  • Bad Password Count (integer)
Extracted from Microsoft documentation:
ALTools.exe includes:
  • AcctInfo.dll. Helps isolate and troubleshoot account lockouts and to change a user's password on a domain controller in that user's site. It works by adding new property pages to user objects in the Active Directory Users and Computers Microsoft Management Console (MMC).
  • ALockout.dll. On the client computer, helps determine a process or application that is sending wrong credentials. Caution: Do not use this tool on servers that host network applications or services. Also, you should not use ALockout.dll on Exchange servers, because it may prevent the Exchange store from starting.
  • ALoInfo.exe. Displays all user account names and the age of their passwords.
  • EnableKerbLog.vbs. Used as a startup script, allows Kerberos to log on to all your clients that run Windows 2000 and later.
  • EventCombMT.exe. Gathers specific events from event logs of several different machines to one central location.
  • LockoutStatus.exe. Determines all the domain controllers that are involved in a lockout of a user in order to assist in gathering the logs. LockoutStatus.exe uses the NLParse.exe tool to parse Netlogon logs for specific Netlogon return status codes. It directs the output to a comma-separated value (.csv) file that you can sort further, if needed.
  • NLParse.exe. Used to extract and display desired entries from the Netlogon log files.
In order to install 'Additional Account Info':
  • Copy acctinfo.dll to the %systemroot%\system32 folder
  • Register the service and dll by running regsvr32 acctinfo.dll
  • Close, then open Active Directory Users and Computers.
NOTE 1: If you are using Windows 2003, you should use the acctinfo.dll provided with the Windows 2003 Resource Kit Tools instead.
NOTE 2: If you are using x64 versions of Windows, please note that the Windows 2003 Resource Kit Tools are not supported on 64-bit platforms.


VSS & ntbackup errors

We run our backups through scheduled tasks with ntbackup. We do a SystemState backup every day (for us this is enough; other scenarios may need more frequent backups). As part of our daily checks, we review the last run status in the Scheduled Tasks of every server to see whether it was successful (error code 0).

Some weeks ago, in order to verify that the backups were restorable, we went to look at the actual file (.bkf) and, to our surprise, it was only 2 KB in size. Fortunately, we did not really need to restore the server; it was running fine, but... not so fine if the backups were not being done. Why did we not notice it before? The Scheduled Task last status said: 0 Ok.

We went to check the Application Event Log and found:
Type: Error
Source: NTBackup
Category: None
Event ID: 8019
Date: 7/17/2006
Time: 10:16:03 PM
User: N/A
Description: End Operation: Warnings or errors were encountered.
Consult the backup report for more details.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
So we went to check the ntbackup log file, located in %USERPROFILE%\Local Settings\Application Data\Microsoft\Windows NT\NTBackup\data, and found:
Backup Status
Operation: Backup
Active backup destination: File
Media name: "System State SERVERNAME.bkf created 2006-07-17 at 22:15"

Volume shadow copy creation: Attempt 1.
"MSDEWriter" has reported an error 0x800423f4. This is part of System State.
The backup cannot continue.

Error returned while creating the volume shadow copy: 800423f4
Aborting Backup.


The operation did not successfully complete.

In Spanish:
Estado de la copia de seguridad
Operación: copia de seguridad
Destino de la copia de seguridad activo: Archivo
Nombre del medio: "System State MYSERVER.bkf creado 17/07/2006 a las 22:15"

Creación de instantánea de volumen: Intento 1.
"MSDEWriter" informó acerca de un error 0x800423f4. Esto forma parte del estado del sistema.
La copia de seguridad no puede continuar.

Error devuelto al crear la instantánea de volumen: 800423f4
Anular la copia de seguridad.


Operación cancelada.

After some research we found that we were experiencing the symptoms described in KB828481 - Error 800423f4 appears in the backup log file when you back up a volume by using the Volume Shadow Copy service in Windows Server 2003, with some particularities: our server is a Windows Server 2003 R2 Standard x64 running as a domain controller, with SQL Server 2005 Standard + SP1 installed. Some of our SQL databases have their recovery model set to Full and must be kept that way (changing them to Simple is not an option). However, if we check the file versions and timestamps on our server and compare them with the data shown in KB828481, we find that ours are newer than those listed there:
ntbackup.exe  5.2.3790.1830 (srv03_sp1_rtm.050324-1447)   30/Nov/2005 14:00 
ws03res.dll   5.2.3790.1830 (srv03_sp1_rtm.050324-1447)   30/Nov/2005 14:00 
Wws03res.dll  Not found in our server
Since the files on our server are newer than those listed in KB828481, we understand that the hotfix described there does not apply to our case, even though we are experiencing the behaviour it describes. After a little more research we found KB913648 - A new Volume Shadow Copy Service update is now available that fixes various Volume Shadow Copy Service problems in Windows Server 2003. That document is newer (July 26, 2006), but a look at the problems fixed by the hotfix suggests that it is not our case either (the scenarios described are far more complex than our simple configuration). Besides, the server had been doing the backups for months (confirmed) before we noticed that it had stopped. Just for your information, when we ran vssadmin list writers in a command prompt, the process seemed to wait indefinitely and, after 30 minutes, we pressed CTRL+C:
C:\WINDOWS\system32>vssadmin list writers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Waiting for responses.
These may be delayed if a shadow copy is being prepared.

The server experiencing the problem does not have a very high workload, neither from the file server point of view (little company with 25 employees) nor from SQL Server's transactional point of view (somewhere near 1,000 transactions per hour). Solution: finally, the steps that solved the problem were:
cd %windir%\system32
net stop vss
regsvr32 /s ole32.dll
regsvr32 /s vss_ps.dll
vssvc /Register
regsvr32 /s /i swprv.dll
regsvr32 /s /i eventcls.dll
regsvr32 /s es.dll
regsvr32 /s stdprov.dll
regsvr32 /s vssui.dll
regsvr32 /s msxml.dll
regsvr32 /s msxml3.dll
regsvr32 /s msxml4.dll
And a reboot of the server.
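If you have more than one server to repair, the re-registration sequence above can be scripted. Here is a minimal Python sketch (the commands are exactly the ones listed above; running them for real requires an elevated prompt on the affected Windows server, and the final reboot is still up to you):

```python
import subprocess

# The sequence that solved our VSS writer problem, in the order listed above.
VSS_FIX_COMMANDS = [
    "net stop vss",
    "regsvr32 /s ole32.dll",
    "regsvr32 /s vss_ps.dll",
    "vssvc /Register",
    "regsvr32 /s /i swprv.dll",
    "regsvr32 /s /i eventcls.dll",
    "regsvr32 /s es.dll",
    "regsvr32 /s stdprov.dll",
    "regsvr32 /s vssui.dll",
    "regsvr32 /s msxml.dll",
    "regsvr32 /s msxml3.dll",
    "regsvr32 /s msxml4.dll",
]

def reregister_vss(dry_run=True):
    """Run the fix commands in order; with dry_run=True, only return them."""
    for cmd in VSS_FIX_COMMANDS:
        if not dry_run:
            subprocess.run(cmd, shell=True, check=True)
    return VSS_FIX_COMMANDS
```

Remember that the last step of the fix is still a reboot of the server.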


ISA Server 2004: fwsrv stopped responding to all requests

We recently installed ISA Server 2004 Service Pack 2 and then applied the recommended update (point 3 is KB897716): Microsoft Internet Security and Acceleration (ISA) Server 2004 Standard Edition RPC Filter Blocks Outlook Traffic from Computers Running Windows Server 2003 Service Pack 1 (SP1). After doing so we did not have to restart the server and everything seemed to be working perfectly. I also read that BITS caching (Background Intelligent Transfer Service, used by Windows Update) was now supported and decided to use it (you can read about how to set it up by searching for Creating the Microsoft Update Cache Rule in Planning, Deployment, and Integration for ISA Server 2004 Service Pack 2). Some days later, for unrelated reasons, we had to restart the server. After that, the Firewall service failed to start, leaving us disconnected from the internet. All network traffic (VPNs, HTTP, email) was blocked. As you can imagine, we were in real trouble. In our application event log there were two kinds of error messages:
Event Type: Error
Event Source: Microsoft ISA Server Control
Event Category: None
Event ID: 14079
Date:  8/2/2006
Time:  12:30:01 PM
User: N/A
Due to an unexpected error, the service fwsrv stopped responding to all
requests. Stop the service or the corresponding process if it does not
respond, and then start it again. Check the Windows event Viewer for
related error messages.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Event Type: Error
Event Source: Microsoft ISA Server
Event Category: None
Event ID: 1000
Date:  8/2/2006
Time:  12:28:55 PM
User: N/A
Faulting application wspsrv.exe, version 4.0.2165.610, stamp 442d48f1,
faulting module w3filter.dll, version 4.0.2165.610, stamp 442d48dd,
debug? 0, fault address 00094cff.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
In the system event log we had:
Event Type: Error
Event Source: Service Control Manager
Event Category: None
Event ID: 7034
Date: 8/2/2006
Time: 12:29:08 PM
User: N/A
The Microsoft Firewall service terminated unexpectedly. It has done this 6 time(s).

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
With those clues, we found The Firewall service stops responding and Event IDs 14079, 1000, and 14057 are logged in the Application event log in ISA Server 2004 that suggested installing Update for HTTP issues in Internet Security and Acceleration Server 2004 Service Pack 2. We did it but it did not solve the problem either. One of the symptoms in the latter document was:
917134 The "Background Intelligent Transfer Service" option is incorrectly available for any non-Microsoft Update cache rule that you create in ISA Server 2004
I revised the cache rules that I had configured, but the only one with BITS caching enabled was the one created by the wizard; none of the rest had that option enabled. Just in case, I deleted the Microsoft Update Cache Rule that I had created some days earlier. No luck either: the service kept stopping. Then I had another idea: since all the clues pointed towards the cache... why not disable the cache completely, delete the c:\urlcache\Dir1.cdat file manually, and then re-enable the cache? If the problem was the data ISA had already saved in the cache, that would be the only way to get rid of it. The idea was quite simple and seemed risk-free. We had not read a word about this home-made procedure but, after doing it, the Firewall service restarted without any problem. Summing up:
  1. Install ISA Server 2004 SP2.
  2. Install KB897716.
  3. Install KB916106.
  4. Do NOT add any cache rule with BITS Caching enabled (just in case).
If you run into the same problems as I did:
  1. Remove any of the rules that had BITS Caching enabled.
  2. Disable caching for all drives.
  3. Manually delete the cache files.
  4. Re-enable the caching for the drives you had previously configured.


Distribution agents & network efficiency for transactional replication with immediate updates

Scenario: we have a VPN set up over two xDSL lines between our main office and a branch site. There is a SQL Server at each side, with transactional replication with immediate updates. As you might imagine, bandwidth is critical (it is not a T1 line; we have only 300 kbps maximum) and it must also be used for other purposes (file transfer/replication using Windows DFS, web browsing, email, etc.). We have a database of ~150 tables divided into 28 publications; hence we have 28 distribution agents running against the same subscriber. We have also set up network bandwidth monitoring software (http://www.cacti.net/) to be able to check the status of the physical lines and the VPN, and to see their evolution over time, whether there are bottlenecks, and so on.

Problem description: We have been using SQL Server 2000 for quite a long time, both at the publisher and at the subscriber. One month ago, more or less, we installed SQL Server 2005 at the main office as the new publisher (we are migrating step by step; the subscriber will go next), and the network traffic has shot up.

We are now permanently consuming all 300 kbps of upload (from main office to branch site), and 95 kbps of download (from branch to main office). We have confirmed that all that traffic is caused by SQL Server, and not by other kinds of traffic.

Furthermore, after some tests, we traced it down and concluded that the Distribution Agent is to blame. The default agent profile for distribution agents has a parameter called PollingInterval, set by default to 5 seconds. We created a new user profile based on the default one and changed PollingInterval from 5 to 60 seconds.

The traffic has been reduced by a factor of roughly 12 (60/5 = 12), from 300 kbps to something near 40 kbps (being so small, those 40 kbps may also contain other kinds of traffic).

The reason the distribution agents are to blame is explained later. According to SQL Server 2005 BOL, regarding the PollingInterval parameter:

Is how often, in seconds, the distribution database is queried for replicated transactions. The default is 5 seconds.

Both the agents and the distribution database are on the same computer: the publisher at the main office. That being the case, in theory, changing the parameter from 5 to 60 seconds should only affect local traffic.

Even more: during the night, when there is no activity at either office, no matter how frequently the agent checks the distribution database, there should be no pending transactions to apply to the subscriber, and the subscriber should not be contacted.

Our experience shows that there is traffic between publisher and subscriber even when there are no transactions to apply, and that the bandwidth of that traffic depends directly on the value of the distribution agents' PollingInterval parameter.

One year of network traffic

In this one we can see how, since mid-June, when SQL Server 2005 started to act as publisher, the network traffic has been pegged at the maximum (300 kbps).

One day of network traffic

In this capture we can see the last 24 hours. The valley in traffic around 13:00h corresponds to changing all distribution agents' profiles to PollingInterval=60 and restarting the SQL Server Agent service. Half an hour later (more or less), as a check, the default agent profile (PollingInterval=5) was set again for every distribution agent and the SQL Server Agent service was restarted once more.

It is very strange that the traffic stays permanently around 300 kbps, even during the night, when there is no activity (just a few jobs that call some stored procedures to update a few administrative tables, scheduled to run at 00:00 and 5:00 am).

Steps to reproduce the problem: in order to find out what kind of traffic was being executed against the subscriber, I used Profiler at the subscriber:

  1. Create a publication with transactional replication with immediate updates.
  2. Create a subscription to that publication.
  3. Initialise the publication and wait until the subscriber gets the data.
  4. Start Profiler on the subscriber with a filter:
    HostName:        PUBLISHERNAME
    ApplicationName: PUBLISHERNAME_dbname_publicationname
You will see, every 5 seconds, an RPC:CommandCompleted event:
exec sp_executesql
  N'update MSreplication_subscriptions
    set transaction_timestamp = cast(@P1 as binary(15)) +
        cast(substring(transaction_timestamp, 16, 1) as binary(1)), "time" = @P2
    where UPPER(publisher) = UPPER(@P3) and publisher_db = @P4 and
          publication = @P5 and subscription_type = 0 and
          (substring(transaction_timestamp, 16, 1) = 0 or datalength(transaction_timestamp) < 16)',
  N'@P1 varbinary(14),@P2 datetime,@P3 nvarchar(5),@P4 nvarchar(5),@P5 nvarchar(7)',
  'Jul 24 2006 9:03:59:000AM',

Since our database has ~150 tables in 28 publications (agents), if we use the default profile (5 seconds), a simple calculation gives: 60*28/5 = 336 RPC calls per minute to the subscriber, 24/7, without any legitimate user transaction (only this unjustified traffic). This consumes all our upload bandwidth at the main office (300 kbps at the publisher). If there is nothing to replicate, I thought the subscriber should not be contacted (at least not as frequently as every 5 seconds), because that goes against network efficiency in slow-link configurations.
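The arithmetic above is easy to reproduce; here is a quick sketch (a hypothetical helper, just to make the figures checkable):

```python
def rpc_calls_per_minute(agents: int, polling_interval_seconds: int) -> int:
    """Each distribution agent issues one RPC against the subscriber per
    PollingInterval, around the clock, even with nothing to replicate."""
    return agents * 60 // polling_interval_seconds

print(rpc_calls_per_minute(28, 5))   # default profile: 336 calls/min
print(rpc_calls_per_minute(28, 60))  # PollingInterval=60: 28 calls/min
```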

Another possibility is using shared agents (the default for SQL Server 2000 transactional replication) vs. independent agents (the default for SQL Server 2005 transactional replication). You can read about this in BOL, Administering Replication Agents, regarding independent and shared agents:

An independent agent is an agent that services one subscription. A shared agent services multiple subscriptions; if multiple subscriptions using the same shared agent need to synchronize, by default they wait in a queue, and the shared agent services them one at a time. Latency is reduced when using independent agents because the agent is ready whenever the subscription needs to be synchronized. Merge replication always uses independent agents, and transactional replication uses independent agents by default for publications created in the New Publication Wizard (in previous versions of SQL Server, transactional replication used shared agents by default).

My feedback to Microsoft on this issue: Distribution agents & network efficiency for transactional replication with immediate updates. The issue was resolved as 'By Design' and my suggested workaround was confirmed as the solution.

Workaround/Solution: as you can guess, the solution is changing PollingInterval to a value that fits your network bandwidth and desired latency. You need to find a compromise between the two: with a higher latency, less bandwidth is used; if you need lower latency, you will consume more bandwidth. Instead of having a single new user profile for all distribution agents, you could create a set of user profiles with PollingInterval set to (for instance) 15, 60 and 300 seconds.

Publications that require real-time latency can be assigned the default profile (5 seconds); for the rest, depending on the nature of the tables included in the publication, you can assign the 15-second profile for near real time and the 300-second profile for subscriptions that are seldom updated.
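As a sketch of the tiered approach, compare the total RPC rate of keeping all 28 agents on the default profile against one illustrative split across profiles (the split below is purely hypothetical, not our actual assignment):

```python
def total_rpc_per_minute(assignment):
    """assignment maps PollingInterval (seconds) -> number of agents."""
    return sum(agents * 60 / interval for interval, agents in assignment.items())

# All 28 agents on the default 5-second profile:
print(total_rpc_per_minute({5: 28}))                 # 336.0 calls/min
# Illustrative tiered split: 4 real-time, 8 near real-time, 16 seldom updated:
print(total_rpc_per_minute({5: 4, 15: 8, 300: 16}))  # ~83.2 calls/min
```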

Keywords: SQL Server 2005, transactional replication with immediate updates, Microsoft, slow link, bandwidth, latency, traffic, distribution agent, nothing to replicate, error, workaround, resolution


Reporting Services & MS Word format (.doc) rendering.

Problem description / feature request: one of the most frequently asked questions in Reporting Services (RS) forums and newsgroups is: why does RS not include an option to render documents into MS Word (.doc) format? According to BOL, Reporting Services 2005 can render/output documents into:

  • CSV
  • Acrobat (PDF) file
  • Excel
  • Web archive
  • TIFF File
  • XML

Why did I mark XML? Because RS is not only able to output results in raw XML format, it can also apply transformations based on XSLT stylesheets! That means you can render into ANY format you want, provided you feed Reporting Services the proper transformation file. This document is based on (and extends) the idea from CarlosHM and his blog post: WordML in Reporting Services.

The solution: in order to illustrate the process, we will render a report into MS Word format.

1. Create your template using Microsoft Word 2003. You can make it as complex as you need/want. When you reach a point where a data field should be filled in, just write the name of the field surrounded by special characters (so you can find them afterwards). For instance, if there will be a field called Company that should be bold, Arial 16, left justified, etc., just write «Company» and format that piece of text as needed. Add all fixed text as usual (I assume you know how to work with Word). If you want a new page for every record returned by Reporting Services, remember to add a hard page break (CTRL+Enter) at the end of the format.

Screen capture of the template inside MS Word 2003

2. Save the document. When you have your layout ready, choose Save As..., select XML Document and name it TemplateDocument.xml. Just for your information, you can see that, despite its .xml extension, its icon is not the standard one for an XML file and it is still linked to Word. That is because inside it there is a line:

<?mso-application progid="Word.Document"?>

3. Reformat the generated .xml file. MS Word generates XML files without indenting (smaller file size), making them hardly human-readable. Since we need to modify this file manually, we will reformat and indent it to avoid mistakes. For this you might need tidy, a little command-line program to reformat HTML/XML files, or use your favourite XML editor.

tidy -utf8 -xml -i -wrap 999 -output TidyOutput1.xml TemplateDocument.xml

Note that if your input document has embedded images, using the -i option will break them. There is no problem with this, since they can be repaired later by calling tidy again without the -i parameter.
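If you do not have tidy at hand, Python's standard library can do a rough equivalent of the indenting step (a sketch only: minidom may alter whitespace in ways tidy would not, so the same "repair it later" caveat applies; the file names are just examples):

```python
import xml.dom.minidom

def indent_xml(path_in: str, path_out: str) -> None:
    """Pretty-print an XML file so it can be edited by hand."""
    dom = xml.dom.minidom.parse(path_in)
    with open(path_out, "w", encoding="utf-8") as f:
        f.write(dom.toprettyxml(indent="  "))

# e.g. indent_xml("TemplateDocument.xml", "TidyOutput1.xml")
```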

4. Modify the formatted .xml to obtain the .xsl. Now we have a human-readable .xml file. Let's use it to create the .xsl file we need to upload to RS. First, make a copy of it, rename it to .xsl and open it with Notepad:

copy TidyOutput1.xml TidyOutput1.xsl
notepad TidyOutput1.xsl

We will need to do some search-and-replace on some blocks of text. Search for the line that starts with <o:DocumentProperties> and remove everything from the top of the file down to that particular line (not included). In place of the removed text, insert this:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:o="urn:schemas-microsoft-com:office:office">
  <xsl:output method="xml" media-type="text/xml" version="1.0"
  encoding="UTF-8" standalone="yes" indent="yes"
  omit-xml-declaration="no" />
  <xsl:template match="/">
    <xsl:processing-instruction name="mso-application">progid="Word.Document"</xsl:processing-instruction>
    <w:wordDocument xmlns:w="http://schemas.microsoft.com/office/word/2003/wordml"
      w:macrosPresent="no" w:embeddedObjPresent="no" w:ocxPresent="no">
      <xsl:apply-templates />
    </w:wordDocument>
  </xsl:template>
  <xsl:template match="Report">
    <o:SmartTagType o:namespaceuri="urn:schemas-microsoft-com:office:smarttags"
    o:name="PersonName" />

Now go to the end of the file. You will find that the last tag is:

</w:wordDocument>

Delete it and insert these two lines:

</xsl:template>
</xsl:stylesheet>
Now save your document. Let's check whether it is well-formed:

tidy -utf8 -xml -i -wrap 999 -output TidyOutput2.xsl TidyOutput1.xsl

If you see No warnings or errors were found in the output, you can continue (using TidyOutput2.xsl). If any error appears, revisit the previous steps to see where it crept in. You need a valid (well-formed) XSL file in order to continue.

5. Modify the .xsl file to include the database fields. In step 1, when we created the template using MS Word, we wrote the fields as «FieldName». Now we will use those names, searching for every field you entered and replacing it with the xsl command that outputs the corresponding value from the XML returned by Reporting Services. Open TidyOutput2.xsl with Notepad and do a search and replace:

Search: «FieldName»
Replace: <xsl:value-of select="@FieldName" />

Repeat the search & replace for every field. When you think you have finished, search for « to check that you did not leave any field behind. Save as TidyOutput3.xsl. As I said before, if your original template had embedded images, reformatting with tidy's -i option broke them. Now we will run:

tidy -utf8 -xml -output TidyOutput4.xsl TidyOutput3.xsl

Without the -i option, everything will be reformatted to start at column 0 and the images (<w:binData> tags) will be repaired.
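The search-and-replace in step 5 is mechanical, so it can also be scripted. A small sketch (assuming the «FieldName» placeholder convention from step 1):

```python
import re

def replace_placeholders(wordml: str) -> str:
    """Replace every «FieldName» placeholder with the corresponding
    <xsl:value-of/> element, as described in step 5."""
    return re.sub(r"\u00ab(\w+)\u00bb", r'<xsl:value-of select="@\1" />', wordml)

print(replace_placeholders("<w:t>«Company»</w:t>"))
# -> <w:t><xsl:value-of select="@Company" /></w:t>
```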

6. Upload the .xsl file to RS. For our test we will create a new simple report, using MS Visual Studio, with the underlying query:

SELECT 'Company 1' As Company, 'Address 1' As Address
UNION ALL
SELECT 'Company 2' As Company, 'Address 2' As Address

This will return two records without involving any database table. Of course, you will need to create your own report with your own underlying query to retrieve the information you need. Then design the report as simply as you can, since it will only be used to retrieve the data; the formatting will be applied using the TidyOutput4.xsl we prepared before. Both the generated report (Report1.rdl) and TidyOutput4.xsl must be deployed to Reporting Services.

Screen capture of the html generated report inside MS Explorer

7. Test it. If you have reached this point, it is time to check whether RS can send us a Word document. Type this in your browser's address bar:


If everything has gone well, you should be prompted to open a file called Report1.xml. Click Open and see the results. MS Word should open and you should see something like your template, but without the data. The reason is that we forgot to add...

8. <xsl:for-each> Most reports have a master/details structure. Think of an invoice: a master part with the customer, invoice number, address, etc., and a details part with the list of items. In our Report1.rdl we used a table (named table1) and we need <xsl:for-each> to iterate over every record in it. Open TidyOutput3.xsl (indented) and save it as TidyOutput5.xsl. Now search for the first occurrence of your first field (@Company), then move the cursor a few lines up until you reach the opening <w:p ...> (Word paragraph) in which your field is placed. Right between the previous closing paragraph tag (</w:p>) and the opening tag you have found, insert the following:

<xsl:for-each select="table1/Detail_Collection/Detail">

We use table1 because that is the name of the table in Report1.rdl. Now we need to find the place for the closing tag:

</xsl:for-each>
In our example, we have placed it right before the <w:sectPr...> tag. The correct place in your case will depend on your particular layout. To find the right spots for the opening and closing xsl:for-each tags more easily, it might be useful to use placeholders (special characters), as we did with the field names during layout preparation. Since TidyOutput5.xsl is an indented version (images are broken), we generate the non-indented version of it:

tidy -utf8 -xml -output TidyOutput6.xsl TidyOutput5.xsl

and upload TidyOutput6.xsl to Reporting Services.

9. Production. Now the report is in place and the correct .xsl is deployed to the Reporting Services server. You can retrieve the document using:


Screen capture of the resulting Word XML report inside MS Word 2003

Note that the final document is not a Microsoft Word document file (.doc), but an XML file containing information to be interpreted by Microsoft Word 2003 and displayed as if it were a DOC file. If you really need a .doc file, you can (once it is open inside Word) choose Save As... and select the .doc format. This can also be automated with VBScript:

Const wdFormatDocument = 0  ' Word's .doc format constant (not predefined outside VBA)
Dim oApp
Set oApp = CreateObject("Word.Application")
oApp.WindowState = 1  ' 0=wdWindowStateNormal; 1=wdWindowStateMaximize; 2=wdWindowStateMinimize
oApp.Visible = False
oApp.Documents.Open XMLFile ' XMLFile: string with the full path to the rendered .xml report
oApp.ActiveDocument.SaveAs DOCFile, wdFormatDocument ' DOCFile: string with the path for the .doc file
oApp.Visible = True ' Now the currently opened file is a DOC file

Download: You can download all the sample files together with the resulting report from here.

Keywords: reporting services, rs, render, export, generate, word, ms word, word document, word file, doc, xml, xsl, templates