Friday, March 30, 2012

Is Version 3.1 really 3.0.5?

I just went through the upgrade to the Compact Edition, and all the documentation talks of Compact Edition 3.1, but when I look at the DLLs they are all dated 2007 and versioned 3.0.5x.

Just wanted to make sure I did get everything updated correctly because I didn't get any new directories under my VS smart devices directory either.

Thanks.

It’s happening for me also; I have installed version 3.1 of the following:

1. Microsoft SQL Server 2005 Compact Edition 3.1

2. Microsoft SQL Server 2005 Compact Edition Developer Software Development Kit 3.1

3. Microsoft SQL Server 2005 Compact Edition Tools for Visual Studio 2005 Service Pack 1 3.1

But when I look at the connection properties in SQL Server 2005 Management Studio, the version of the database shows up as 3.0.xxx.

Please let me know if anybody knows what is wrong here.

|||

The product name is version 3.1, but the DLL files are all version 3.0.5300.0; previous versions (SQL Mobile 2005) were 3.0.52xx.x.

So yes, version 3.1 is really build 3.0.5300.0.

Is VB.NET required on the rendering server?

I have VB.net, SQL Server 2000, and SRS installed on my development computer.
I can create and test reports just fine on my computer.
However, I have to go through a third party for them to install SQL SRS on
our web server. They installed SQL SRS with a warning about VB.net not being
present. They are unable to render any reports (sample or otherwise).
Is VB.net required on the web server running SQL Server 2000 development and
SQL SRS development?
Are there step-by-step instructions for this type of installation that I can
provide to our web server team?
Thank you!

VS (the warning specifies Visual Studio, not VB.NET) is just a warning that
it will not be installing the development tools. The development tools
require some edition of Visual Studio (VB.NET being the most common one that
people install). The server does not require it. If you install and accept
the defaults, in most cases you should be fine. I have found that my install does
better when I install running as System, not as a network account. The most
common reasons people have trouble with an install are installing on a domain
controller or moving the webs created after install. RS is a set of
applications that depend on configuration information about where things are
installed. If the webs are not on the default website, you can end up in a
situation where RS cannot find things. They are not just web pages that you can
move without a problem. Another issue is enabling anonymous access (things
would still come up; you would just not have any administrative
functionality).
I suggest seeing if they have done any of the above.
I suggest seeing if they have done any of the above.
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"Owen Ott" <Owen Ott@.discussions.microsoft.com> wrote in message
news:D01CB0F4-91BB-4012-9928-B2C1B93B8F7E@.microsoft.com...
>I have VB.net, SQL Server 2000, and SRS installed on my development
>computer.
> I can create and test reports just fine on my computer.
> However, I have to go through a third party for them to install SQL SRS on
> our web server. They installed SQL SRS with a warning about VB.net not
> being
> present. They are unable to render any reports (sample or otherwise).
> Is VB.net required on the web server running SQL Server 2000 development
> and
> SQL SRS development?
> Are there step-by-step instructions for this type of installation that I
> can
> provide to our web server team?
> Thank you!

Is VB.NET 2003 compatible with SQL Server Express?

I have VB.NET 2003 Standard installed on my XP Home Edition computer. I downloaded SQL Server Express (which works) and tried to establish a connection through VB.NET but could not. I used the server name from SQL Server Express in the Data Link properties of VB.NET 2003 and used the Northwind database, but it would not connect. It gave an error I can't recall because VB.NET hangs up. Are the two compatible? If so, how do I go about establishing a connection between the two? If they are not compatible, what will I need? An upgrade to VS 2005? Any help will be greatly appreciated.

Thanks,

Andy

They are compatible. What error did you receive?

|||

".....unable to connect and that
this version of Visual Studio can only use the SQL Server Desktop Edition and
MS Access as data sources....".

This is the error I receive.

Thanks,

Andy

|||

Sorry, what I meant by compatible is that in code you are able to connect to SQL Server Express.

It looks like via the database tools in Visual Studio Standard, it isn't compatible.

Is the VARCHAR data type the same as UTF-8?

Hello
Are data fields of varchar type internally encoded as UTF-8?
How is Cyrillic text stored in varchar data fields, as UTF-8 or not?

No. Char, varchar and text use a code-page representation.
If you don't want a lot of problems with Cyrillic and other things
like the Euro symbol, you should use nchar, nvarchar and ntext instead.
See:
http://msdn.microsoft.com/library/d...ataencoding.asp
Sylvain Lafontaine, ing.
MVP - Technologies Virtual-PC
E-mail: http://cerbermail.com/?QugbLEWINF
<s_alexander04@.list.ru> wrote in message
news:1143694534.065217.68140@.i40g2000cwc.googlegroups.com...
> Hello
> Is data fields of varchar type internally encoded as UTF-8?
> How is cyrillic text stored in varchar data fileds, as UTF-8 or not?
>
|||
You can check the Windows region, the SQL Server settings, and the database and column
collation.
To support globalization, use a Unicode data type such as
nchar, nvarchar or ntext.
"s_alexander04@.list.ru" wrote:

> Hello
> Is data fields of varchar type internally encoded as UTF-8?
> How is cyrillic text stored in varchar data fileds, as UTF-8 or not?
>
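To make the distinction concrete, here is a hedged sketch (the temp table and column names are made up): varchar text is stored in the column collation's code page, while nvarchar is always Unicode, so the nvarchar column and the N'' literal preserve Cyrillic regardless of collation.

```sql
-- Hypothetical temp table; column names are illustrative
CREATE TABLE #Demo (
    CpText  varchar(50),   -- stored in the collation's code page; Cyrillic may be lost
    UniText nvarchar(50)   -- stored as Unicode (UCS-2); Cyrillic is preserved
);
-- Note the N prefix: without it the literal is first converted through the code page
INSERT INTO #Demo (CpText, UniText) VALUES ('Привет', N'Привет');
SELECT CpText, UniText FROM #Demo;  -- CpText may come back as '??????' on a non-Cyrillic code page
DROP TABLE #Demo;
```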

Is using varchar() as index a bad idea?

Hi,
I have an application whose data is stored in some other db and I am converting
to SQL. The existing app has a table, Contractors, with a field ContractorCode
char(10) as an index to the table. For some reason, I am against using
recognizable data as an index and instead opt for a GUID. Is having a char or varchar
field as an index slower than a GUID field index?
Thanks

Opa
I personally try to avoid creating an index on a GUID (I try to keep my index
as small as possible). Having an index on VARCHAR/CHAR depends on your
specific requirements. Look at the execution plan of the query, see whether the optimizer
used the index, and then make a decision.
"Opa" <Opa@.discussions.microsoft.com> wrote in message
news:694D881A-1A70-4233-AC08-7392F3816A19@.microsoft.com...
> Hi ,
> I have an application whose data is stored in some other db and I am
> converting
> to SQL. The existing app has a table, Contractors with a field
> ContractorCode
> char(10) as an index to the table. For some reason, I am against using
> recognizable data as an index an instead opt for GUID. I s having a char
> or
> varchar
> field as an index slower than a GUID field index?
>
> Thanks
|||
There is a lot of debate on using GUIDs as primary keys. Here is an example:
http://www.sql-server-performance.c...erformance.asp. There is no
problem in using CHAR(10) as your index as long as the column helps you
identify the entity uniquely.
--
HTH,
SriSamp
Email: srisamp@.gmail.com
Blog: http://blogs.sqlxml.org/srinivassampath
URL: http://www32.brinkster.com/srisamp
"Opa" <Opa@.discussions.microsoft.com> wrote in message
news:694D881A-1A70-4233-AC08-7392F3816A19@.microsoft.com...
> Hi ,
> I have an application whose data is stored in some other db and I am
> converting
> to SQL. The existing app has a table, Contractors with a field
> ContractorCode
> char(10) as an index to the table. For some reason, I am against using
> recognizable data as an index an instead opt for GUID. I s having a char
> or
> varchar
> field as an index slower than a GUID field index?
>
> Thanks
|||
An index on a GUID will take up more storage, therefore a bigger index, therefore
more I/O. This would be pronounced on a clustered index.
The Contractors table with ContractorCode: what is the variety of data
within the column? And what types of searches (if any) will be committed
on that column?
Jack Vamvas
________________________________________
__________________________
Receive free SQL tips - register at www.ciquery.com/sqlserver.htm
"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uS4EcQjMGHA.2040@.TK2MSFTNGP14.phx.gbl...
> Opa
> I personally try to avoid creating an index on GUID . ( I try keep my
index
> as small as possible) Having index on VARCHAR/CHAR depends on your
> specific requirements . See an execution plan of the query, is an
optimizer
> used the index and then make a decision?
>
> "Opa" <Opa@.discussions.microsoft.com> wrote in message
> news:694D881A-1A70-4233-AC08-7392F3816A19@.microsoft.com...
char
>|||Opa (Opa@.discussions.microsoft.com) writes:
> I have an application whose data is stored in some other db and I am
> converting to SQL. The existing app has a table, Contractors with a
> field ContractorCode char(10) as an index to the table. For some
> reason, I am against using recognizable data as an index an instead opt
> for GUID. I s having a char or varchar field as an index slower than a
> GUID field index?
The last question is not really meaningful, because there are always a lot
of "it depends".
But generally, as a guid is 16 bytes, it's longer than the code, and the
longer the field, the less keys you get on a page, and the bigger the
index gets, and the more pages to read.
I would suspect, though, that the Contractors table is not of a size
where this is much of an issue. Then again, the code may be used as an
FK in a larger table where it may matter.
Personally, I rather use the code as key, than GUID which is much more
difficult to manage. If you want artificial keys, integer is probably
better.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
|||
OK, so if I have varchar(16) vs GUID, will this index field always perform better
than or equal to a GUID in SELECTs, INSERTs, JOINs etc.?
BTW, I disagree with your comment stating the column should help identify
the entity uniquely in a visual way.
"SriSamp" wrote:

> There is a lot of debate on using GUIDs as primary keys. Here is an example:
> http://www.sql-server-performance.c...erformance.asp. There is no
> problem in using CHAR(10) as your index as long as the column helps you in
> identifying the entity uniquely.
> --
> HTH,
> SriSamp
> Email: srisamp@.gmail.com
> Blog: http://blogs.sqlxml.org/srinivassampath
> URL: http://www32.brinkster.com/srisamp
> "Opa" <Opa@.discussions.microsoft.com> wrote in message
> news:694D881A-1A70-4233-AC08-7392F3816A19@.microsoft.com...
>
>|||The performance aspect is something that you need to check for. Regarding my
other comment, what I meant was, if the CHAR(10) field is the "natural" key
for your table, you should go ahead and use it, rather than defining another
key.
--
HTH,
SriSamp
Email: srisamp@.gmail.com
Blog: http://blogs.sqlxml.org/srinivassampath
URL: http://www32.brinkster.com/srisamp
"Opa" <Opa@.discussions.microsoft.com> wrote in message
news:824B3723-E44D-410F-930F-58A4F31F5DB1@.microsoft.com...
> Ok so if I have varchar(16) vs GUID will this index field always perform
> better
> or equal to GUID is SELECTS, INSERTS, JOINS etc?
> BTW, I disagree with your comment stating the column should help identify
> the entity uniquely in a visual way.
>
> "SriSamp" wrote:
>|||Have a look at this excellent article from Kimberly L Tripp
http://www.sqlskills.com/blogs/kimb...f-a290db645159
Regards
Roji. P. Thomas
http://toponewithties.blogspot.com
"Opa" <Opa@.discussions.microsoft.com> wrote in message
news:694D881A-1A70-4233-AC08-7392F3816A19@.microsoft.com...
> Hi ,
> I have an application whose data is stored in some other db and I am
> converting
> to SQL. The existing app has a table, Contractors with a field
> ContractorCode
> char(10) as an index to the table. For some reason, I am against using
> recognizable data as an index an instead opt for GUID. I s having a char
> or
> varchar
> field as an index slower than a GUID field index?
>
> Thanks
|||
You, sir, are a moron. I usually don't say that about people, but I'll
make an exception in your case.
Stu
|||
When you use "index", I'm assuming you really mean to say "primary key". A
primary key is a type of index, but an index is not a primary key.
I don't believe there is any basic performance benefit to using integer
based keys over char based keys. As far as SQL Server is concerned, a key is
just a block of bytes, and the smaller the better. A GUID is typically
displayed as a string of text, but is stored internally as a 16 byte
integer. Consider instead an int (4 bytes) or smallint (2 bytes) identity.
The only use I can see for using GUID (globally unique) keys is if the
Contractor data is to be merged with Contractor data from another system and
you want to retain the same surrogate key values. However, this could be
better implemented by adding a SystemCode to the Contractors table and have
SystemCode + ContractorCode as the primary key.
Perhaps keeping ContractorCode as the primary key would be the best solution
unless there is a strong and compelling reason to implement a surrogate key.
"Opa" <Opa@.discussions.microsoft.com> wrote in message
news:694D881A-1A70-4233-AC08-7392F3816A19@.microsoft.com...
> Hi ,
> I have an application whose data is stored in some other db and I am
> converting
> to SQL. The existing app has a table, Contractors with a field
> ContractorCode
> char(10) as an index to the table. For some reason, I am against using
> recognizable data as an index an instead opt for GUID. I s having a char
> or
> varchar
> field as an index slower than a GUID field index?
>
> Thanks
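As a rough sketch of the size trade-off discussed above (table and column names are illustrative), compare three candidate key shapes: a char(10) key costs 10 bytes per index entry, a uniqueidentifier 16, and an int 4, which is what drives keys-per-page and overall index size.

```sql
-- Illustrative only: three candidate key shapes for the same data
CREATE TABLE ContractorsChar (
    ContractorCode char(10) NOT NULL PRIMARY KEY       -- 10 bytes, natural key
);
CREATE TABLE ContractorsGuid (
    ContractorID uniqueidentifier NOT NULL
        DEFAULT NEWID() PRIMARY KEY,                   -- 16 bytes, random insert order
    ContractorCode char(10) NOT NULL UNIQUE
);
CREATE TABLE ContractorsInt (
    ContractorID int IDENTITY(1,1) PRIMARY KEY,        -- 4 bytes, smallest surrogate
    ContractorCode char(10) NOT NULL UNIQUE
);
```

Note that the wider key is also repeated in every nonclustered index and every foreign key that references it, which is where the difference tends to matter most.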

Is using SQL security a big security hole?

If a user of my application is connecting to their sql server using
integrated security doesn't this allow them to fire up something like QA and
do things to the database that they shouldn't. I know I should have
everything secure even if the user is connected that way but assuming at
least some security functionality is implemented in the app wouldn't they be
able to get around at least what is implemented in the app?
Cheers,
Michael

Correct. They can log in using their Windows account via QA and do whatever
that login has permissions to do. Consider having your app use application roles.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Michael C" <mculley@.NOSPAMoptushome.com.au> wrote in message
news:OQAf%23ZerFHA.1168@.TK2MSFTNGP11.phx.gbl...
> If a user of my application is connecting to their sql server using integrated
> security doesn't this allow them to fire up something like QA and do things
> to the database that they shouldn't. I know I should have everything secure
> even if the user is connected that way but assuming at least some security
> functionality is implemented in the app wouldn't they be able to get around
> at least what is implemented in the app?
> Cheers,
> Michael
>
|||
Hello,
Yes. But it doesn't matter whether they connected via Integrated Security or SQL
security; if the database isn't secure then they will be able to change
what they want.
You can look at Application Roles, or use SPs for all
your interactions with the data. You should be using SPs anyway; what you may
not be doing is removing the access rights from the tables.
kind regards
Greg O
Need to document your databases. Use the first and still the best AGS SQL
Scribe
http://www.ag-software.com
"Michael C" <mculley@.NOSPAMoptushome.com.au> wrote in message
news:OQAf%23ZerFHA.1168@.TK2MSFTNGP11.phx.gbl...
> If a user of my application is connecting to their sql server using
> integrated security doesn't this allow them to fire up something like QA
> and do things to the database that they shouldn't. I know I should have
> everything secure even if the user is connected that way but assuming at
> least some security functionality is implemented in the app wouldn't they
> be able to get around at least what is implemented in the app?
> Cheers,
> Michael
>
|||
As pointed out, it doesn't matter HOW they get in. Once in, it's up to your
SQL Server security to keep them from doing things you don't want them to.
They could just as easily use an application log in/password to connect with
QA (assuming they could see it).
|||
"Todd Beaulieu" <nothanks@.enoughspam.bad> wrote in message
news:ObMbQmirFHA.2876@.TK2MSFTNGP12.phx.gbl...
> As pointed out, it doesn't matter HOW they get in. Once in, it's up to
> your
> SQL Server security to keep them from doing things you don't want them to.
> They could just as easily use an application log in/password to connect
> with
> QA (assuming they could see it).
Oops, I forgot I asked this question. :-) The thing is, if they log in using
SQL security they won't necessarily have the password, because it might be
stored by the program they are using and entered by an administrator. So if
users are using integrated security you need to make your sql database
bulletproof. You could use stored procs and security to control it but that
means all of your security has to implemented in the database and none in
your app, which would be fairly hard to achieve.
I'll look into application roles as suggested to see if this gives me what I
want.
Regards,
Michael
|||
> You could use stored procs and security to control it but that
> means all of your security has to implemented in the database and none in
> your app, which would be fairly hard to achieve.
Yes, that's a design that's commonly used. Deny users permissions on
tables and other objects. I don't know why you say that's hard to
achieve.
Other alternatives are application security or an N-tier architecture
where the users only have access to the middle tier. In those scenarios
you should still generally use stored procedures for all data access -
procs have plenty of advantages other than security.
David Portas
SQL Server MVP
--
|||
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
> Yes, that's a design that's commonly used. Deny users permissions on
> tables and other objects. I don't know why you say that's hard to
> achieve.
I mean it's hard to achieve fully. Sure if I want to stop someone creating a
customer without a surname that's easy to do but I could have some
extrememly complicated restrictions on what data could be entered and in
what way, these restrictions would be impossible or very difficult to
implement within the database. In all of those cases connecting direct to
the database would give the user greater access. I just can't see any app
having every single restriction implemented in the database. Surely there'd
be hundreds of restrictions placed within the source code outside the
database.

> Other alternatives are application security or an N-tier architecture
> where the users only have access to the middle tier. In those scenarios
> you should still generally use stored procedures for all data access -
> procs have plenty of advantages other than security.
True, although not likely to happen unless my app gets a full rewrite.
Michael
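A minimal sketch of the application-role approach suggested above, for SQL Server 2000 (role name, password and table name are made up): the application, not the user, activates the role after connecting, and the connection then carries only the role's permissions, regardless of what the Windows login could otherwise do in QA.

```sql
-- One-time setup in the database (illustrative names):
EXEC sp_addapprole @rolename = 'MyAppRole', @password = 'StrongPwd!1';
GRANT SELECT, INSERT, UPDATE ON dbo.Customers TO MyAppRole;
-- ...and deny/revoke direct table access from ordinary users.

-- At run time, the application activates the role right after connecting;
-- a user firing up QA with the same Windows login never sees these rights:
EXEC sp_setapprole @rolename = 'MyAppRole', @password = 'StrongPwd!1';
```

The catch is that the password has to be embedded in (or retrievable by) the application, so it should be protected accordingly.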

Is using Max in varchar a bad thing?

I just had a situation where I was creating a temp table in a stored proc. I
create the temp table and then do a select into it from a couple of other
tables. Problem is, we had changed the size of one of the field in the table
from varchar(250) to varchar(500). But the temp table didn't reflect that so
data was being truncated.
So the easiest way around this would be to make the field varchar(max)
instead of 500 so if and when that field grows later, I don't get an error
in the stored proc.
Question is - does using varchar(max) have any repercussions? Could I just
use it everywhere and then not have to worry about this problem?
TIA - Jeff.
> So the easiest way around this would be to make the field varchar(max)
> instead of 500 so if and when that field grows later, I don't get an error
> in the stored proc.
Or maybe you can avoid using a temp table for this data?
Or maybe if you change the data type of the base column (which shouldn't
happen very often) you also change the other places it is referenced (e.g.
SP params, variable declarations, etc.)?

> Question is - does using varchar(max) have any repurcusions? Could I just
> use it everywhere and then not have to worry about this problem?
This is like trading in a sub-compact for a minivan because you don't like
the way the tennis racket fits on the seat.
I would strongly recommend only using MAX when you absolutely need to have >
4000 or 8000 characters. In this case, if you changed from 250 to 500 now,
you will probably change it again. I would say envision the largest # of
characters that column will ever need to hold, then double it, and fix it
everywhere once. This drastically reduces the likelihood you will have to
worry about it again.
|||> As an alternative, why not create your temp table from the underlying
> source columns - that way your temp table columns will always match
> the source columns.
That's a possible solution, and I don't know the op's requirements, but I
often opt for CREATE TABLE and then INSERT INTO, in case I need to have
additional columns, or in case I want to define indexes/keys/constraints
etc. BEFORE all of the data is in the destination table...
A
|||I would create the table that way but I actually am doing a couple of
different selects putting data in the table so I need to do insert intos.
<jhofmeyr@.googlemail.com> wrote in message
news:861f7e9e-1970-4538-99c8-e599c1847a0b@.f10g2000hsf.googlegroups.com...
> Hi Jeff,
> I would avoid using varchar(max) unless you actually need to store
> very long values (> 8000 chars)
> As an alternative, why not create your temp table from the underlying
> source columns - that way your temp table columns will always match
> the source columns.
> You can do this very easily by creating the table like this:
> SELECT <col list>
> INTO #tmp_table
> FROM <table list>
> WHERE 1 = 0
> Good luck!
> J
|||>I would create the table that way but I actually am doing a couple of
>different selects putting data in the table so I need to do insert intos.
But the very first one could be a select into, no? I assume that the source
table that drives that column would have the same data type as the column
that fills that data if you have source data from other tables, so all the
tables should be updated if you increase the size again...
A
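The CREATE TABLE / INSERT INTO pattern discussed above can be sketched like this (table and column names are illustrative); option A inherits the column definitions from the source so the temp table can never fall out of sync, while option B lets you add keys and indexes up front at the cost of maintaining the sizes by hand:

```sql
-- Option A: inherit column definitions from the source table (no rows copied)
SELECT Name, Description
INTO #Work
FROM dbo.SourceTable
WHERE 1 = 0;

INSERT INTO #Work (Name, Description)
SELECT Name, Description FROM dbo.SourceTable;

-- Option B: explicit CREATE TABLE so constraints/indexes exist before loading
-- (the varchar sizes must then be kept in sync with the base table by hand)
CREATE TABLE #Work2 (
    Name        varchar(100) NOT NULL,
    Description varchar(500) NOT NULL  -- widen here whenever the base column widens
);
INSERT INTO #Work2 (Name, Description)
SELECT Name, Description FROM dbo.SourceTable;
```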


Is using linked server with MSDE the way to go for this problem?

We have a problem and I'm not sure of the best way to solve it. Our situation:
We have an ASP.NET application which uses a full SQL Server database.
We want to add the capability for salespeople to initially install (in a
separate
procedure) our app locally on a tablet PC with a blank MSDE
database. We'll figure this part out.
Then, periodically, when they are able to access our
'master' ASP.NET app from the road, we would like for them to be able
to download selected rows from a number of tables from the
central SQL db to their local MSDE. They would make changes locally
(offline),
then reconnect later on and upload the data back to the central db.
I am thinking that creating a linked server temporarily on the master
db (via a call to sp_addlinkedserver) to transfer this data back and forth
to the MSDE db may be the way to go, but haven't found any specific info/case
studies/etc. that supports that. In addition, I created an MSDE database on
a tablet PC which is hooked to our network, but haven't found a way to
successfully 'broadcast' its presence to the SQL Server db so that I can
create a linked-server definition via Enterprise Manager.
Does anyone have any thoughts either about how to get this approach to work,
or a better way to solve this problem?
Thanks in advance for your help!
hi Rick,
"Rick Pinson" <rickpinson@.yahoo.com(donotspam)> wrote in
message news:69A042D0-CEF0-4AC8-A210-7BC29A6CD5E2@.microsoft.com
> We have a problem and I'm not sure of the best way to solve it. Our
> situation:
> We have an ASP.NET application which uses a full SQL Server database.
> We want to add the capability for salespeople to initially install
> (in a separate
> procedure) our app locally on a tablet PC with a blank MSDE
> database. We'll figure this part out.
> Then, periodically, when they are able to access our
> 'master' ASP.NET app from the road, we would like for them to be able
> to download selected rows from a number of tables from the
> central SQL db to their local MSDE. They would make changes locally
> (offline),
> then reconnect later on and upload the data back to the central db.
> I am thinking that creating a linked server temporarily on the master
> db (via a call to sp_addlinkedserver) to transfer this data back and
> forth
> to the MSDE db may be the way to go, but haven't found any specific
> info/case studies/etc. that supports that.
this seems like you want to perform a push operation, from the "master" to
all "remote" MSDE instances...
I'd go the opposite way... when the remote MSDE instance is able to connect
to the "master", add a linked server from remote to master, perform the
operations you need, then drop the linked server...

>In addition, I created an
> MSDE database on
> a tablet PC which is hooked to our network, but haven't found a way to
> successfully 'broadcast' its presence to the SQL Server db so that I
> can create a linked-server definition via Enterprise Manager.
the broadcast mechanism that propagates server presence over the LAN is not
always guaranteed, as a lot of stuff is involved...
for instance: have a look at ListAvailableServer exposed by SQL-DMO:
ListAvailableServer uses ODBC function SQLBrowseConnect() provided by ODBC
libraries installed by Mdac;
this mechanism works via broadcast calls, whose results are never
conclusive or consistent, because they are influenced by the various
servers' answer states, answer times, etc.
Until Mdac 2.5, SQLBrowseConnect function works based on a NetBIOS
broadcast, on which SQL Servers respond (Default protocol for SQL Server
7.0), while in SQL Server 2000 the rules changed, because the default client
protocol changed to TCP/IP and now a UDP broadcast is used, beside a NetBIOS
broadcast, listening on port 1434:
This uses a UDP broadcast on port 1434; if instances do not listen, or do
not respond in time, they will not be part of the enumeration.
Some basic rules for 7.0 are:
- SQL Servers have to be running on Windows NT or Windows 2000 and have to
listen on Named Pipes, that is why in 7.0 Windows 9x SQL Servers will never
show up, because they do not listen on Named Pipes.
- The SQL Server has to be running in order to respond on the broadcast.
There is a gray window of 15 minutes after shutdown, where a browse master
in the domain may respond on the broadcast and answer.
- If you have routers in your network, that do not pass on NetBIOS
broadcasts, this might limit your scope of the broadcast.
- Only servers within the same NT domain (or trust) will get enumerated.
In SQL Server 2000 using MDAC 2.6 this changes a little, because now the
default protocol has been changed to be TCP/IP sockets and instead of a
NetBIOS broadcast, they use a UDP broadcast to detect the servers. The same logic
still applies roughly.
- SQL Server that are running
- SQL Server that listening on TCP/IP
- Running on Windows NT or Windows 2000 or Windows 9x
- If you use routers and these are configured not to pass UDP broadcasts,
only machines within the same subnet show up.
Upgrading to Service Pack 2 of SQL Server 2000 is required in order for the
.ListAvailableServer method to work properly, because preceding releases of
the SQL-DMO components of SQL Server 2000 have a bug in this area.
Service Pack 3a introduced some new measures to prevent MSDE 2000 from
being hit by Internet worms like Slammer/Sapphire and to increase security:
Microsoft decided to disable the Super Sockets network protocols by default
on new MSDE 2000 installations.
Instances of SQL Server 2000 SP3a or MSDE 2000 SP3a will stop listening on
UDP port 1434 when they are configured not to listen on any network
protocol, which will keep those servers from being enumerated.
Of course, I'd suggest you test your network protocol settings...
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtm
http://italy.mvps.org
DbaMgr2k ver 0.9.1 - DbaMgr ver 0.55.1
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply

Is using Enterprise Manager across the internet secure?

If I am using Enterprise Manager across the internet to access a SQL Server,
is it secure? Can I make database changes, password changes, and view
confidential table data securely? Or do I need to use another mechanism.
Enterprise Manager is just an application; it sounds like you are worried
about the data transferred over the wire. If so, you should look into SSL
encryption.
276553 HOW TO: Enable SSL Encryption for SQL Server 2000 with Certificate
Server http://support.microsoft.com/?id=276553
316898 HOW TO: Enable SSL Encryption for SQL Server 2000 with Microsoft...
http://support.microsoft.com/?id=316898
324777 WebCast: Microsoft SQL Server 2000: How to Configure SSL Encryption
http://support.microsoft.com/?id=324777
318605 INF: How SQL Server Uses a Certificate When the Force Protocol
http://support.microsoft.com/?id=318605
325757 INF: Using SQL Server 2000 with FIPS 140-1 Ciphers
http://support.microsoft.com/?id=325757
302409 FIX: Unable to Connect to SQL Server 2000 When Certificate
Authority... http://support.microsoft.com/?id=302409
Cindy Gross, MCDBA, MCSE
http://cindygross.tripod.com
This posting is provided "AS IS" with no warranties, and confers no rights.

Is using a SP return value bad technique?

I need to write a query in an SP that returns either yes or no. Is it bad technique to use the return value of the SP for this? Should I use a scalar or output parameter instead? If so which?
Thanks,
Steve
Using a return value to return simple integer scalar values is *THE* way to do what you want.

Returning a scalar value using a recordset with just one row and one column is too expensive.

You cannot return "Yes" or "No" with return values anyway, just integers are allowed. If you need to return "yes" or "no" in string format use output values.|||My fault. That was a complete lapse of brainpower on my part. What you described is exactly what I meant to say (either a 1 or 0 for true or false). Oh well. That's what I get for working on a Saturday.

Thanks,
Steve|||Using a return value to return simple integer scalar values is *THE* way to do what you want.

Absolutely not.

Use an output variable and leave the return code alone...

Even if you specify

Return -1

For example, SQL Server in some cases can and will override the value...

So if you code for it, it could be a problem.|||Brett, I've never had any problem using return values. Even BOL doesn't mention it. That would be awful! :)

Anyway, what I wanted to point out is that returning a scalar value in a recordset is a bad idea. Some more info here:

http://www.sqlteam.com/item.asp?ItemID=2644|||Yeah, I remember Bill's article.

But it was after a long thread that I think Arnold or Nigel identified/explained the problem.

I then went on and posted an example of where the return value was overridden, making an output variable the only safe way.

I should blog that one...|||Well, surely it'll be an interesting read. Please do it.

Btw this "feature" seems to be more a bug than anything else...isn't it?|||Here's the thread...

http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=35642
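To make the two approaches discussed above concrete, here is a minimal T-SQL sketch; the table, column, and procedure names are hypothetical:

```sql
-- Option 1: RETURN code (integers only; as noted above, SQL Server
-- can override the return value in some error cases)
CREATE PROCEDURE dbo.IsCustomerActive_Return
    @CustomerId int
AS
BEGIN
    IF EXISTS (SELECT * FROM dbo.Customers
               WHERE CustomerId = @CustomerId AND Active = 1)
        RETURN 1   -- "yes"
    RETURN 0       -- "no"
END
GO

-- Option 2: OUTPUT parameter, leaving the return code for status only
CREATE PROCEDURE dbo.IsCustomerActive_Output
    @CustomerId int,
    @IsActive   bit OUTPUT
AS
BEGIN
    SET @IsActive = CASE WHEN EXISTS (SELECT * FROM dbo.Customers
                                      WHERE CustomerId = @CustomerId
                                        AND Active = 1)
                         THEN 1 ELSE 0 END
END
GO

-- Calling both:
DECLARE @rc int, @active bit
EXEC @rc = dbo.IsCustomerActive_Return @CustomerId = 42
EXEC dbo.IsCustomerActive_Output @CustomerId = 42, @IsActive = @active OUTPUT
```

Either compiles on SQL Server 2000 and later; the OUTPUT-parameter form is the one that keeps working even when the engine decides the return code for you.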

Is using a named instance better over using a default instance?

If you were asked to install SQL 2005 on a machine, would you install a default instance or a named one? And why would you choose one over the other? Also, are there any issues with using a default instance?

Thank you for all your help.

Named instances allow for somewhat better server management - the names could reflect the SQL version. In an environment with lots of servers, various migration projects in play, and future upgrades, a simple way to know about the server is quite valuable.

For example:

Instance Name: LON_ACCT_Std05
Instance Name: NYC_HR_Ent05
Instance Name: DEN_INV_Exp05
Instance Name: LAX_ORD_Dev00

|||In addition, there is only one default instance on a machine. If you want to install multiple instances of SQL Server including 2000 and 2005, the rest should be named instances. Otherwise, setup will complain.|||

The other thing to think about is that the default instance name, MSSQLSERVER, is merely the default "named instance". No difference at all, but as pointed out, naming them allows you to describe them a bit.

Thanks,
Sam Lester (MSFT)
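As a quick check of which instance a connection actually landed on, SERVERPROPERTY works on both SQL Server 2000 and 2005:

```sql
-- InstanceName is NULL when connected to the default instance,
-- otherwise it is the instance name
SELECT SERVERPROPERTY('InstanceName') AS InstanceName,
       SERVERPROPERTY('ServerName')   AS ServerAndInstance
```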

Is user logged-on?

Dear Group

I wondered whether there's a function or script that will show me
whether a user is currently logged-on to a MSSQL 2000 database? I'm
using SQL Authentication.

Thanks very much for your help & efforts!
Have a nice day!

Martin
You can use sp_who or sp_who2, or if you need something from code then
a statement like this will work:

if exists (select * from master.dbo.sysprocesses
where loginame = 'MyLogin'
and dbid = db_id('MyDB'))
begin
/* Do something here */
end

Simon|||Hi Simon!

Thanks for the message. Works great!

M

is user dba

Is this statement correct :
In SQL Server 2000 the SQL that checks to see "Is User a DBA" is not valid.
One must use suser_sname instead of suser_name.
See example below:
select 1 where suser_sname()='sa'
go
And if so is there a patch available to correct this ?
Don
"don" <don@.discussions.microsoft.com> wrote in message
news:549733A3-F0B5-4CAD-819A-141BCAF05497@.microsoft.com...
> Is this statement correct :
> In SQL Server 2000 the SQL that checks to see "Is User a DBA" is not
> valid.
> One must use suser_sname instead of suser_name.
> See example below:
> select 1 where suser_sname()='sa'
> go
> And if so is there a patch available to correct this ?
>
Um.
From BOL: SUSER_NAME always returns NULL when used in Microsoft SQL
Server™ 2000. This system built-in function is included only for backward
compatibility. Use SUSER_SNAME instead.
But suser_sname() is not right either. Since many different logins may be
system administrators.
Try
select IS_SRVROLEMEMBER('sysadmin')
David
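A small sketch of David's suggestion; IS_SRVROLEMEMBER checks the current login rather than comparing names:

```sql
-- IS_SRVROLEMEMBER returns 1 (member), 0 (not a member),
-- or NULL (invalid role or login)
IF IS_SRVROLEMEMBER('sysadmin') = 1
    PRINT 'Current login is a sysadmin'
ELSE
    PRINT 'Current login is NOT a sysadmin'
```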


Is user allowed?

I thought this would be easy, but it's now seeming way more complex - so I'm
hoping someone has a shortcut.
How can you figure out if a User_Name is allowed in a specific role? OK,
sounds easy at first, but what about the roles that that role is a sub-role
of? That's where it's gotten more complex than I thought --
So, pretend that we've got MasterRole of which BabyRole is a member. MyUser
is a member of BabyRole. And since BabyRole is a member of MasterRole,
MyUser is allowed to do all the things that MasterRole can do. However if I
run sp_helplogins on MyUser, it will list BabyRole but not MasterRole. Is
there any sp or easy code that can enumerate all those roles (or even
better, is there any sp that I can call with a role name and a user and it
returns if the user is allowed in that role?)
TIA!Duhhh.. Ok, I found the command IS_MEMBER... I feel dummmbbb... Sorry to
waste the bandwidth :)
"Brett Wickard" <brettwickard@.nospam.nospam> wrote in message
news:OX7xqibfGHA.4828@.TK2MSFTNGP05.phx.gbl...
>I thought this would be easy, but it's now seeming way more complex - so
>I'm hoping someone has a shortcut.
> How can you figure out if a User_Name is allowed in a specific role? Ok,
> sounds easy at first, but what roles that that role is a sub-role of?
> That's where it's gotten more complex than I thought --
> So, pretend that we've got MasterRole of which BabyRole is a member.
> MyUser is a member of BabyRole. And since BabyRole is a member of
> MasterRole, MyUser is allowed to do all the things that MasterRole can do.
> However if I run sp_helplogins on MyUser, it will list BabyRole but not
> MasterRole. Is there any sp or easy code that can enumerate all those
> roles (or even better, is there any sp that I can call with a role name
> and a user and it returns if the user is allowed in that role?)
> TIA!
>
>
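For reference, a quick sketch of IS_MEMBER with the role names from the question:

```sql
-- IS_MEMBER checks the current user's membership in a database role
-- (or Windows group), including membership inherited through nested
-- roles such as BabyRole -> MasterRole
IF IS_MEMBER('MasterRole') = 1
    PRINT 'Current user can do everything MasterRole can do'

-- sp_helprolemember lists the direct members of a role, one level at a time
EXEC sp_helprolemember 'MasterRole'
```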

Is UPDATE worth the effort (in this case)?

Users will modify (add, delete, update) their configuration settings with the
subset of all available settings. The more I think about the effort
which will have to go into figuring out whether the row is an update or an
insert, and then (if an update) whether the column value is valid, the more
I'm leaning towards a query/delete/insert methodology. Some of the INSERTs,
of course, could have been UPDATEs, but this way I'd be free from having
the obligation to figure it out.
There is no way that I could throw set of results at the configuration table
and say "you (DB in this case) figure it out which is UPDATE and which is
INSERT", is there? Is deleting and then inserting new rows, without regard
for the fact that for example only one row out of ten or twenty would have
been updated if I went thru the trouble of identifying it, a reasonable
solution?
If you have a fairly busy system, the overhead of deleting and inserting
instead of just updating could cause some locking contention and other
activity that could hurt performance. INSERT's and DELETE's are not only
changing data on pages but perhaps allocating and deallocating pages,
causing page splits, affecting indexes, etc. -- a lot more for the server
to do.
I'd say you are better off doing an
IF EXISTS (...)
UPDATE ...
ELSE
INSERT ...
Just make sure the columns on the WHERE clause are properly indexed and your
performance will be good. When it boils down to it, the extra effort is
very little and in the long run I think you will be better off
Mike
"Art" <Art@.discussions.microsoft.com> wrote in message
news:6ACD7684-FC66-4AFA-8FC0-736E620A2B6F@.microsoft.com...
> Users will modify (add, delete, update) their configuration settings with
> the
> subset of all available settings. The more I think about about the effort
> which will have to go into figuring out whether the row is an update or
> insert, and then (if update) wheter the column value is valid the more
> I'm leaning towards query/delete/insert methodology. Some of the
> INSERTs,
> of course, could have been an UPDATEs but this way I'd be free from having
> the obligation to figure it out.
> There is no way that I could throw set of results at the configuration
> table
> and say "you (DB in this case) figure it out which is UPDATE and which is
> INSERT", is there? Is deleting and then inserting new rows, without regard
> for the fact that for example only one row out of ten or twenty would have
> been updated if I went thru the trouble of identifying it, a reasonable
> solution?|||On Wed, 19 Apr 2006 08:23:02 -0700, Art wrote:

>Users will modify (add, delete, update) their configuration settings with the
>subset of all available settings. The more I think about about the effort
>which will have to go into figuring out whether the row is an update or
>insert, and then (if update) wheter the column value is valid the more
>I'm leaning towards query/delete/insert methodology. Some of the INSERTs,
>of course, could have been an UPDATEs but this way I'd be free from having
>the obligation to figure it out.
>There is no way that I could throw set of results at the configuration table
>and say "you (DB in this case) figure it out which is UPDATE and which is
>INSERT", is there? Is deleting and then inserting new rows, without regard
>for the fact that for example only one row out of ten or twenty would have
>been updated if I went thru the trouble of identifying it, a reasonable
>solution?
Hi Art,
In addition to Mike's reply, the DELETE/INSERT approach will also cause
problems if the table is referenced by any foreign keys.
Mike posted a code skeleton for single-row operation; for multi-row
operation, use this instead:
UPDATE t
SET Column1 = new.Column1,
Column2 = new.Column2
FROM YourTable AS t
INNER JOIN OtherTable AS new
ON new.KeyColumn1 = t.KeyColumn1
AND new.KeyColumn2 = t.KeyColumn2
--
INSERT INTO YourTable (KeyColumn1, KeyColumn2, Column1, Column2)
SELECT new.KeyColumn1, new.KeyColumn2, new.Column1, new.Column2
FROM OtherTable AS new
WHERE NOT EXISTS
(SELECT *
FROM YourTable AS t
WHERE new.KeyColumn1 = t.KeyColumn1
AND new.KeyColumn2 = t.KeyColumn2)
Add error-handling as required, of course! :-)
Hugo Kornelis, SQL Server MVP|||Well Celko and ilk may leap at my throat, but I have previously solved
this problem as follows, it may be potentially shaky, I guess, but it
reduces overhead to a minimum (I think):
UPDATE MyTable ... WHERE CriteriaOfRowsWhichMayNotExist
IF @@ROWCOUNT=0
INSERT MyTable...|||On 25 Apr 2006 07:17:47 -0700, ben.simkins@.gmail.com wrote:

>Well Celko and ilk may leap at my throat, but I have previously solved
>this problem as follows, it may be potentially shaky, I guess, but it
>reduces overhead to a minimum (I think):
>UPDATE MyTable ... WHERE CriteriaOfRowsWhichMayNotExist
>IF @@ROWCOUNT=0
> INSERT MyTable...
Hi Ben,
Good, but not perfect.
If the number of rows to insert or update will never be more than one,
you'll have no problems (but in that case, you have limited the
scalability of your solution).
In code that is designed to handle multi-row operations, this might
break in the following scenario:
- A total of 5 rows should be updated OR inserted.
- Only 3 of them already exist.
- The UPDATE will affect these 3 rows.
- @@ROWCOUNT = 3, not 0. Therefore the INSERT is skipped.
A safer way that handles both single-row and multi-row operations is
UPDATE MyTable ... WHERE CriteriaOfRowsWhichMayNotExist
INSERT MyTable ... WHERE NOT CriteriaOfRowsWhichMayNotExist
Hugo Kornelis, SQL Server MVP
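A concrete single-row version of the pattern above, with a hypothetical table and parameters:

```sql
-- Update the row if it exists...
UPDATE dbo.Config
SET    Value = @Value
WHERE  SettingKey = @Key

-- ...and insert it only when no matching row is there
-- (a SELECT without FROM is legal in T-SQL)
INSERT INTO dbo.Config (SettingKey, Value)
SELECT @Key, @Value
WHERE  NOT EXISTS (SELECT * FROM dbo.Config WHERE SettingKey = @Key)
```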

Is unicode necessary for a English and Spanish application/database?

We may add a spanish version of the application we are currently developing in the future. Do I need to use unicode datatypes in SQL2K or are plain varchar, etc. datatypes fine? Or can we convert in the future if necessary? Thanks.
Randy
"Randy" <anonymous@.discussions.microsoft.com> wrote in message
news:FD264AD2-770E-4E25-B38B-1F887B5F1111@.microsoft.com...
> We may add a spanish version of the application we are currently
developing in the future. Do I need to use unicode datatypes in SQL2K or
are plain varchar, etc. datatypes fine? Or can we convert in the future if
necessary? Thanks.
> Randy
If you find a code page that has all the characters you need, then you can
use varchar instead of nvarchar.
The most common collation and charset in the US is
SQL_Latin1_General_CP1_CI_AI
http://www.microsoft.com/globaldev/r.../sbcs/1252.htm
It has the whole spanish alphabet, but lacks the Euro character.
David
|||Actually that charset (cp1252) does have the Euro char (at position
0x80). Randy, if you use any 1252-based collation such as the one that
David referenced you can store English or Spanish data without using
Unicode data types. In fact, code page 1252 is capable of storing most
of the Western European languages like English, Spanish, Portuguese,
German, French, Italian, etc in non-Unicode columns.
You may wish to consider Unicode for further down the road, though... if
you're adding support for Spanish today, is it possible that you may find
a market in Japan or Israel tomorrow? It's easier to start using
Unicode from the start of an app's lifetime than it is to convert an
existing one. It is possible to convert down the road, although you may
end up needing to change not just the table data types, but also stored
proc and trigger code as well as application code.
HTH,
Bart
Bart Duncan
Microsoft SQL Server Support
Please reply to the newsgroup only - thanks.
This posting is provided "AS IS" with no warranties, and confers no
rights.
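A small sketch of both options from this thread; the table is hypothetical, and note that nvarchar literals need the N'...' prefix:

```sql
CREATE TABLE dbo.Products (
    ProductId int NOT NULL PRIMARY KEY,
    -- varchar with a 1252-based collation: fine for English/Spanish
    NameWest  varchar(100) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
    -- nvarchar: Unicode, safe if Japanese/Hebrew/etc. come later
    NameAny   nvarchar(100) NOT NULL
)

INSERT INTO dbo.Products (ProductId, NameWest, NameAny)
VALUES (1, 'Años', N'Años')
```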



Is UMS threads different from worker threads ?

I have read that there is only one UMS thread per processor. So for a 4-way
box, there are only 4 UMS threads. What are the default 255 worker threads
then?
So if I have 5 processes that want to run in parallel... does that mean only
1 process will be worked on while the remaining 4 would be queued to be
processed? Can someone shed some more light here?
Hassan,
I think what you call UMS threads are listener threads that put work in the
scheduler queue and wake up sleeping/waiting threads.
(Most of) the other threads are scheduled to work for the client that has a
connection to SQL Server; everything ranging from network I/O, disk I/O,
SQL processing, etc. is done by those worker threads. When a thread does
an I/O (network or disk) it will wait/sleep until the I/O event finishes.
The (worker) thread is then available for other work. When the I/O finishes,
the SQL process is rescheduled again.
--
Mario
www.sqlinternals.com
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:eJ5H0B$lDHA.2732@.TK2MSFTNGP11.phx.gbl...
> I have read that there is only one UMS thread per processor . So for a 4
way
> box, there are only 4 UMS threads . What are the default 255 worker
threads
> then ?
> So if i have 5 processes that want to run in parallel... does that mean
only
> 1 process will be worked upon while the remaining 4 would be queued to be
> processed ? Can someone bring some more light here ?
>

Is Truncating the Log committing data to the actual DB?

Newbie question running SQL 2k
My last entry in the .mdb file was January and everything else has been
sitting in the Transaction log files. Everything works fine. I'm just
wondering if Truncating the logs mean I'm going to commit the data from the
log files to the .mdb file. So if I truncate and look in the .mdb file,
will I see all my entries up to the most current period?
Thanks!
AFAIK you will not see the data in the datafile until it is committed. And SQL
Server will not truncate stuff that is not committed or saved (for example
with a log backup).
Truncating the log means that you truncate the unused or unnecessary data
from the logfile, data that is not necessary any more.
regards
Andreas
"Phillip Vong" wrote:

> Newbie question running SQL 2k
> My last entry in the .mdb file was January and everything else has been
> sitting in the Transaction log files. Everything works fine. I'm just
> wondering if Truncating the logs mean I'm going to commit the data from the
> log files to the .mdb file. So if I truncate and look in the .mdb file,
> will I see all my entries up to the most current period?
> Thanks!
>
>|||Phillip,
The data in the transaction log files is committed to the database files not
long after the transaction is written to the log. There are a number of
factors to consider, but if nobody has changed anything since January it all
should be in the database by now.
Truncating the log is throwing away the transaction records that could be
used to recover in case of a failure. Actually, you almost never should
truncate. If you do not want the logs, change to the SIMPLE recovery model.
And make sure that you backup the database regularly.
The behavior of transaction logs is discussed under the backup and restore
strategy portions of the Books Online, so you need to familiarize yourself
with those. It is not too complicated.
RLF
"Phillip Vong" <phillip_vong*at*yahoo*dot*com> wrote in message
news:eSU5ZElYHHA.3256@.TK2MSFTNGP04.phx.gbl...
> Newbie question running SQL 2k
> My last entry in the .mdb file was January and everything else has been
> sitting in the Transaction log files. Everything works fine. I'm just
> wondering if Truncating the logs mean I'm going to commit the data from
> the log files to the .mdb file. So if I truncate and look in the .mdb
> file, will I see all my entries up to the most current period?
> Thanks!
>|||Russell, thanks for the update.
Can you tell me why the data in my transaction log files has not committed to
the db files? You said "Shortly", but the last entry was 1/18/2007 in the
db file. Things are added to this db everyday by my user. Why are they all
staying in the transaction log file and not being committed. Can I force it
to commit some how? There are 1027 records in the log files and I want to
commit them to the db file. Will changing db to SIMPLE force it to commit
the data? I'm not going to lose what I have in the Log files am I?
"Russell Fields" <russellfields@.nomail.com> wrote in message
news:%23Gnin2lYHHA.992@.TK2MSFTNGP02.phx.gbl...
> Phillip,
> The data in the transaction log files is committed to the database files
> not long after the transaction is written to the log. There are a number
> of factors to consider, but if nobody has changed anything since January
> it all should be in the database by now.
> Truncating the log is throwing away the transaction records that could be
> used to recover in case of a failure. Actually, you almost never should
> truncate. If you do not want the logs, change to the SIMPLE recovery
> model. And make sure that you backup the database regularly.
> The behavior of transaction logs is discussed under the backup and restore
> strategy portions of the Books Online, so you need to familiarize yourself
> with those. It is not too complicated.
> RLF
> "Phillip Vong" <phillip_vong*at*yahoo*dot*com> wrote in message
> news:eSU5ZElYHHA.3256@.TK2MSFTNGP04.phx.gbl...
>|||"Phillip Vong" <phillip_vong*at*yahoo*dot*com> wrote in message
news:uWxxX%23mYHHA.3984@.TK2MSFTNGP02.phx.gbl...
> Russell, thanks for the update.
> Can you tell me why the data in my tranaction log files has not committed
> to the db files? You said "Shortly", but the last entry was 1/18/2007 in
> the db file. Things are added to this db everyday by my user. Why are
> they all staying in the transaction log file and not being committed. Can
> I force it to commit some how? There are 1027 records in the log files and
> I want to commit them to the db file. Will changing db to SIMPLE force it
> to commit the data? I'm not going to lose what I have in the Log files am
> I?
Umm, how are you determining that things are NOT being committed?
If you do a select against the inserted data and you can see it (and you're
not using read uncommitted), the data is definitely committed.
If you're basing your assumption on FILE dates, then two things:
1) The file date is only updated when the file is OPENED or its SIZE is
changed.
So it sounds like you're looking at the file date on the MDF and assuming
nothing is committed since it hasn't changed. This isn't really true.
2) If the size of the transaction log is GROWING, it most likely means
you're not doing transaction log backups. Which means your recovery plans
probably won't be good.

>
Greg Moore
SQL Server DBA Consulting
sql (at) greenms.com http://www.greenms.com
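To make the advice above concrete, here is a rough T-SQL sketch for SQL Server 2000 (the database name and backup path are hypothetical) of checking log usage and the recovery model, and backing up the log rather than truncating it:

```sql
-- See how full each database's transaction log actually is:
DBCC SQLPERF(LOGSPACE);

-- Check the current recovery model (returns FULL, SIMPLE or BULK_LOGGED):
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery');

-- Preferred over truncating: back up the log so the inactive portion can
-- be reused and the backup chain stays intact.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.bak';

-- Or, if point-in-time recovery is not needed, switch to SIMPLE so the
-- log is truncated automatically at checkpoints:
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
```

Note that none of this moves data into the MDF; committed changes are already hardened there by the checkpoint process regardless of what you do with the log.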


Are triggers multi-threaded?

Hi,
I need to call sp_OACreate inside an INSERT trigger to create a COM
component developed in VB6 (STA). My question: if many records are inserted
in a very short period of time, is it multi-threaded (sp_OACreate - MTA)
from the trigger's point of view? If so, I would have to use an MTA
component in the trigger.
Thanks!
John
John Lee wrote:
> Hi,
> I need to call sp_OACreate inside an INSERT trigger to create COM component
> that's developed in VB6 (STA), my question is if there are many records
> inserted at very short period of time, is it multi-threaded (sp_OACreate -
> MTA) from trigger point of view? If this is true, then I have to use the MTA
> component in trigger.
> Thanks!
> John
Creating a COM component in a trigger is almost certainly an extremely
bad idea. There is no obvious way that your external code can be made
part of a SQL Server transaction. How will you roll back the effect of
your VB code if a nested transaction is rolled back? If the external
code is non-transactional in nature then there is little sense in
putting it in a trigger, and doing so can only hurt performance and
reliability. Also, in my limited experience the sp_OA procs will not
scale and COM will often leak memory.
My suggestion is to invoke your code from outside SQL Server - from a job
or some other server-side process, for example. If you must do it in the
database then use a stored procedure rather than a trigger. In SQL
Server 2005 you also have the option of using a CLR proc.
To answer your original question: triggers are "multi-threaded" in the
sense that the same trigger can fire simultaneously for several
different connections. Each of those connections will attempt to create
another instance of your COM object.
If multiple rows are updated in a single statement then the trigger
will only fire ONCE. It will NOT fire once for each row. This is
another reason not to use a trigger to invoke external code. In
order to call your COM component for each row you would probably have to
use a cursor to cycle through each row in the INSERTED virtual table
(INSERTED is available only in triggers and contains the rows that were
inserted or updated). Cursors are best avoided in most circumstances
and especially so in triggers.
Hope this helps.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
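For illustration, a minimal set-based trigger along the lines described above (the table and column names are hypothetical) - it handles a multi-row INSERT in one statement, with no cursor and no COM call:

```sql
-- Hypothetical tables: Orders (source) and ChangeLog (audit).
CREATE TRIGGER trg_Orders_Insert ON Orders
AFTER INSERT
AS
BEGIN
    -- inserted can contain many rows for a single statement;
    -- one INSERT...SELECT logs them all at once.
    INSERT INTO ChangeLog (TableName, KeyValue, ChangeType, ChangedAt)
    SELECT 'Orders', i.OrderID, 'I', GETDATE()
    FROM inserted AS i;
END
```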
|||"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1159123193.138245.268530@.b28g2000cwb.googlegroups.com...
> John Lee wrote:
> Creating a COM component in a trigger is almost certainly an extremely
> bad idea. There is no obvious way that your external code can be made
> part of a SQL Server transaction. How will you roll-back the effect of
> your VB code if a nested transaction is rolled back? If the external
> code is non-transactional in nature then there is little sense in
> putting it in a trigger and doing so can only hurt performance and
> reliability. Also, in my limited experience the sp_OA procs will not
> scale and COM will often leak memory.
> Invoke your code from outside SQL Server is my suggestion - from a job
> or some other server-side process for example. If you must do it in the
> database then use a stored procedure rather than a trigger. In SQL
> Server 2005 you also have the option of using a CLR proc.
> To answer your original question. Triggers are "multi-threaded" in the
> sense that the same trigger can fire simultaneously for several
> different connections. Each of those connections will attempt to create
> another instance of your COM object.
> If multiple rows are updated in a single statement then the trigger
> will only fire ONCE. It will NOT fire once for each row. This is
> another reason why not to use a trigger to invoke external code. In
> order to call your COM component for each row you will probably have to
> use a cursor to cycle through each row in the INSERTED virtual table
> (INSERTED is available only in triggers and contains the rows that were
> inserted or updated). Cursors are best avoided in most circumstances
> and especially so in triggers.
>
Ditto all of that. Just adding: in COM programming, an STA component can be
used by a multi-threaded client. The restriction is that each object
instance can be accessed only by the thread that created it. In a trigger
or procedure the COM component is created (SP_OACreate), used (SP_OAMethod)
and destroyed (SP_OADestroy) all in the same thread, so a plain VB6 STA COM
component should work fine.
In addition to David's reasons why this is a poor idea, you should generally
avoid doing anything expensive in a trigger. That includes any kind of
network communication or file IO, regardless of whether the code is VB6 COM
or TSQL or CLR code in 2005.
David
|||Hi David
STAs can certainly be launched from MTAs but the bigger problem with VB6 COM
components is that they use TLS which isn't designed to work with SQL
Server's UMS. Under load, this can lead to various possible undesirable
conditions such as inconsistent results, crashes & hung UMS schedulers (at
least in SQL2K).
If the original poster really HAS to create a component under a trigger, my
advice is to use sp_OACreate only if the trigger activity is low (as
developing in VB does have its advantages in terms of productivity). If the
trigger activity is high (meaning it has a high probability of being heavily
pre-empted & re-scheduled), consider either a different architecture
(async, MQ, etc.) or write an extended stored proc, perhaps in Delphi or C.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:uww$mBB4GHA.3840@.TK2MSFTNGP06.phx.gbl...
>
> "David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
> news:1159123193.138245.268530@.b28g2000cwb.googlegroups.com...
> Ditto all of that. Just adding in COM programming an STA component can be
> use by a multi-threaded client. The restriction is that each object
> instance can be accessed only by the thread that created it. In a trigger
> or procedure the COM component is created (SP_OACreate), used
> (SP_OAMethod) and destroyed (SP_OADestroy) all in the same thread. So a
> plain VB6 STA COM Component should work fine.
> In addition to David's reasons why this is a poor idea, you should
> generally avoid doing anything expensive in a trigger. That includes any
> kind of network communication or file IO, regardless of whether the code
> is VB6 COM or TSQL or CLR code in 2005.
> David
|||Thanks very much for your reply - David P, David B and Greg!!!
I totally agree with you all about the problems with triggers doing
heavy-lifting work ... BUT here is the business problem I have to solve:
We need to sync some data from SQL Server 2000 to CRM in near realtime (< 5
minutes), so we added triggers to around 10 tables; inside those triggers,
we only write 4 pieces of info to a ChangeLog table.
1. Option 1 - write a service to poll the ChangeLog table every 2 minutes
2. Option 2 - create a trigger on this table that writes out a text file to
notify the service to do the sync work
I also developed an extended stored proc to write the file out, and it seems
twice as fast as sp_OACreate.
sp_OACreate has two issues for me:
1. Late binding - because we are using a progID to create the object
2. A possible MTA-calling-into-STA issue - if the trigger side is
multi-threaded, then calling into an STA will serialize the operation.
Thanks!
John
John
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:eAzr0RD4GHA.600@.TK2MSFTNGP05.phx.gbl...
> Hi David
> STAs can certainly be launched from MTAs but the bigger problem with VB6
> COM components is that they use TLS which isn't designed to work with SQL
> Server's UMS. Under load, this can lead to various possible undesirable
> conditions such as inconsistent results, crashes & hung UMS schedulers (at
> least in SQL2K).
> If the original poster really HAS to create a component under a trigger,
> my advice is to use sp_OACreate only if the trigger activity is low (as
> developing in VB does have its advantages in terms of productivity). If
> the trigger activity is high (meaning it has a high probability of being
> heavily pre-empted & re-scheduled), consider either a different
> architecture (asynch, MQ etc) or write an extended stored proc, perhaps
> with delphi or c.
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:uww$mBB4GHA.3840@.TK2MSFTNGP06.phx.gbl...
>
|||A much safer and more efficient way to do this would be to have your trigger
write to staging tables and have an external process read those tables once
a minute or so. In SQL Server 2005 you could use the Service Broker to do
this in a scalable and asynchronous way.
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"John Lee" <pursca@.newsgroups.nospam> wrote in message
news:O$27%237M4GHA.2464@.TK2MSFTNGP06.phx.gbl...
> Thanks very much for your reply - David P, David B and Greg!!!
> I totally agree with you all regarding the bad things using triggers doing
> heavy lifting job ... BUT here is the business problem I have to solve:
> We need to sync some data from SQL server 2000 to CRM in near realtime (<
> 5 minutes)
> so we added triggers to around 10 tables, inside those triggers, we only
> write 4 pieces of info to ChangeLog table
> 1. option 1 - write a service to pull ChangeLog table every 2 minutes
> 2. option 2 - create a trigger on this table to write out a text file to
> notify the service to do the sync work
> I also developed an extended stored proc to write file out and this seems
> twice as fast as the sp_OACreate.
> sp_OACreate has two issues to me:
> 1. late binding - because we are using progID to create the object
> 2. possible MTA calling into STA issue - if triggers side are multiple
> threaded, then calling into STA will make it as serialized operation.
> Thanks!
> John
> "Greg Linwood" <g_linwood@.hotmail.com> wrote in message
> news:eAzr0RD4GHA.600@.TK2MSFTNGP05.phx.gbl...
>
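As a sketch of the staging-table approach suggested above (all table and column names are hypothetical), the external service could poll with something like:

```sql
-- Hypothetical ChangeLog schema: ChangeId (identity), TableName,
-- KeyValue, ChangeType, Processed (bit).
DECLARE @maxId int;
SELECT @maxId = MAX(ChangeId) FROM ChangeLog WHERE Processed = 0;

-- Fetch the batch to sync, capped at @maxId so rows inserted by
-- triggers mid-run are left for the next poll.
SELECT ChangeId, TableName, KeyValue, ChangeType
FROM ChangeLog
WHERE Processed = 0 AND ChangeId <= @maxId;

-- ...after pushing the batch to CRM, mark it done:
UPDATE ChangeLog
SET Processed = 1
WHERE Processed = 0 AND ChangeId <= @maxId;
```

This keeps the triggers cheap (a single INSERT each) and moves all external communication out of the transaction path.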

Is triggers multi-threaded?

Hi,
I need to call sp_OACreate inside an INSERT trigger to create COM component
that's developed in VB6 (STA), my question is if there are many records
inserted at very short period of time, is it multi-threaded (sp_OACreate -
MTA) from trigger point of view? If this is true, then I have to use the MTA
component in trigger.
Thanks!
JohnJohn Lee wrote:
> Hi,
> I need to call sp_OACreate inside an INSERT trigger to create COM component
> that's developed in VB6 (STA), my question is if there are many records
> inserted at very short period of time, is it multi-threaded (sp_OACreate -
> MTA) from trigger point of view? If this is true, then I have to use the MTA
> component in trigger.
> Thanks!
> John
Creating a COM component in a trigger is almost certainly an extremely
bad idea. There is no obvious way that your external code can be made
part of a SQL Server transaction. How will you roll-back the effect of
your VB code if a nested transaction is rolled back? If the external
code is non-transactional in nature then there is little sense in
putting it in a trigger and doing so can only hurt performance and
reliability. Also, in my limited experience the sp_OA procs will not
scale and COM will often leak memory.
Invoke your code from outside SQL Server is my suggestion - from a job
or some other server-side process for example. If you must do it in the
database then use a stored procedure rather than a trigger. In SQL
Server 2005 you also have the option of using a CLR proc.
To answer your original question. Triggers are "multi-threaded" in the
sense that the same trigger can fire simultaneously for several
different connections. Each of those connections will attempt to create
another instance of your COM object.
If multiple rows are updated in a single statement then the trigger
will only fire ONCE. It will NOT fire once for each row. This is
another reason why not to use a trigger to invoke external code. In
order to call your COM component for each row you will probably have to
use a cursor to cycle through each row in the INSERTED virtual table
(INSERTED is available only in triggers and contains the rows that were
inserted or updated). Cursors are best avoided in most circumstances
and especially so in triggers.
Hope this helps.
--
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1159123193.138245.268530@.b28g2000cwb.googlegroups.com...
> John Lee wrote:
>> Hi,
>> I need to call sp_OACreate inside an INSERT trigger to create COM
>> component
>> that's developed in VB6 (STA), my question is if there are many records
>> inserted at very short period of time, is it multi-threaded
>> (sp_OACreate -
>> MTA) from trigger point of view? If this is true, then I have to use the
>> MTA
>> component in trigger.
>> Thanks!
>> John
> Creating a COM component in a trigger is almost certainly an extremely
> bad idea. There is no obvious way that your external code can be made
> part of a SQL Server transaction. How will you roll-back the effect of
> your VB code if a nested transaction is rolled back? If the external
> code is non-transactional in nature then there is little sense in
> putting it in a trigger and doing so can only hurt performance and
> reliability. Also, in my limited experience the sp_OA procs will not
> scale and COM will often leak memory.
> Invoke your code from outside SQL Server is my suggestion - from a job
> or some other server-side process for example. If you must do it in the
> database then use a stored procedure rather than a trigger. In SQL
> Server 2005 you also have the option of using a CLR proc.
> To answer your original question. Triggers are "multi-threaded" in the
> sense that the same trigger can fire simultaneously for several
> different connections. Each of those connections will attempt to create
> another instance of your COM object.
> If multiple rows are updated in a single statement then the trigger
> will only fire ONCE. It will NOT fire once for each row. This is
> another reason why not to use a trigger to invoke external code. In
> order to call your COM component for each row you will probably have to
> use a cursor to cycle through each row in the INSERTED virtual table
> (INSERTED is available only in triggers and contains the rows that were
> inserted or updated). Cursors are best avoided in most circumstances
> and especially so in triggers.
>
Ditto all of that. Just adding in COM programming an STA component can be
use by a multi-threaded client. The restriction is that each object
instance can be accessed only by the thread that created it. In a trigger
or procedure the COM component is created (SP_OACreate), used (SP_OAMethod)
and destroyed (SP_OADestroy) all in the same thread. So a plain VB6 STA COM
Component should work fine.
In addition to David's reasons why this is a poor idea, you should generally
avoid doing anything expensive in a trigger. That includes any kind of
network communication or file IO, regardless of whether the code is VB6 COM
or TSQL or CLR code in 2005.
David|||Hi David
STAs can certainly be launched from MTAs but the bigger problem with VB6 COM
components is that they use TLS which isn't designed to work with SQL
Server's UMS. Under load, this can lead to various possible undesirable
conditions such as inconsistent results, crashes & hung UMS schedulers (at
least in SQL2K).
If the original poster really HAS to create a component under a trigger, my
advice is to use sp_OACreate only if the trigger activity is low (as
developing in VB does have its advantages in terms of productivity). If the
trigger activity is high (meaning it has a high probability of being heavily
pre-empted & re-scheduled), consider either a different architecture
(asynch, MQ etc) or write an extended stored proc, perhaps with delphi or c.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:uww$mBB4GHA.3840@.TK2MSFTNGP06.phx.gbl...
>
> "David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
> news:1159123193.138245.268530@.b28g2000cwb.googlegroups.com...
>> John Lee wrote:
>> Hi,
>> I need to call sp_OACreate inside an INSERT trigger to create COM
>> component
>> that's developed in VB6 (STA), my question is if there are many records
>> inserted at very short period of time, is it multi-threaded
>> (sp_OACreate -
>> MTA) from trigger point of view? If this is true, then I have to use the
>> MTA
>> component in trigger.
>> Thanks!
>> John
>> Creating a COM component in a trigger is almost certainly an extremely
>> bad idea. There is no obvious way that your external code can be made
>> part of a SQL Server transaction. How will you roll-back the effect of
>> your VB code if a nested transaction is rolled back? If the external
>> code is non-transactional in nature then there is little sense in
>> putting it in a trigger and doing so can only hurt performance and
>> reliability. Also, in my limited experience the sp_OA procs will not
>> scale and COM will often leak memory.
>> Invoke your code from outside SQL Server is my suggestion - from a job
>> or some other server-side process for example. If you must do it in the
>> database then use a stored procedure rather than a trigger. In SQL
>> Server 2005 you also have the option of using a CLR proc.
>> To answer your original question. Triggers are "multi-threaded" in the
>> sense that the same trigger can fire simultaneously for several
>> different connections. Each of those connections will attempt to create
>> another instance of your COM object.
>> If multiple rows are updated in a single statement then the trigger
>> will only fire ONCE. It will NOT fire once for each row. This is
>> another reason why not to use a trigger to invoke external code. In
>> order to call your COM component for each row you will probably have to
>> use a cursor to cycle through each row in the INSERTED virtual table
>> (INSERTED is available only in triggers and contains the rows that were
>> inserted or updated). Cursors are best avoided in most circumstances
>> and especially so in triggers.
> Ditto all of that. Just adding in COM programming an STA component can be
> use by a multi-threaded client. The restriction is that each object
> instance can be accessed only by the thread that created it. In a trigger
> or procedure the COM component is created (SP_OACreate), used
> (SP_OAMethod) and destroyed (SP_OADestroy) all in the same thread. So a
> plain VB6 STA COM Component should work fine.
> In addition to David's reasons why this is a poor idea, you should
> generally avoid doing anything expensive in a trigger. That includes any
> kind of network communication or file IO, regardless of whether the code
> is VB6 COM or TSQL or CLR code in 2005.
> David|||Thanks very much for your reply - David P, David B and Greg!!!
I totally agree with you all about the problems with triggers doing
heavy lifting ... BUT here is the business problem I have to solve:
We need to sync some data from SQL Server 2000 to CRM in near real time
(< 5 minutes), so we added triggers to around 10 tables. Inside those
triggers we only write 4 pieces of info to a ChangeLog table.
1. option 1 - write a service to poll the ChangeLog table every 2 minutes
2. option 2 - create a trigger on this table that writes out a text file
to notify the service to do the sync work
I also developed an extended stored proc to write the file out, and it
seems about twice as fast as sp_OACreate.
sp_OACreate has two issues for me:
1. late binding - because we are using a ProgID to create the object
2. a possible MTA-calling-into-STA issue - if the trigger side is
multi-threaded, then calls into the STA will be serialized.
Thanks!
John
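The lightweight ChangeLog trigger John describes might look something like this. The table layout, column names and the source table are assumptions for illustration, not the poster's actual schema:

```sql
-- Hypothetical ChangeLog table: the trigger only records what changed,
-- and a separate polling service does the actual sync work.
CREATE TABLE dbo.ChangeLog (
    LogID     int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname NOT NULL,
    KeyValue  varchar(100) NOT NULL,
    Operation char(1) NOT NULL,              -- I/U/D
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
)
GO
CREATE TRIGGER trg_Customers_Log ON dbo.Customers
AFTER INSERT, UPDATE
AS
BEGIN
    -- Set-based: one INSERT covers every affected row, no cursor
    -- and no COM call inside the transaction.
    INSERT dbo.ChangeLog (TableName, KeyValue, Operation)
    SELECT 'Customers', CAST(CustomerID AS varchar(100)), 'U'
    FROM INSERTED
END
```

Because the trigger only does a set-based insert into a local table, it stays cheap and fully transactional; all the expensive work moves to the polling service.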
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:eAzr0RD4GHA.600@.TK2MSFTNGP05.phx.gbl...
> Hi David
> STAs can certainly be launched from MTAs but the bigger problem with VB6
> COM components is that they use TLS which isn't designed to work with SQL
> Server's UMS. Under load, this can lead to various possible undesirable
> conditions such as inconsistent results, crashes & hung UMS schedulers (at
> least in SQL2K).
> If the original poster really HAS to create a component under a trigger,
> my advice is to use sp_OACreate only if the trigger activity is low (as
> developing in VB does have its advantages in terms of productivity). If
> the trigger activity is high (meaning it has a high probability of being
> heavily pre-empted & re-scheduled), consider either a different
> architecture (async, MQ etc) or write an extended stored proc, perhaps
> with Delphi or C.
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:uww$mBB4GHA.3840@.TK2MSFTNGP06.phx.gbl...
>>
>> "David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
>> news:1159123193.138245.268530@.b28g2000cwb.googlegroups.com...
>> John Lee wrote:
>> Hi,
>> I need to call sp_OACreate inside an INSERT trigger to create a COM
>> component that's developed in VB6 (STA). My question: if many records
>> are inserted in a very short period of time, is it multi-threaded
>> (sp_OACreate - MTA) from the trigger's point of view? If so, then I
>> have to use the MTA component in the trigger.
>> Thanks!
>> John
>> Creating a COM component in a trigger is almost certainly an extremely
>> bad idea. There is no obvious way that your external code can be made
>> part of a SQL Server transaction. How will you roll back the effect of
>> your VB code if a nested transaction is rolled back? If the external
>> code is non-transactional in nature then there is little sense in
>> putting it in a trigger, and doing so can only hurt performance and
>> reliability. Also, in my limited experience the sp_OA procs will not
>> scale and COM will often leak memory.
>> My suggestion is to invoke your code from outside SQL Server - from a
>> job or some other server-side process, for example. If you must do it
>> in the database then use a stored procedure rather than a trigger. In
>> SQL Server 2005 you also have the option of using a CLR proc.
>> To answer your original question. Triggers are "multi-threaded" in the
>> sense that the same trigger can fire simultaneously for several
>> different connections. Each of those connections will attempt to create
>> another instance of your COM object.
>> If multiple rows are updated in a single statement then the trigger
>> will only fire ONCE. It will NOT fire once for each row. This is
>> another reason not to use a trigger to invoke external code. In
>> order to call your COM component for each row you will probably have to
>> use a cursor to cycle through each row in the INSERTED virtual table
>> (INSERTED is available only in triggers and contains the rows that were
>> inserted or updated). Cursors are best avoided in most circumstances
>> and especially so in triggers.
>>
>> Ditto all of that. Just adding that in COM programming an STA component
>> can be used by a multi-threaded client. The restriction is that each
>> object instance can be accessed only by the thread that created it. In
>> a trigger or procedure the COM component is created (sp_OACreate), used
>> (sp_OAMethod) and destroyed (sp_OADestroy) all on the same thread, so a
>> plain VB6 STA COM component should work fine.
>> In addition to David's reasons why this is a poor idea, you should
>> generally avoid doing anything expensive in a trigger. That includes any
>> kind of network communication or file IO, regardless of whether the code
>> is VB6 COM or TSQL or CLR code in 2005.
>> David
|||A much safer and more efficient way to do this would be to have your
trigger write to staging tables and have an external process read those
tables once a minute or so. In SQL Server 2005 you could use the Service
Broker to do this in a scalable and asynchronous way.
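The polling side of that staging-table pattern could be sketched like this. The batch boundary via MAX(LogID), the Processed flag and the object names are all illustrative assumptions:

```sql
-- Hypothetical polling batch, run by an external service every couple
-- of minutes: snapshot the current backlog, sync it, then mark it done.
-- Assumes a Processed bit column on the ChangeLog staging table.
DECLARE @maxID int

BEGIN TRANSACTION

SELECT @maxID = MAX(LogID) FROM dbo.ChangeLog WHERE Processed = 0

SELECT LogID, TableName, KeyValue, Operation
FROM dbo.ChangeLog
WHERE Processed = 0 AND LogID <= @maxID

-- ... the service pushes each returned row to CRM here ...

UPDATE dbo.ChangeLog
SET Processed = 1
WHERE Processed = 0 AND LogID <= @maxID

COMMIT TRANSACTION
```

Bounding the batch by @maxID means rows inserted while the sync is running are left for the next poll rather than being marked processed without being synced.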
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"John Lee" <pursca@.newsgroups.nospam> wrote in message
news:O$27%237M4GHA.2464@.TK2MSFTNGP06.phx.gbl...