Friday, March 30, 2012

IS this true?

We've been experiencing some performance degradation recently. We are a
small company partnered with a MUCH larger one. Someone from the other
company said they had experienced some of what we are seeing and recommended
that stored procedures which only return one row of data should use output
parameters instead of "selecting" the data as a resultset.
Is this really that much faster?

|||

"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:%23oTs4XuaGHA.3720@.TK2MSFTNGP03.phx.gbl...
> [snip]
It is a lot faster. However, unless you're running several hundred of these
a second, you probably won't notice any difference. So like all performance
analysis, you need to find which resource is limited in your system and
identify the biggest consumers of that resource.
David

|||

Thanks for the reply... yes, during our busiest hours we are definitely
running hundreds of these a second, if not thousands. I know this is
not the main performance bottleneck in our system at this point, but if
it really is comparatively that big a difference then it would probably be
worth it in our case.
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:%23JEFYoyaGHA.1196@.TK2MSFTNGP03.phx.gbl...
> "Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
> news:%23oTs4XuaGHA.3720@.TK2MSFTNGP03.phx.gbl...
>> We've been experiencing some performance degradation recently. We are a
>> small company partnered with a MUCH larger one. Someone from the other
>> company said they had experienced some of what we are seeing and
>> recommended that stored procedures which only return one row of data
>> should use output parameters instead of "selecting" the data as a
>> resultset.
>> Is this really that much faster?
> It is a lot faster, however. Unless you're running several hundred of
> these a second, you probably won't notice any difference. So like all
> performance analysys, you need to find what resource is limited in your
> system and identify the biggest consumers of that resource.
> David
|||

I agree with David in that it is unlikely that making that change will
dramatically speed up your app, as there are usually other things that can be
done first. Make sure you owner-qualify all objects, especially the stored
procedure calls. Ensure they all have SET NOCOUNT ON at the beginning, and
make sure they are called as an RPC and not as a batch.
--
Andrew J. Kelly SQL MVP
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:OVOtD3zaGHA.4796@.TK2MSFTNGP04.phx.gbl...
> [snip]
|||

Not sure what you mean by "an RPC and not a batch"? Yes, we do
exhaustively "FULLY" qualify all objects as db.owner.object. I'm also
going through all the old webservices and making sure EVERYTHING is using
parameterized queries and not just stringing together exec statements and
parameters.
I think there are a few of the latter remaining but I'm stomping them
out....
I just ensured SET NOCOUNT ON is indeed in every sproc in the system. That
made a difference in tonight's benchmarks. Also setting the isolation level
appropriately where needed. I think we can get further though.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ucWcHE9aGHA.4612@.TK2MSFTNGP03.phx.gbl...
> [snip]
|||

Also wondered... if there are stored procs being called from a webservice
where the call is just strung together, i.e.

string strsql = " exec mydbase.dbo.sproc " + parm1.ToString() + ',' +
parm2.ToString();

versus using a parameterized version, is it true that qualifying the object
as above really doesn't help? I'm told that examples like the one above won't
be cached... that it won't cache unless it is parameterized...
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ucWcHE9aGHA.4612@.TK2MSFTNGP03.phx.gbl...
> [snip]
>|||"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:uiul6sMbGHA.404@.TK2MSFTNGP04.phx.gbl...
> [snip]
In that case you would still reuse the plan for the proc; it's just the
"exec ..." batch that needs to be parsed, and it has a trivial plan.
But none of this is likely to help. This is just general best-practice
stuff. Running around your application fixing up little stuff like this is
absolutely the wrong way to go about analyzing and improving the performance
of an application.
You must discover what your actual problems are and address those.
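That said, the parameterized form of your call would look roughly like this -
a sketch based on your snippet, with the parameter names assumed and the
surrounding method and error handling omitted:

// Issue the call as a parameterized RPC instead of a concatenated batch.
// Assumes using System.Data; using System.Data.SqlClient; and that
// connectionString, parm1 and parm2 exist in the calling code.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("mydbase.dbo.sproc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;   // sent as an RPC call
    cmd.Parameters.AddWithValue("@parm1", parm1);
    cmd.Parameters.AddWithValue("@parm2", parm2);
    conn.Open();
    cmd.ExecuteNonQuery();   // or ExecuteReader() if the proc returns rows
}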
David

|||

I have to agree and disagree with David on this <g>. First, I agree that you
should determine where the bottlenecks really are and address them, versus
guessing or making random changes that may help. You can start by gathering
perfmon and trace data and seeing what is out of whack. I also agree that
sending a batch (your string from the webservice) will result in two plans:
one for the batch and one for the execution of the sp. The plan for the sp
will get reused. But in my experience the batch itself does not always
result in a trivial plan that is not cached. If it does get cached
but not reused, you can have a lot of performance issues associated with it
when it is called a lot. You can end up with tens or hundreds of thousands of
very similar plans that hash to the same bucket in the cache. This can result
in an increased duration for each new call as it searches for the correct
plan. You can see this easily if you select from syscacheobjects and take a
look at the sql and usecounts columns. Lack of owner qualification can also
be a big hit on performance when you call an sp hundreds or thousands of
times per second, due to the serialization that occurs in the cache search
process. If yours are all owner-qualified you should be all set in that
department. But one thing to note: you do not need to DB-qualify objects
that are in the same db you are executing from. As a matter of fact, doing so
is actually a tiny hit in performance (most of the time not enough to detect
or worry about). The only time you need to specify the db is when the object
is actually in another db.
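For example, something along these lines shows the pattern (a sketch; on
SQL 2000 the view is master.dbo.syscacheobjects):

-- Lots of near-identical ad hoc plans with usecounts of 1 is the symptom
-- described above; a well-reused plan shows a high usecounts value instead.
SELECT cacheobjtype, objtype, usecounts, sql
FROM sys.syscacheobjects
WHERE objtype = 'Adhoc'
ORDER BY usecounts DESC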
--
Andrew J. Kelly SQL MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:ucMpteSbGHA.4116@.TK2MSFTNGP05.phx.gbl...
> "Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
> news:uiul6sMbGHA.404@.TK2MSFTNGP04.phx.gbl...
>> Also wondered...if there are stored procs being called from a webservice
>> where it is just strung together i.e.
>> string strsql = " exec mydbase.dbo.sproc " + parm1.ToString() + ',' +
>> parm2.ToString()
>> versus using a parameterized version, is it true that qualifying the
>> object above really doesn't help? I'm told that examples like the one
>> above won't be cached...that it won't cache unless it is parameterized...
>>
> In that case you would still reuse the plan for the proc, it's just the
> "exec ..." batch that needs to be parsed, and it has a trivial plan.
> But none of this is likely to help. This is just general best practice
> stuff. Running around your application fixing up little stuff like this
> is absolutely the wrong way to go about analyzing and improving the
> performance of an application.
> You must discover what your acutal problems are and address those.
> David
|||

We were closely following BOL when we did the FULL qualification... I quote
from the SQL 2005 BOL:
SQL Server 2005 has an efficient algorithm to find any existing execution
plans for any specific SQL statement. In most systems, the minimal resources
that are used by this scan are less than the resources that are saved by
being able to reuse existing plans instead of compiling every SQL statement.
The algorithms to match new SQL statements to existing, unused execution
plans in the cache require that all object references be fully qualified.
For example, the first of these SELECT statements is not matched with an
existing plan, and the second is matched:
SELECT * FROM Person.Contact
SELECT * FROM AdventureWorks.Person.Contact
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23741ivYbGHA.4032@.TK2MSFTNGP02.phx.gbl...
> [snip]
|||

Also, there are a lot of cross-db queries going on here. This makes it much
simpler, as all webservices/apps etc. can use the same connection string,
since the initial catalog makes no difference.
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:Oa1q8vebGHA.1208@.TK2MSFTNGP02.phx.gbl...
> [snip]
|||

I think this is a misprint but I will double check. I believe it should
have been:
SELECT * FROM Contact
SELECT * FROM Person.Contact
Andrew J. Kelly SQL MVP
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:Oa1q8vebGHA.1208@.TK2MSFTNGP02.phx.gbl...
> [snip]
|||

That's fine, it is not that big of a deal either way.
--
Andrew J. Kelly SQL MVP
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:uMo$N5ebGHA.3364@.TK2MSFTNGP05.phx.gbl...
> [snip]
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%23OOu3flbGHA.3364@.TK2MSFTNGP05.phx.gbl...
> [snip]
Queries with unqualified object names will be cached and reused. It's just
that the scope of the reuse is limited to the connections with the same
user. Since most applications connect with a small number of users (usually
only one), this is no big deal.
Run this
-- Set up a test table and clear the plan cache (test server only).
if object_id('dbo.T') is not null drop table dbo.T
create table T(id int primary key, v varchar(50))
dbcc freeproccache
select user_id()
go
-- The same query twice: first unqualified, then schema-qualified.
select * from T
go
select * from dbo.T
go
-- Show each cached plan and the user_id attribute it is tied to.
select a.value PlanUser, sql_text.*
FROM sys.dm_exec_query_stats
cross apply sys.dm_exec_plan_attributes (plan_handle) a
cross apply sys.dm_exec_sql_text(plan_handle) as sql_text
where a.attribute = 'user_id'
David

|||

Thanks for the clarification, guys....
Andrew... while, as you said, it might not make that much difference if we DO
qualify to the db level, I don't want to be wandering around teaching that
this is the "proper" way to do something if indeed it isn't. So, thanks
again!
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:OZnd5BqbGHA.3352@.TK2MSFTNGP03.phx.gbl...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23OOu3flbGHA.3364@.TK2MSFTNGP05.phx.gbl...
>>I think this is a misprint but I will double check. I believe it should
>>have been:
>> SELECT * FROM Contact
>> SELECT * FROM Person.Contact
>>
> Queries with unqualified object names will be cached and reused. It's
> just that the scope of the reuse is limited to the connections with the
> same user. Since most applications connect with a small number of users
> (usually only one), this is no big deal.
> Run this
> drop table t
> create table T(id int primary key, v varchar(50))
> dbcc freeproccache
>
> select user_id()
> go
> select * from T
> go
> select * from dbo.T
> go
>
> select a.value PlanUser, sql_text.*
> FROM sys.dm_exec_query_stats
> cross apply sys.dm_exec_plan_attributes (plan_handle) a
> cross apply sys.dm_exec_sql_text(plan_handle) as sql_text
> where a.attribute = 'user_id'
>
> This puts two different plans into the cache. One that's good for all
> users, and one that's only good for one user.
>
> David
|||

David,
Yes, that is true, but I don't understand why BOL was stating you needed to
specify the DB name?
--
Andrew J. Kelly SQL MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:OZnd5BqbGHA.3352@.TK2MSFTNGP03.phx.gbl...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:%23OOu3flbGHA.3364@.TK2MSFTNGP05.phx.gbl...
>>I think this is a misprint but I will double check. I believe it should
>>have been:
>> SELECT * FROM Contact
>> SELECT * FROM Person.Contact
>>
> Queries with unqualified object names will be cached and reused. It's
> just that the scope of the reuse is limited to the connections with the
> same user. Since most applications connect with a small number of users
> (usually only one), this is no big deal.
> Run this
> drop table t
> create table T(id int primary key, v varchar(50))
> dbcc freeproccache
>
> select user_id()
> go
> select * from T
> go
> select * from dbo.T
> go
>
> select a.value PlanUser, sql_text.*
> FROM sys.dm_exec_query_stats
> cross apply sys.dm_exec_plan_attributes (plan_handle) a
> cross apply sys.dm_exec_sql_text(plan_handle) as sql_text
> where a.attribute = 'user_id'
>
> This puts two different plans into the cache. One that's good for all
> users, and one that's only good for one user.
>
> David
|||

My recommendation (and most others that I know of) has always been to only
fully qualify (db name) when actually going across dbs, but to always
owner-qualify. One good example of how fully qualifying all objects can byte
you is when you have multiple dbs on the same server with the same sp. Now you
have to maintain the code separately for each db even though the sp is
exactly the same.
--
Andrew J. Kelly SQL MVP
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:u%23T7VLrbGHA.4716@.TK2MSFTNGP03.phx.gbl...
> [snip]
|||

On Wed, 3 May 2006 22:50:12 -0400, "Andrew J. Kelly"
<sqlmvpnooospam@.shadhawk.com> wrote:
> [snip]
I will NOT allow a stored procedure to contain an explicit reference
to the database that it is part of. Back up that database and restore
it under another name, and you are in trouble. Restore it to another
database on the same server - say, to have a copy of production for
testing or development - and you are SCREWED. I was burned years ago
by such references in just such a situation, and that is one mistake I
learned from.
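A sketch of the failure mode, with made-up names:

-- Created in ProdDB with a hard-coded reference to its own database.
CREATE PROCEDURE dbo.GetOrders
AS
    SET NOCOUNT ON
    SELECT OrderID, OrderDate FROM ProdDB.dbo.Orders
GO
-- Restore ProdDB on the same server as TestDB, and TestDB's copy of this
-- proc still quietly reads ProdDB's data.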
Roy Harvey
Beacon Falls, CT

|||

"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:e%23F4xTybGHA.4604@.TK2MSFTNGP02.phx.gbl...
> David,
> Yes that is true, but I don't understand why BOL was stating you needed to
> specify the DB name?
>
Yes, that must be a doc bug.
David|||"Roy Harvey" <roy_harvey@.snet.net> wrote in message
news:lkri52h0a4ffh4sa9l3rrcn7nk4pgu8p1h@.4ax.com...
> [snip]
I absolutely agree, but I take the sentiment further: I don't like to
schema-qualify objects in the same schema.
David

|||

I totally disagree with that. Not qualifying the schema can be a definite
performance killer.
Andrew J. Kelly SQL MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:%23tA2AC4bGHA.3956@.TK2MSFTNGP04.phx.gbl...
> "Roy Harvey" <roy_harvey@.snet.net> wrote in message
> news:lkri52h0a4ffh4sa9l3rrcn7nk4pgu8p1h@.4ax.com...
>> On Wed, 3 May 2006 22:50:12 -0400, "Andrew J. Kelly"
>> <sqlmvpnooospam@.shadhawk.com> wrote:
>>My recommendation (and most others that I know of) has always been to
>>only
>>fully qualify (db name) when actually going across dbs. But to always
>>owner
>>qualify. One good reason of when fully qualifying all objects can byte
>>you
>>is when you have multiple db's on the same server with the same sp. Now
>>you
>>have to maintain the code separately for each db even though the sp is
>>exactly the same.
>> I will NOT allow a stored procedure to contain an explicit reference
>> to the database that it is part of. Backup that database, and restore
>> it under another name, and you are in trouble. Restore it to another
>> database on the same server - say to have a copy of production for
>> testing or development - and you are SCREWED. I was burned years ago
>> by such references in such a situation, and that is one mistake I
>> learned from.
>
> I absolutely agree, but I take the sentiment further: I don't like to
> schema-qualify objects in the same schema.
> David
|||

Wow, it is really hard sometimes when we see all of you, who definitely have
an enormous amount of experience with the product, still disagreeing. Could
this just be one of those things that depends upon one's own environment
and practices?
In regards to the comment by Roy: while maybe it is a common practice, here
we NEVER restore the same database on the same server under a different
name... and we are currently trying to get all connections on the web group's
servers to use the same connection string (even the same initial catalog)
since we were told this would affect our pooling (I've heard many differing
opinions on this as well). We also tend to frown on having the same stored
proc in multiple databases... what compelling reason would there be to do
this? I'm really trying to simplify this environment, not make it
harder... I've had a monster thrown at me that I had nothing to do with
creating and I know it's going to take time.... BUT I don't want to
"improve" something now under false info just to fix it properly again.
So I'm wondering if things in our environment are broken up too much.
What guidelines do you all use to determine what a database should consist
of? When do you start a separate database vs. just adding more tables to an
existing one? Here we have dbs set up in a strictly "categorical" sense:
one for ordering products... another one for storing the actual products
(since more than one ordering system uses the same products), another one
for keeping track of membership info, and another one for driving our
website....
I tend to wonder if the breakup into smaller dbs is just causing more
headache than it's worth, especially since there is now so much inter-db
chatter going on. Maybe that's an indication as to whether they should be
broken up or not...
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:u2jLdQ6bGHA.5116@.TK2MSFTNGP04.phx.gbl...
> [snip]
|||

I believe the main reason people have multiple dbs with similar schemas /
objects is security. There are just times when a db may host data for
multiple clients and it is a client rule that their data can not be housed
in the same db as someone else's. In general, if you have tables in one db
that you always query from another, it warrants the question of why. But
there are no hard and fast rules; each application has its own requirements
and it is up to you to decide what is best for your environment. As for
qualification, I have yet to come across a situation where fully qualifying
(db.owner.object) all objects makes sense when the objects are in the same
db. I think there are always more reasons not to do it than to do it. If you
happen to have a business requirement for it then by all means do it. If
not, it will probably byte you somewhere down the line. As for owner
qualification, I will leave you with this kb, for performance reasons alone:
http://support.microsoft.com/?id=263889
--
Andrew J. Kelly SQL MVP
"Tim Greenwood" <tim_greenwood A-T yahoo D-O-T com> wrote in message
news:uLXBzY6bGHA.3484@.TK2MSFTNGP03.phx.gbl...
> [snip]
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:e4DWRJ7bGHA.1204@.TK2MSFTNGP02.phx.gbl...
> [snip]
Here's the issue in that KB:
When making an RPC call from client code, you should owner-qualify the
procedure name if the procedure is in another schema. If you don't, it will
cause a brief compile lock as the session starts to compile a plan for the
stored procedure, discovers that one already exists, and then decides to use
the existing plan. This compile lock, while not a big resource hog, can
limit the total throughput of invocations of the procedure because only one
session can own the compile lock at a time. The real problem is not
the cache miss, or even a batch compilation. The issue is that this
situation causes a compile lock on the target procedure, and so instead of
just wasting a bit of CPU, it makes sessions "wait in line" to execute the
procedure.
This KB cannot support a general performance best practice of
schema-qualifying object names in stored procedures, views or dynamic SQL.
That KB only applies to how stored procedures are invoked, not how they are
coded internally, and thus only applies to client code, not stored
procedures, views or dynamic SQL. And it is not even clear if this issue
exists on 2005, where a user's default schema can, and usually should, be
the dbo schema. When the user's default schema is dbo, the initial lookup
for the plan may succeed.
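If you want to check whether you are actually hitting this, the blocking
pattern that KB describes is visible in sysprocesses (a rough sketch):

-- Sessions queued on a procedure's compile lock report a waitresource of
-- the form "TAB: dbid:objid [[COMPILE]]" (per KB 263889).
SELECT spid, blocked, waitresource
FROM master.dbo.sysprocesses
WHERE waitresource LIKE '%COMPILE%'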
David

|||

Yes, the KB mainly talks about locks during compile time, and I have seen
that bite more people than I can count. As for general owner qualification,
how could it not help? For one, it removes any ambiguity and the chance that
someone will try to access the incorrect object by mistake if they happen to
have two with different owners. But I have been told by members of the dev
team that there are extra lines of code executed when you do not specify the
owner. It makes sense, as it has to do some extra processing and security
code execution. One of the MVPs did a test a while back on simple calls
with and without owner-qualifying the objects in a loop. I don't have the
results handy but there was a definite difference. While this may not cause
performance issues for most people, it is just one more thing you can do to
take this out of the loop. Why would you not want to qualify it?
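That sort of test is easy enough to repeat yourself; a crude sketch
(substitute a cheap proc of your own that does no real work):

-- Time N calls with and without the owner qualifier and compare.
DECLARE @i int, @t0 datetime
SELECT @i = 0, @t0 = GETDATE()
WHILE @i < 10000
BEGIN
    EXEC dbo.MyProc          -- owner-qualified
    SET @i = @i + 1
END
PRINT 'qualified ms: ' + CONVERT(varchar(12), DATEDIFF(ms, @t0, GETDATE()))

SELECT @i = 0, @t0 = GETDATE()
WHILE @i < 10000
BEGIN
    EXEC MyProc              -- unqualified: extra name resolution each call
    SET @i = @i + 1
END
PRINT 'unqualified ms: ' + CONVERT(varchar(12), DATEDIFF(ms, @t0, GETDATE()))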
--
Andrew J. Kelly SQL MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:uEjDkt%23bGHA.3348@.TK2MSFTNGP03.phx.gbl...
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:e4DWRJ7bGHA.1204@.TK2MSFTNGP02.phx.gbl...
>>I believe the main reasons why people have multiple dbs with similar
>>schemas / objects are due to security reasons. There are just times when a
>>db may host data for multiple clients and it is a client rule that their
>>data can not be housed int he same db as someone else's. IN general if
>>you have tables in one db that you always query from another it warrants
>>the question of why? But there are no hard and fast rules. Each
>>application has it's own requirements and it is up to you to decide what
>>is best for your environment. As for qualifications I have yet to come
>>across a situation where fully qualifying (db.owner.object) all objects
>>makes sense when the objects are in the same db. I think there are always
>>more reason why not to do it than to do it. If you happen to have a
>>business requirement for it then by all means do it. If not it will
>>probably byte you somewhere down the line. As for the owner qualification
>>I will leave you with this kb:
>> http://support.microsoft.com/?id=263889
>> for performance reasons alone.
> Here's the issue in that KB:
> When making an RPC call from client code you should owner-qualify the
> procedure name if the procedure is in another schema. If you don't it
> will cause a brief compile lock as the session starts to compile a plan
> for the stored procedure, discovers that one already exists, and then
> decides to use the existing plan. This compile lock, while it's not a big
> resouce hog, can limit the total throughput of invocations of the
> procedure because only one session can own the compile lock at a time.
> The real problem is not that the cache miss, or even a batch compilation.
> The issue is that this situation causes a compile lock on the target
> procedure, and so instead of just wasting a bit of CPU, it makes sessions
> "wait in line" to execute the procedure.
> This KB cannot support a general performance best-practice of
> schema-qualifiing object names in stored procedurs, views or dynamic SQL.
> That KB only applies to how stored procedures are invoked, not how they
> are coded internally, and thus only apply to client code, not stored
> procedures, views or dynamic SQL. And it is not even clear if this issue
> exists on 2005 when a user's default schema can, and usually should, be
> the dbo schema. When the user's default schema is dbo, the initial lookup
> for the plan may succeed.
>
> David
>
>
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:epssQZDcGHA.4224@.TK2MSFTNGP04.phx.gbl...
> [snip]
Ok, here's why I don't want to qualify objects.
Extra typing.
Objects cannot be easily moved between schemas.
Developers have to remember and correctly identify their default schema.
In all programming languages I like local things to be named with relative
names.
Not huge reasons. But if there's no real performance penalty, good enough.
David

|||

All of this is mouse testicles. It doesn't add up to anything anyone
is going to measure, let alone slow down a decent production
environment.
Rebuild your indexes and stats, and EXAMINE how data is pulled from the
database.
Odds are you can change an index or 3 and get your speed back.
Give us some ideas of how big your tables are. Look first at the
biggest tables, and the tables that are growing the fastest.
If you are inserting or changing large amounts of data, REALLY examine
your clustered indexes.
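For the basic maintenance pass, something like this (a sketch in 2000/2005-era
syntax, with a made-up table name):

-- Rebuild all indexes on a table and refresh its statistics.
DBCC DBREINDEX ('dbo.BigTable')
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN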
