Friday, February 24, 2012

Is there any impact on the server of increasing the LOG file size?

Thanks to all participants.

I am using SQL Server 2000 with replication set up between two locations. The log size on the publisher grows to 25 times the data file size; that is, an 80 MB data file maintains a 2 GB log file, and it is the same for all five companies' databases running on the same Windows 2000 Advanced Server machine.

Since last week the server has randomly been getting disconnected from user applications, and at those times a few tables cannot be opened on the server.

Can anyone give a reason why SQL Server 2000 misbehaves in this way?

Thanks.|||If you aren't using the log file for recovery purposes, I'd suggest you set the database recovery model to SIMPLE. That will allow SQL Server to jettison the log data once it has been replicated.
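Roughly, a sketch of what that change looks like in Transact-SQL on SQL Server 2000 (the database name CompanyDB below is only a placeholder):

-- check how full each database's log currently is
DBCC SQLPERF(LOGSPACE)

-- switch the database to the SIMPLE recovery model
ALTER DATABASE CompanyDB SET RECOVERY SIMPLE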

SQL Server has a lot of tunable options. It guesses the appropriate settings correctly nearly all of the time (probably more than 99.95% of the time, in my experience), but occasionally it gets something misadjusted.

This kind of problem is probably unavoidable (although its frequency can be reduced), and in those few instances you need a bit of expert help to figure out what is wrong and how to fix it.

-PatP|||Thanks for your suggestion. I shrank the log file down to 100 MB and set the database to the simple recovery model on 02-01-2004. Since then, for the last three days, nothing bad has happened with the user applications. Is there any correlation between the problem and the log size, or is there anything more to be found elsewhere?
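(A sketch of the commands that shrink likely involved; the database and logical log file names here are placeholders, and the real names can be listed with sp_helpfile:)

USE CompanyDB
EXEC sp_helpfile                          -- list logical data/log file names
DBCC SHRINKFILE (CompanyDB_Log, 100)      -- shrink the log to a 100 MB target
ALTER DATABASE CompanyDB SET RECOVERY SIMPLE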

Where can I find the reason for the original disconnection problem, which has not occurred for the last three working days?

Thanks

R.Mall|||PatP Sir,

I am waiting for your reply regarding the following.

Thanks for your suggestion. I shrank the log file down to 100 MB and set the database to the simple recovery model on 02-01-2004. Since then, for the last three days, nothing bad has happened with the user applications. Is there any correlation between the problem and the log size, or is there anything more to be found elsewhere?

Where can I find the reason for the original disconnection problem, which has not occurred for the last three working days?

Thanks

R.Mall|||There may be some connection, but I don't know of any easy way to determine whether there is one. Without a lot more information about your configuration (NT, SQL, WAN network, client machine configurations, etc.), I'm at a loss to help you more.

The problem lies in the apparent complexity of your configuration. With multiple SQL Server databases/instances, many clients, different application loads, and so on, the number of items that could interact rapidly gets beyond what I can diagnose remotely.

-PatP|||PatP Sir,

Please look over my setup-

Hardware, OS and Application
Servers-Two
Intel Original server board - 7210TP
CPU HT 2.8GHz, P-IV
HDD - 80GB SATA
1GB Ram - DDR
ATX Cabinet
OS - Windows 2000 Advanced Server with Service Pack 4.
Database - MSSQL Server 2000 with SP3.
Replication - Merge
No. of databases - 6 (1 master and 5 company databases)
Replicated no. of tables - (42*5) + 12 = 222
Publisher - One
Subscriber - One (Pull Subscription)
Transaction volume - Medium.

Clients-45+ (10 at subscriber site and 35+ at publisher site)
Intel Alex ATX motherboard
CPU Celeron 400
LAN Card PCI 32Bit
128MB Ram
40GB HDD
OS - Windows 98
Application form - EXE and DLLs (one EXE + 70+ DLLs)
No. of applications - One.
No. of modules - 11
Application base - PowerBuilder 7.0

ROUTER
ZyXEL ISDN dial-up router, model 100IH
Connectivity - ISDN dial-up
If you need any more information, let me know and I will send it immediately.

Thanks

R.Mall|||Dear PatP Sir,

Please see the details.|||All I can do is offer guesses, based on the information that you've provided. To really do a good job, it would take something on the order of 500 pages of information, and probably 40 hours of effort on my part. That isn't something I'm willing to do at this point in time.

A single ISDN line can't always support ten users plus replication on that scale. While it can sometimes do the job, there are other times when it will definitely be a performance bottleneck. This can be mitigated by how you configure the servers to use the connection, but it is still potentially the largest problem that I see.

A busy SQL Server would consider 1 GB of RAM an hors d'oeuvre. I routinely configure busy servers with 8 GB, then let someone try to talk me down. For 45 users, I might drop as low as 2 GB, but not without considerable discussion first. I really don't think you have enough RAM in the servers.
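As an aside, a quick sketch of how to check what SQL Server 2000 is currently allowed to use (values are in MB):

-- advanced options must be visible before the memory settings appear
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- inspect the configured memory limits
EXEC sp_configure 'min server memory (MB)'
EXEC sp_configure 'max server memory (MB)'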

PowerBuilder's DataWindows are very easy for developers to use, but they can be "noisy" from a database traffic perspective. As one example, if a DataWindow has many rows and the default update method is used after a single row is changed, the entire contents of the DataWindow can be written back to the server. Depending on how frequently you've got your subscriber set to "pull" a merge, there can be a lot of data queued up (even with SIMPLE recovery)!

Your problem is probably complex. I don't have the time or resources to do a good job helping you fix it remotely. You really need to get either a dedicated team to help you solve this remotely, or someone to come on site (which would be much better in my opinion). I don't think that casual help via a forum is going to be enough to help you solve this problem.

-PatP|||Dear PatP Sir,

Thanks for your valuable comments on my issue; I will follow your guidelines on it.

I think you have methods for analyzing the problem, but due to lack of time and information you can't do it.

Can you help me in another way? Just tell me the analysis methods, and point me to help material, a website or a hard-copy book, so I can do it myself.

Thanks

R.Mall|||First order of business: get VERY familiar with the NT Performance Monitor. For this kind of problem, it can provide at least 90% of the information you need to diagnose the problem.

Just as an observation, I wouldn't recommend using a machine to monitor its own performance if I could use a second machine to do it. Running the Performance Monitor can be rather demanding, especially if you are monitoring many counters at once.

Use Performance Monitor to monitor the server at a time when performance is acceptable, and at times when the performance is not acceptable. At first, you'll want to record all of the Performance counters that seem vaguely relevant.

The whole point of this process is to determine where the "weak link" is in your system. It shouldn't take a lot of measurements to trace the problem down to just a few counters (certainly fewer than ten, probably only three). At first you'll need to track all of the memory, disk, CPU, and network counters... Very shortly you should be able to identify which counters change drastically as performance degrades. Those are the counters that you need to watch carefully going forward.
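As a side note (an illustrative sketch, not a substitute for Performance Monitor), SQL Server 2000 also exposes its own counters through the master.dbo.sysperfinfo system table, which can be polled from a second machine:

-- poll SQL Server's internal performance counters
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio', 'User Connections')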

In the beginning, what I'd recommend is to save the performance counters into a file. Run a few (three to five) baseline monitoring sessions at different times of the day, including at least one with no users online at all. When your server performance starts to degrade, run another stored trace. The worse the performance gets, the more important getting these traces will be.

When you have several traces stored, you'll need to start comparing them. Look for counters whose values are very similar when performance is good and when performance is bad. These are "background noise" that you can safely filter out.

As you begin to see which traces change drastically when performance changes, you'll start to understand where the bottleneck is in your system. This is the area where you need to focus your attention, but be forewarned that there can be more than one bottleneck, and sometimes fixing one exposes a completely different one.

As an example of this, I think that your servers are severely RAM starved. I think that they need at least twice as much memory, and quite possibly more than that. However, when you fix that problem, you might well find that something else is also a bottleneck, but due to the RAM problems it was never exposed.

To put this in a "real world" context, if you have a truck that's in bad shape and you need to move something very heavy, you might decide to work on the truck first because you know it can't do the job in its present condition. You tune up the engine (new spark plugs and rotor), only to discover that you also need a carburetor overhaul that was masked by the need for the tune-up. Once you get the engine running well, you discover there isn't enough transmission fluid. As you fix one problem, it exposes another problem that has been there all along, but that wasn't serious enough to be an issue.

Don't let my comments make you depressed. Your problem is definitely something that can be fixed. I just don't want you to think you'll go in with a "magic bullet", fire one shot and the world will be lovely again. This is a problem that probably took a while to become an issue, and it will take some time and effort to fix. Actually, in my mind performance monitoring and management are part of the ongoing job... The issues are always there, they just don't always get the attention that they should!

-PatP|||PatP sir,

Many thanks for your valuable suggestions; I will proceed accordingly and hope to resolve the issue with your help.

I would also like to thank you for your advice some time back, when I was going around in circles with replication 4 months ago.

Thanks

R.Mall|||The amount of activity that can be generated by the number of users you indicated cannot possibly be a source of concern for your hardware configuration. If you're experiencing "disconnects", it's probably due to client timeouts, which are a direct result of the host CPU being too busy to process subsequent requests. That, in turn, is a direct effect of under-utilization of indexing capabilities (this statement is dictated purely by the amount of info you provided, and may easily be refuted if the presence of indexes is indicated). Merge replication does not exhibit this kind of behavior with 45+ concurrent connections. In fact, it doesn't manifest itself like this even with 450+ concurrent connections and 20+ runnable processes (I bet you have fewer than that).

I think you need to analyze the execution plans of the most frequently invoked statements and concentrate on optimizing them before you bury yourself in perfmon counters.
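A minimal sketch of how to inspect a plan from Query Analyzer on SQL Server 2000 (the query and table names below are placeholders):

-- show the estimated plan as text instead of executing the statement
SET SHOWPLAN_TEXT ON
GO
SELECT * FROM Orders WHERE CustomerName = 'Smith'   -- placeholder query
GO
SET SHOWPLAN_TEXT OFF
GO

-- or execute the query and report logical/physical reads per table
SET STATISTICS IO ON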
