PAL Tool for Windows 2008

This counter should always be below 1; this analysis checks for values above 1. If all of the memory analyses are raising alerts at the same time, this may indicate that the system is running out of memory. This value includes only current physical pages and does not include any virtual memory pages that are not currently resident. It does equal the System Cache value shown in Task Manager. As a result, this value may be smaller than the actual amount of virtual memory in use by the file system cache.

This counter is the primary indicator of processor activity and displays the average percentage of busy time observed during the sample interval; note that this is the value for the most recent interval only, not a long-term average. It is calculated by measuring the time the idle thread is active during the sample interval and subtracting that time from the interval duration. Each processor has an idle thread that consumes cycles when no other threads are ready to run.
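The calculation above can be sketched as a small function. The function name and millisecond units are illustrative, not part of Performance Monitor:

```python
def percent_processor_time(idle_time_ms, interval_ms):
    """Busy time is the sample interval minus the time the idle
    thread was active, expressed as a percentage of the interval."""
    busy_ms = interval_ms - idle_time_ms
    return 100.0 * busy_ms / interval_ms
```

For example, 200 ms of idle-thread time in a 1,000 ms interval yields 80 percent busy.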

It is calculated by monitoring the time that the service is inactive and subtracting that value from 100 percent. This analysis checks for utilization greater than 60 percent on each individual processor. If utilization is high, determine whether it is high user-mode CPU or high privileged-mode CPU.
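That check can be expressed roughly as follows; the 60 percent default mirrors the analysis described above, while the function name itself is hypothetical:

```python
def classify_high_cpu(total_pct, privileged_pct, threshold=60):
    """Return None when the processor is below the threshold;
    otherwise indicate whether the load is mostly user-mode or
    privileged-mode (user time = total time - privileged time)."""
    if total_pct <= threshold:
        return None
    user_pct = total_pct - privileged_pct
    return "privileged" if privileged_pct > user_pct else "user"
```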

If a user-mode processor bottleneck is suspected, then consider using a process profiler to analyze the functions causing the high CPU consumption. Unlike the disk counters, this counter shows ready threads only, not threads that are running.

There is a single queue for processor time even on computers with multiple processors. Therefore, if a computer has multiple processors, divide this value by the number of processors servicing the workload. A sustained processor queue of less than 10 threads per processor is normally acceptable, depending on the workload. This analysis determines whether the average processor queue length exceeds the number of processors; if so, this could indicate a processor bottleneck.
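A minimal sketch of this normalization and the bottleneck check, with illustrative names:

```python
def queue_per_processor(processor_queue_length, num_processors):
    """Perfmon reports a single ready-thread queue for the whole
    system, so divide by the number of processors."""
    return processor_queue_length / num_processors

def queue_indicates_bottleneck(processor_queue_length, num_processors):
    """Flag when the average queue exceeds the processor count,
    as described in the analysis above."""
    return processor_queue_length > num_processors
```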

The processor queue is the collection of threads that are ready but not able to execute because another active thread is currently executing. A sustained or recurring queue of more threads than processors is a good indication of a processor bottleneck. There is a single queue for processor time, even on multiprocessor computers. If the CPU is very busy (90 percent or higher utilization) and the processor queue length (PQL) average is consistently higher than the number of processors, then you may have a processor bottleneck that could benefit from additional CPUs.

Alternatively, you could reduce the number of threads and queue work at the application level. This causes less context switching, which is good for reducing CPU load. The common reason for a high PQL with low CPU utilization is that requests for processor time arrive randomly and threads demand irregular amounts of time from the processor.

This means that the processor is not a bottleneck; instead, it is your threading logic that needs to be improved. This counter indicates the percentage of time a thread runs in privileged mode. When a Windows system service is called, the service often runs in privileged mode to gain access to system-private data. Such data is protected from access by threads executing in user mode.

Calls to the system can be explicit or implicit, such as page faults or interrupts. Unlike some early operating systems, Windows uses process boundaries for subsystem protection in addition to the traditional protection of user and privileged modes.

Some work done by Windows on behalf of the application might appear in other subsystem processes in addition to the privileged time in the process. A context switch happens when a higher-priority thread preempts a lower-priority thread that is currently running, or when a high-priority thread blocks. High levels of context switching can occur when many threads share the same priority level.

This often indicates that too many threads are competing for the processors on the system. If you do not see much processor utilization and you see very low levels of context switching, it could indicate that threads are blocked. As a general rule, context-switching rates of less than 5,000 per second per processor are not worth worrying about. If context-switching rates exceed 15,000 per second per processor, then there is a constraint.

This analysis checks for high CPU, high privileged-mode CPU, and high system context switches per second (greater than 5,000 per processor) all occurring at the same time. If high context switching is occurring, reduce the number of threads and processes running on the system. This counter has two possible values: normal (0) or exceeded (1). This analysis checks for a value of 1; if found, BizTalk has exceeded the threshold for the number of database sessions permitted.
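Those per-processor thresholds can be applied mechanically. The following sketch uses the 5,000 and 15,000 figures quoted above; the function name is illustrative:

```python
def classify_context_switches(switches_per_sec, num_processors):
    """Classify a system-wide context-switch rate using the
    per-processor thresholds from the analysis above."""
    per_proc = switches_per_sec / num_processors
    if per_proc < 5000:
        return "ok"
    if per_proc > 15000:
        return "constrained"
    return "elevated"
```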

The idle database sessions in the common per-host session pool do not add to this count, and this check is made strictly on the number of sessions actually being used by the host instance. This option is disabled by default; typically, this setting should only be enabled if the database server is a bottleneck or for low-end database servers in the BizTalk Server system.

You can monitor the number of active database connections by using the database session performance counter under the BizTalk:Message Agent performance object category. This parameter only affects outbound message throttling.

Enter a value of 0 to disable throttling that is based on the number of database sessions. The default value is 0. This counter refers to the number of messages in the database queues that this process has published. This value is measured by the number of items in the queue tables for all hosts and the number of items in the spool and tracking tables.

The queues include the work queue, the state queue, and the suspended queue. If a process is publishing to multiple queues, this counter reflects the weighted average of all the queues. If the host is restarted, statistics held in memory are lost. Since some overhead is involved, BizTalk Server resumes gathering statistics only after a minimum number of publishes, with 5 percent of the total publishes occurring within the restarted host process.

This counter is set to a value of 1 if either of the conditions listed for the Message count in database threshold occurs. By default, the host Message count in database throttling threshold is set to 50,000, which triggers a throttling condition under the following circumstances:

The total number of messages published by the host instance to the work, state, and suspended queues of the subscribing hosts exceeds 50,000. Since suspended messages are included in the Message count in database calculation, throttling of message publishing can occur even if the BizTalk server is experiencing low or no load. If this occurs, consider a course of action that will reduce the number of messages in the database.

For example, ensure the BizTalk SQL Server jobs are running without error, and use the Group Hub page in the BizTalk Administration console to determine whether message buildup is caused by large numbers of suspended messages. This number does not include messages retrieved from the database but still waiting for delivery in the in-memory queue.
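The trigger described above can be sketched as a simple check. The 50,000 default comes from the text; the helper function itself is hypothetical:

```python
def message_count_in_db_exceeded(work, state, suspended, threshold=50000):
    """Sum the subscribing hosts' work, state, and suspended queues.
    Suspended messages count toward the total, which is why this
    throttling can fire even under low or no load."""
    return (work + state + suspended) > threshold
```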

You can monitor the number of in-process messages by using the In-process message count performance counter under the BizTalk:Message Agent performance object category. This parameter provides a hint to the throttling mechanism when considering throttling conditions. The actual threshold is subject to self-tuning; you can verify the actual threshold by monitoring the In-process message count performance counter.

This parameter can be set to a smaller value for large-message scenarios, where either the average message size is high or the processing of messages requires a large amount of memory. This would be evident if a scenario experiences memory-based throttling too often and the memory threshold gets auto-adjusted to a substantially low value.

Such behavior would indicate that the outbound transport should process fewer messages concurrently to avoid excessive memory usage. Also, for scenarios where the adapter is more efficient when processing a few messages at a time (for example, when sending to a server that limits concurrent connections), this parameter may be tuned to a lower value than the default. This analysis checks the High In-Process Message Count counter to determine whether this kind of throttling is occurring.

The Rate overdrive factor (percent) parameter is configurable on the Message Processing Throttling Settings dialog box. Rate-based throttling for outbound messages is accomplished primarily by inducing a delay before removing the messages from the in-memory queue and delivering them to the End Point Manager (EPM) or orchestration engine for processing.

No other action is taken to accomplish rate-based throttling for outbound messages. Outbound throttling can cause delayed message delivery and messages may build up in the in-memory queue and cause de-queue threads to be blocked until the throttling condition is mitigated. When de-queue threads are blocked, no additional messages are pulled from the MessageBox into the in-memory queue for outbound delivery. This analysis checks for a value of 1 in the High Message Delivery Rate counter.

High message delivery rates can be caused by high processing complexity, slow outbound adapters, or a momentary shortage of system resources. The BizTalk Process memory usage throttling threshold setting is the percentage of memory used compared to the sum of the working set size and total available virtual memory for the process, if a value from 1 through 100 is entered.

When a percentage value is specified, the process memory threshold is recalculated at regular intervals; it is computed based on the available memory to commit and the current process memory usage. This analysis checks for a value of 1 in the High Process Memory counter. If this occurs, try to determine the cause of the memory increase by using DebugDiag (see the references in the Memory Leak Detection analysis).

Note that it is normal for processes to consume a large portion of memory during startup, and this may initially appear as a memory leak; a true memory leak occurs when a process fails to release memory that it no longer needs, thereby reducing the amount of available memory over time.

High process memory throttling can occur if the batch to be published has steep memory requirements, or too many threads are processing messages. If the system appears to be over-throttling, consider increasing the value associated with the process memory usage threshold for the host and verify that the host instance does not generate an "out of memory" error.

If an "out of memory" error is raised after increasing the process memory usage threshold, then consider reducing the values for the Internal message queue size and In-process messages per CPU thresholds. This strategy is particularly relevant in large-message processing scenarios.

In addition, this value should be set low for scenarios with a large memory requirement per message. Setting a low value triggers throttling early and prevents a memory explosion within the process. The BizTalk Physical memory usage throttling threshold setting is the percentage of memory consumption compared to the total amount of available physical memory, if a value from 1 through 100 is entered.

This setting can also be the total amount of available physical memory in megabytes, if a value greater than 100 is entered. Enter a value of 0 to disable throttling based on physical memory usage.
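Interpreting this setting can be sketched as follows. The 0 / 1-100 / greater-than-100 semantics are from the description above; the function name is illustrative:

```python
def physical_memory_threshold_bytes(setting, total_physical_bytes):
    """0 disables throttling; 1-100 is a percentage of total
    physical memory; values above 100 are megabytes."""
    if setting == 0:
        return None  # throttling disabled
    if setting <= 100:
        return total_physical_bytes * setting // 100
    return setting * 1024 * 1024
```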

This analysis checks for a value of 1 in the High System Memory counter. Since this measures total system memory, a throttling condition may be triggered if non-BizTalk Server processes are consuming an extensive amount of system memory. If this threshold is exceeded, BizTalk Server will try to reduce the size of the EPM thread pool and the message agent thread pool. Thread-based throttling should be enabled in scenarios where high load can lead to the creation of a large number of threads.

This parameter affects both inbound and outbound throttling. Thread-based throttling is disabled by default. The user-specified value is used as a guideline, and the host may dynamically self-tune this threshold value based on the memory usage patterns and thread requirements of the process.

This analysis checks for a value of 1 in the High Thread Count counter. Consider adjusting the different thread pool sizes to ensure that the system does not create a large number of threads. This analysis can be correlated with the Context Switches per Second analysis to determine whether the operating system is saturated with too many threads, although in most cases high thread counts cause more contention on the backend database than on the BizTalk server.

For more information about modifying the thread pool sizes, see How to Modify the Default Host Throttling Settings in the references. BizTalk Inbound Latency Analysis: the average latency, in milliseconds, from when the messaging engine receives a document from the adapter until the time it is published to the MessageBox.

Reducing latency is important to some users of BizTalk; therefore, tracking how much time documents spend in the inbound adapter matters. Assuming a low-latency environment, this analysis checks whether a document spent more than 5 seconds in the inbound adapter.
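A trivial sketch of that check; the 5-second threshold comes from the text, while the names are hypothetical:

```python
def inbound_latency_alert(latency_ms, threshold_ms=5000):
    """Flag documents that spent more than 5 seconds in the
    inbound adapter, assuming a low-latency environment."""
    return latency_ms > threshold_ms
```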

This may indicate a processing delay in the transport of messages through inbound adapters in this host instance. If multiple inbound adapters exist in this host instance, then consider separating them into their own hosts to determine which inbound adapter has high latency. The BizTalk message delivery throttling state is one of the primary indicators of throttling.

It is a flag indicating whether the system is throttling message delivery, affecting XLANG message processing and outbound transports; the specific throttling condition is indicated by the numeric value of the counter. This performance counter is the number of attempted database connections that failed since the host instance started.

If the SQL Server service hosting the BizTalk databases becomes unavailable for any reason, the database cluster transfers resources from the active computer to the passive computer. During this failover process, the BizTalk Server service instances experience database connection failures and automatically restart to reconnect to the databases. The functioning database computer (previously the passive computer) begins processing the database connections after assuming the resources during failover.

When this occurs, the BizTalk Server runtime instance that catches the exception shuts down and then restarts every minute to check whether the database is available.
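The reconnect behavior can be sketched as a polling loop; this is an illustration of the pattern only, not the actual BizTalk implementation:

```python
import time

def wait_for_database(is_available, poll_seconds=60, max_checks=60):
    """Check every poll_seconds whether the database is reachable
    again, as the restarted runtime instance does after a failover.
    is_available is a callable returning True when connected."""
    for _ in range(max_checks):
        if is_available():
            return True
        time.sleep(poll_seconds)
    return False
```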

PAL requires the .NET Framework v2.0. One especially useful feature is the ability to preselect a time range within the log files. Note that the control does not pre-populate the date range from the log file. Using the drop-down menu, there are a number of different threshold files to choose from with which to analyze your data. This helps when PAL generates the report, especially where percentage-type calculations are used. An extremely useful feature of PAL is that you can edit the threshold files to add and edit counters and thresholds beyond what is defined by default.

To modify the template, click the Edit button, at which point you can browse your system for the counters you want to add. You also have the option to connect to a remote system to select counters.

Select your counter and click the Add button. In brief, the tool takes a Performance Monitor log file and analyzes it against known thresholds. The thresholds for the counters are based on those recommended by various Microsoft product teams and Microsoft support.

The tool includes many features and is customizable, allowing you to configure it to meet your specific needs. The first step is to create a Performance Monitor log file, as this will be the source of the data analyzed by PAL (more on this later). In addition, you will want to collect data over a representative period of time, such as one hour or one day.

Keep in mind that collecting Performance Monitor logs can become resource intensive if you collect a lot of counters over long periods of time. Notice below that you can analyze an entire log file or a partial log file based on time and date. Below is an example of the analysis templates that are available. As you can see, PAL is a multipurpose tool and can be used for analyzing the performance of many different Microsoft products. This screen lists every performance counter analyzed by the selected analysis template, and you also have the ability to edit it.

What I really find useful about this screen is that it lists all of the threshold figures used by Microsoft to determine whether a particular counter is exceeding its recommended threshold. When you create your Performance Monitor logs, I highly suggest you collect all of the counters listed in the analysis template you want to use, so you can take full advantage of PAL's analysis capabilities. Once you have selected an analysis template (SQL Server in my case), the next step is to answer some questions about your server.

The questions presented depend on which analysis template you have selected. Below, notice that there are five questions that you need to answer. This data is used when the analysis report is created later. Next, the wizard presents the Analysis Interval screen. Here, we tell PAL how we want to slice and dice the data; for example, we can see performance counter analysis for every 60-second period or for every one-hour period.
