
Using Monitoring Displays

Monitoring metric displays are available from the Monitor menu on the main console. The monitoring displays can be used during live operation of the application and with saved data (.hpjmeter) files.

HPjmeter monitoring metrics are organized as follows:

Monitor Code and/or CPU Activity Menu

Monitor Memory and/or Heap Activity Menu

Monitor Threads and/or Locks Menu

Monitor JVM and/or System Activity Menu


Monitor Code and/or CPU Activity Menu

Java Method HotSpots

Displays a sampling-based estimate of the CPU time consumed by individual Java methods.

Methods are listed from highest to lowest CPU usage by percentage over the session lifetime, with package, method, and method arguments displayed.

Each method's information is written over a graphical representation of the confidence interval calculated for that method.

Figure 8-3 Monitoring Metric: Java Methods HotSpots with Confidence Interval Graphically Displayed for Each Method

Guidelines
  • If this metric shows a large percentage of CPU time spent in just a few methods listed at the top, it indicates a performance problem, or at least a method whose performance might be improved.

  • When the top entry represents a single-digit percentage, a performance problem is unlikely unless the entry describes a method that you did not expect to see.

Details

You can turn off the display of package and argument information from the View menu.

HPjmeter reports select methods that are the root cause of high CPU-time usage, while excluding others that are rarely relevant. This technique reports methods that are most likely to need additional optimization.

A common optimization technique is to improve how a method calls helper methods. These helper methods are not included in the list.
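
For illustration only (the class and method names below are invented), a typical case looks like the following sketch: the reported hot method repeatedly calls a small helper, and hoisting the invariant call out of the loop is the kind of improvement this metric points to.

    public class PriceTable {
        // Hypothetical hot method as it might appear in the HotSpots list.
        double total(double[] prices) {
            double rate = taxRate();       // hoisted: previously called once per element
            double sum = 0.0;
            for (double p : prices) {
                sum += p * (1.0 + rate);
            }
            return sum;
        }

        // Small helper method; helpers like this are not shown in the list.
        private double taxRate() { return 0.08; }
    }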

The percentages are not absolute across the entire application, but are computed only with respect to the methods HPjmeter reports.

HPjmeter does not report small methods, which are frequently inlined, and methods outside your application, such as those in the java.* package. The goal is to help you zoom in on your core application logic, including use of helper methods and APIs.

The metric window reports “No hotspot detected at this time” until it detects a Method CPU hot spot, and then the metric data appears. The survey for hot spots often takes just a few seconds, but in some cases could take longer.

NOTE: The data collection for Java Method HotSpots may be significantly delayed if one or more of these conditions exist:
  • The application runs on a single CPU system.

  • The application does not consume a lot of CPU.

  • The application consumes CPU exclusively in non-profiled methods.

Restarting the application server or node agent will not improve data collection.

Thrown Exceptions

Displays thrown exception counts according to the exception type and the catching method. If you need stack trace information, refer to Thrown Exceptions with Stack Traces.

The integer is a count of how many times this exception has been thrown at this location and caught by this method. The percentage shows how often this exception is thrown relative to all detected thrown exceptions.
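
For example (the names below are invented), in the following sketch each failed parse increments the count reported for NumberFormatException as caught by readSetting:

    public class ConfigReader {
        // Each NumberFormatException thrown inside Integer.parseInt and caught
        // here is counted against this exception type and catching method.
        static int readSetting(String value) {
            try {
                return Integer.parseInt(value);   // throw location
            } catch (NumberFormatException e) {   // catching method: readSetting
                return -1;                        // each catch adds one to the count
            }
        }
    }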

HPjmeter collects and reports exceptions caught in classes that are instrumented, that is, classes that the JVM agent instrumentation rules have not excluded. To identify the JVM agent rules in effect, you can use the JVM agent verbose option.

HPjmeter does not collect or report exceptions that are caught in methods filtered out by the exclude JVM agent option.

The display shows “No thrown exception detected since the session opened.” until HPjmeter detects a thrown exception, at which time it displays the information.

The window shows events in a hierarchical tree.

The View menu allows you to control the information displayed in the window:

  • Select View->Show Percentages to alternately hide or show the percentage value of the total count for each exception, shown next to the count value.

  • Select View->Show Packages to alternately hide or show the Java package names; hiding them shortens the lines in the display.

The results are cumulative over the lifetime of the session.

Thrown Exceptions with Stack Traces

Enable this metric in the Session Preferences window only when you need information about where your application throws exceptions. To view this data, click Monitor->Code/CPU->Thrown Exceptions.

This metric displays thrown exception counts according to the exception type, the catching method, and the stack trace of the throwing location.

The integer is a count of how many times this exception has been thrown at this location and caught by this method. The percentage shows how often this exception is thrown relative to all detected thrown exceptions.

NOTE: Collecting the stack trace information could impair the performance of your application if the application throws a large number of exceptions during the session. To minimize the effect on your application, you can enable the Thrown Exceptions metric, which does not collect stack traces, when you start your session.

Figure 8-4 Monitoring Metric: Thrown Exceptions with Stack Traces


HPjmeter collects and reports exceptions caught in classes that are instrumented, that is, classes that the JVM agent instrumentation rules have not excluded. To identify the JVM agent rules in effect, you can use the JVM agent verbose option.

HPjmeter does not collect or report exceptions that are caught in methods filtered out by the exclude JVM agent option.

The display shows “No thrown exception detected since the session opened.” until HPjmeter detects a thrown exception, at which time it displays the information.

The window shows events in a hierarchical tree.

The View menu allows you to control the information displayed in the window:

  • Select View->Show Percentages to alternately hide or show the percentage value of the total count for each exception, shown next to the count value.

  • Select View->Show Packages to alternately hide or show the Java package names; hiding them shortens the lines in the display.

  • Select View->Show Stacktraces to alternately expand or collapse the throw location stack traces of all the exception nodes, or click a specific node to expand or collapse only its throw location stack trace.

The results are cumulative over the lifetime of the session.

Monitor Memory and/or Heap Activity Menu

Heap Monitor

Displays free and used memory sizes in the heap and garbage collection events over time. The used heap space includes live objects and dead objects that have not yet been collected as garbage. Specifically, this visualizer shows the heap in use by objects in the eden space and in the old and survivor generations, but does not include the permanent generation. (See Basic Garbage Collection Concepts if you are unfamiliar with these terms.)

This display indicates whether your application is doing many allocations, which typically correspond to the load level, or if your application is idle.

Figure 8-5 Monitoring Metric: Heap Monitor

Guidelines
  • Look for extra-wide garbage-collection bars, which correspond to garbage collection pauses. These could cause transient service-level objective violations.

    To reduce intermittent long garbage collection pauses, try changing the garbage collection algorithm with a JVM option. Refer to your JVM documentation.

  • If the garbage collection events still take a long time, it may indicate a paging problem where the physical memory available to the application is too small for the specified maximum heap size.

    The remedies include:

    • Decrease the maximum heap size, with a corresponding decrease in the maximum load supported by your application.

    • Remove other load from the system.

    • Install more physical memory.

  • When you select a high level of detail, 1 to 20 minutes, and the heap size does not reach its local maximum before a garbage collection happens, it could indicate excessive calls to System.gc(); a sketch follows this list. See Identifying Excessive Calls to System.gc().

  • When you select coarse granularity, 1 to 24 hours, you may notice the overall change of behavior in heap size and garbage collection pattern. This can help with understanding the correlation between the application load and the pressure on the heap.

    If there is plenty of gray in selected areas of the display, this means that the heap was too small for the load imposed on the application at that time.
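
A minimal sketch of the pattern described above, with invented names: an explicit System.gc() call requests a full collection before the heap reaches its local maximum.

    public class BatchJob {
        void processBatch(java.util.List<Object> records) {
            for (Object r : records) {
                handle(r);
            }
            System.gc();   // requests a full collection after every batch;
                           // usually unnecessary and often harmful to throughput
        }

        void handle(Object r) { /* application work */ }
    }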

Garbage Collections

Displays garbage collection events over the period that the application has been running and an estimated percentage of time spent in garbage collection. These events include collection of objects in the young, old, and survivor spaces of the heap. This display does not include objects in the permanent generation space. (See Basic Garbage Collection Concepts if you are unfamiliar with these terms.) When running your application with Java 5.0.12 or later or with Java 6.0.01 or later, the visualizer can show major versus minor garbage collections.

NOTE: For detailed garbage collection information, run your application with the -Xverbosegc or -Xloggc option and view the results in the GC viewer. See Obtaining Garbage Collection Data and Using Specialized Garbage Collection Displays for information on collecting and viewing garbage collection data in HPjmeter.
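
For example (the class name and file names below are placeholders, and exact option syntax varies by JVM version, so check your JVM documentation):

    java -Xverbosegc:file=gc.out -cp app.jar com.example.Main
    java -Xloggc:gc.log -cp app.jar com.example.Main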

Figure 8-6 Monitoring Metric: Garbage Collections

Guidelines
  • For a healthy heap, minor collections should dominate major garbage collections. If the number of minor collections is too small compared to the number of major garbage collections, the young generation of the heap may be too small.

  • If the heap size shown by garbage collections converges towards the heap limit, the application has run out of memory, or soon will run out.

  • If the old generation is too small, the application will run out of memory. If the total heap size is too large compared to available physical memory, thrashing occurs.

  • A value of 5 percent or less of time spent in garbage collection is acceptable. Values larger than 10 percent usually indicate an opportunity for improvement.

  • With a time span of more than one hour, you can identify possible memory leaks. See Determining the Severity of a Memory Leak.

Details

Each point on the graph represents the heap size after a garbage collection completes; it represents the amount of live memory at that time.

Frequent long garbage collections represent a potential problem, and will be coupled with a high percentage of time spent in garbage collection. This percentage is displayed in the lower right of the window.

GC Duration

Displays the duration of each garbage collection noted.

Guidelines

  • Expect collection times to vary with the size of the heap; the larger the heap, the longer a normal garbage collection takes.

  • Collection times that are shorter or longer than expected for the heap size can indicate that tuning garbage collection could improve performance.

Details

  • For HP Java 1.5.0.12 or later, or 6.0.01 or later, this visualizer distinguishes between major and minor garbage collections, such as full GC and scavenge.

Percentage of Time Spent in Garbage Collection

The percentage is an estimated value of the time spent in garbage collection.

Guidelines
  • The horizontal red line shows the current average percentage of time spent in garbage collection.

  • An almost steady value of 5 percent or less is considered low and acceptable.

  • Sustained values larger than 10 percent suggest room for improvement.

Details

Here are two possible ways to make improvements:

  • Tune the heap parameters for better performance. For HP HotSpot VM, run your application with the -Xverbosegc option and view the results in HPjmeter.

  • If the heap has already been tuned, you can decrease the application pressure on the heap, that is, decrease the rate of object allocations, by trying these alternatives (a sketch follows the list):

    • Reusing objects

    • Changing memory-inefficient algorithms

    Object allocation statistics can help identify areas for improvement.
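
As a sketch of the first alternative, with invented names and assuming single-threaded use, a reused StringBuilder avoids the intermediate String objects that repeated concatenation would allocate:

    public class ReportFormatter {
        private final StringBuilder line = new StringBuilder(128);  // reused buffer

        String formatRow(String name, int value) {
            line.setLength(0);            // reset instead of allocating a new buffer
            line.append(name).append('=').append(value);
            return line.toString();       // one String allocation per row
        }
    }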

Figure 8-7 Monitoring Metric: Percentage of Time Spent in Garbage Collection


Unfinalized Objects

Shows a measure of the objects that have not been finalized at each garbage collection during the monitoring period.

Guidelines

  • Escalating numbers of unfinalized objects can indicate that the finalizer queue is growing, with associated objects holding increasing space in the heap.

Details

  • Some or many of the objects in a finalizer queue may no longer be needed by the program and are waiting to be finalized and then collected.

  • Check to see if the finalize() method in your application is being called at appropriate times and frequency. (A sketch of a finalizer follows this list.)

  • Profiling with -Xeprof will help you to obtain details about the number of unused finalizers in the heap.

  • Use the monitoring or profiling thread histogram to check the state of the finalizer thread during the recorded period.
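
For illustration (the names below are invented), a class like the following can drive this metric up: every instance must pass through the finalizer queue before its memory can be reclaimed, so rapid allocation of such objects leaves a growing backlog of unfinalized objects.

    public class NativeHandle {
        private long handle;   // identifier for some external resource

        protected void finalize() throws Throwable {
            try {
                release(handle);   // cleanup is delayed until the finalizer thread runs
            } finally {
                super.finalize();
            }
        }

        private void release(long h) { /* free the external resource */ }
    }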

Allocated Object Statistics by Class

Shows object allocation statistics according to the object type allocated.

Guidelines

A typical large Java application allocates a lot of strings; this value can reach 25 percent of all allocations. If any other type, especially an application-defined type, approaches or exceeds such a value, it may indicate inefficient code.

Details

For those classes that are instrumented (visible through the JVM agent verbose flag), every object allocation in every method is instrumented to report allocations. However, sampling is used to minimize overhead, so the metric reports allocation percentages, not total allocation counts. These percentages are not absolute across the entire application, but are computed with respect to allocations in instrumented classes.

Sampling minimizes overhead and focuses attention on user code. To discover allocation statistics about application server classes, use the include and exclude filtering flags in the JVM agent options.

The reported data is cumulative over the lifetime of the session, and accuracy will improve as the session length increases.

Figure 8-8 Monitoring Metric: Allocated Object Statistics by Class


Allocating Method Statistics

Shows the methods that allocate the most objects.

This metric is useful when you choose to decrease heap pressure by modifying the application code.

Guidelines

Methods listed at the top should become the primary candidates for optimization.

Details

For those classes that are instrumented (visible through the JVM agent verbose flag), every object allocation in every method is instrumented to report allocations. However, sampling is used to minimize overhead, so the metric reports allocation percentages, not total allocation counts. These percentages are not absolute across the entire application, but are computed with respect to allocations in instrumented classes.

Sampling minimizes overhead and focuses attention on user code. To discover allocation statistics about application server classes, use the include and exclude filtering flags in the JVM agent options.

The reported data is cumulative over the lifetime of the session, and accuracy will improve as the session length increases.

Figure 8-9 Monitoring Metric: Allocating Method Statistics


Current Live Heap Objects

Use this visualizer to obtain an immediate data summary of live objects in the heap each time that you click the Refresh Live Objects button. This can be especially useful when trying to understand unexpected behavior in memory usage.

The display shows information for the classes of live objects found. It does not show indirect references. See Table 8-1 “Data Shown in Current Live Heap Objects Visualizer”.

Figure 8-10 Monitoring Metric: Current Live Heap Objects (sorted by % Heap Used in descending order)

Table 8-1 Data Shown in Current Live Heap Objects Visualizer

Column Heading     Description
Class              Name of the class to which the objects belong
% Heap Used        Percent of the allocated heap used
Bytes              Cumulative size occupied by the objects (in bytes)
+/- First Bytes    Total change in the number of bytes held for this class since the first snapshot was taken
+/- Last Bytes     Change in the number of bytes held for this class since the last snapshot was taken (most recent increment)
Count              Number of current live instances of the class
+/- First Count    Total change in the number of objects held for this class since the first snapshot was taken
+/- Last Count     Change in the number of objects held for this class since the last snapshot was taken (most recent increment)

Details

When the heap is large with many objects, refreshing the snapshot will affect system performance more than refreshing from a smaller heap with fewer objects.

Sort by any of the data types by clicking the column heading in the Current Live Heap Objects table. Continue clicking the same column heading to toggle between ascending and descending order; numerical columns sort numerically and text columns sort alphabetically.

You can copy all or part of the data displayed into a temporary buffer, then paste or append it into a spreadsheet or other similar software using a keyboard shortcut.

To select a portion of the data, click and drag the cursor across the desired rows and columns of data. The selected rows change color. Then click Copy Selection to Buffer in the tool bar to capture the data.

Figure 8-11 Copying Selected Current Live Heap Objects Data into Buffer


To select all data for use in a spreadsheet, click Copy All to Buffer in the tool bar.

Figure 8-12 Copying All Current Live Heap Objects Data into Buffer


Click File->Save to capture all data as an ASCII text file that you can save onto your local machine.


Monitor Threads and/or Locks Menu

Thread Histogram

Displays thread states over time. Thread data arrives in time slices. For each time slice, color-coded bars represent the percentage of time the thread spent in each state. The reported states, illustrated in the sketch after these definitions, are:

Waiting

The thread has been suspended using the Object.wait() method.

Lock Contention

The thread is delayed while attempting to enter a Java monitor that is already acquired by another thread.

Running

All remaining cases.
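
A minimal sketch of how code maps to these states (the names below are invented): a consumer blocked in Object.wait() shows as Waiting, threads competing to enter the synchronized methods show as Lock Contention, and everything else shows as Running.

    public class WorkQueue {
        private final java.util.LinkedList<Object> items = new java.util.LinkedList<Object>();

        public synchronized Object take() throws InterruptedException {
            while (items.isEmpty()) {
                wait();                    // shows as Waiting
            }
            return items.removeFirst();    // shows as Running
        }

        public synchronized void put(Object o) {   // a blocked entry shows as Lock Contention
            items.addLast(o);
            notifyAll();
        }
    }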

Guidelines
  • Large amounts of red in the Thread Histogram indicate heavy lock contention, which usually signals a problem. On the other hand, large amounts of green indicate spare processing capacity for the involved threads.

  • When there is no load, the state for the threads doing the work on behalf of transactions should be waiting, and marked by the green color.

  • Threads terminating, either normally or because of uncaught exceptions, appear as rows that stop partway across the display.

  • Multiple short-lived threads appear as apparently blank rows in the display, while the number of displayed threads, shown at the bottom of the display, is large.

  • Lock Contention appears as red in the display.

  • Deadlocked threads appear as two or more threads spending all their time in lock contention (red) starting from a given time; this point identifies when the deadlock occurred. (See the sketch after this list.)
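
A classic sketch of the deadlock pattern (the names below are invented): each thread holds one lock while waiting for the other, so from that moment both threads show solid lock contention in the histogram.

    public class DeadlockDemo {
        static final Object lockA = new Object();
        static final Object lockB = new Object();

        public static void main(String[] args) {
            new Thread(new Runnable() {
                public void run() {
                    synchronized (lockA) {
                        pause();
                        synchronized (lockB) { }   // waits forever for lockB
                    }
                }
            }).start();
            synchronized (lockB) {
                pause();
                synchronized (lockA) { }           // waits forever for lockA
            }
        }

        static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        }
    }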

Details

For each time slice, represented by a small portion of the X-axis, the display along the Y-axis shows the percentage of the time slice that the thread spent in each state. It represents a stacked bar graph for the time slice.

Figure 8-13 Monitoring Metric: Thread Histogram


Lock Contention

Provides lock contention statistics.

The percentages for each method represent how much of the total lock contention observed occurred in that method. For example, if there was a single instance of lock contention during a run, that method would show 100 percent. Therefore, methods that show a high percentage of lock contention may not be a problem unless you see a significant amount of lock contention in your application.

Lock contention can be detected either in synchronized methods, or in methods that contain synchronized blocks.
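
Both forms appear in the sketch below (the names are invented):

    public class Counter {
        private int total;

        public synchronized void add(int n) {   // contention detected on the synchronized method
            total += n;
        }

        public int snapshot() {
            synchronized (this) {               // contention detected on the synchronized block
                return total;
            }
        }
    }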

Guidelines
  • Lock contention in a running application does not necessarily indicate a problem.

  • If you suspect a lock contention problem with your application, you should look more closely at the highest-ranked methods in the Lock Contention display.

  • The Thread Histogram can also help you determine if there is significant lock contention. Other system-level tools can also provide information to determine if there is excessive lock contention.

Details

This metric uses sampling to determine the level of lock contention. Therefore, this display shows percentages of time wasted over the sampling period, not actual time wasted in lock contention.

The reported data is cumulative over the lifetime of the session.

Figure 8-14 Monitoring Metric: Lock Contention


Considerations When Comparing Lock Contention and Thread Histogram Metrics:

Lock contention data is sampled less frequently than thread histogram data. When there is lock contention with a short lifespan, a small amount of lock contention might appear in the Thread Histogram, but not be shown in the Lock Contention percentages. This can happen when the contention occurs during a time when the Thread Histogram sample is being taken, but the lock contention sample is not.

Monitor JVM and/or System Activity Menu

Method Compilation Count

Displays a list of all the methods compiled from the time the session was opened, showing the number of times a particular method was compiled.

The metric window reports “No methods compiled since the session opened.” until a method compilation occurs, and then the metric data appears.

Guidelines
  • The normal values for this metric are single-digit numbers.

  • If the top item or items show a much larger value than the rest of the entries, and the value grows constantly, it suggests excessive method compilation.

  • Normally, a method is compiled once or just a few times, which results in a very flat profile, with none of the entries showing large numbers.

    However, a JVM may have a performance problem in which a certain method, or methods, is compiled repeatedly. Such a problem manifests itself in one entry clearly dominating the list and showing constant growth over time.

Figure 8-15 Monitoring Metric: Method Compilation Count


Method Compilation Frequency

Produces a graph of the compilation frequency, as a companion to the Method Compilation Count metric. The Method Compilation Frequency metric shows how much effort the JVM is spending on method compilation.

Guidelines
  • A typical profile shows many compilations as a Java application starts up; the number usually drops to a small value as the application reaches a steady state.

Figure 8-16 Monitoring Metric: Method Compilation Frequency


Loaded Classes

HPjmeter displays the number of classes loaded into memory for processing. This number usually stabilizes over time as processing progresses.

Guidelines
  • The number of classes loaded at any one time tends to oscillate within a narrow range; typically less than 2 percent of all loaded classes will be unloaded or reloaded during application processing.

  • If the number of loaded classes grows constantly, new classes, possibly dynamically created, may eventually fill the available memory and cause the application to crash. (See the sketch after this list.)
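
A deliberately pathological sketch (the jar path and class name below are placeholders): each iteration creates a new class loader and keeps its loaded class reachable, so the loaded-class count grows without bound.

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.ArrayList;
    import java.util.List;

    public class ClassGrowth {
        public static void main(String[] args) throws Exception {
            URL[] path = { new URL("file:plugins/plugin.jar") };      // placeholder path
            List<Class<?>> pinned = new ArrayList<Class<?>>();
            while (true) {
                URLClassLoader loader = new URLClassLoader(path, null);
                pinned.add(loader.loadClass("com.example.Plugin"));   // placeholder class
            }
        }
    }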

Figure 8-17 Monitoring Metric: Loaded Classes


Percent (%) CPU Utilization

Displays total system and process CPU consumption.

Guidelines

  • Percentages are displayed on a scale of the number of CPUs x 100%. For example, on a 4-CPU system, full utilization of all CPUs is displayed as 400%.

  • Excessive use of CPU resources (greater than 80% of the total number of CPUs) may indicate that the application load limit is close, even though the application may appear to be performing well. At higher consumption rates, CPU consumption can become a bottleneck to good performance.

  • When system CPU consumption is significantly higher than process consumption, “alien” or undesired processes may be using CPU resources that the preferred application could be using. It may also indicate that the application is making excessive use of operating system kernel services.

Figure 8-18 Percent CPU Utilization for System and Processes
