Monitoring metric displays are available from the Monitor menu on the main console. The monitoring displays can be used during live operation of the application and with saved data (.hpjmeter) files. HPjmeter monitoring metrics are organized as follows:
Monitor Code and/or CPU Activity Menu
Monitor Memory and/or Heap Activity Menu
Monitor Threads and/or Locks Menu
Monitor JVM and/or System Activity Menu
Monitor Code and/or CPU Activity Menu
Displays a sampling-based estimate of the CPU
time consumed by individual Java methods. Methods are listed from highest to lowest CPU
usage by percentage; over a session lifetime; with package, method,
and method arguments displayed. Each method's information is written over
a graphical representation of the confidence interval calculated for that method.
If this metric shows a
large percentage of CPU time spent in just a few methods listed at
the top, it indicates an application performance problem, or at least a method whose performance might be improved. When the top entry represents
a single-digit percentage, a performance problem is unlikely unless
the entry describes a method that you did not expect to see.
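As an illustration only, here is the kind of method that can dominate such a list, along with a simple rework that reduces its CPU cost; the class, method, and helper names are invented for this sketch and are not part of HPjmeter.

```java
// Hypothetical hot spot: the expensive helper is re-evaluated on every iteration.
public class ReportBuilder {
    double slowTotal(double[] values) {
        double total = 0.0;
        for (int i = 0; i < values.length; i++) {
            total += values[i] * conversionFactor();   // helper called once per element
        }
        return total;
    }

    // Reworked version: the loop-invariant helper result is computed once.
    double fastTotal(double[] values) {
        double factor = conversionFactor();
        double total = 0.0;
        for (int i = 0; i < values.length; i++) {
            total += values[i] * factor;
        }
        return total;
    }

    private double conversionFactor() {
        // Stands in for an expensive computation or lookup.
        return Math.exp(Math.log(2.54));
    }
}
```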
You can turn off the display of package and argument
information from the View menu. HPjmeter reports select methods that are the root
cause of high CPU-time usage, while excluding others that are rarely
relevant. This technique reports methods that are most likely to need
additional optimization. A common optimization technique is to improve
how a method calls helper methods. These helper methods are not included
in the list. The percentages are not absolute across the entire
application, but are computed only with respect to the methods HPjmeter reports. HPjmeter does not report small methods, which
are frequently inlined, and methods outside your application, such
as those in the java.* package. The goal is to
help you zoom in on your core application logic, including use of
helper methods and APIs. The metric window reports “No hotspot detected
at this time” until it detects a Method CPU hot spot, and then
the metric data appears. The survey for hot spots often takes just
a few seconds, but in some cases could take longer.

Thrown Exceptions

Displays thrown exception counts according to
the exception type and the catching method. If you need stack trace
information, refer to Thrown Exceptions with Stack Traces. The integer is a count of how many times this exception has been thrown at this location and caught by this method. The percentage shows how often this exception is thrown relative to all detected thrown exceptions. HPjmeter collects and reports exceptions caught
in classes that are instrumented, that is, classes that the JVM agent
instrumentation rules have not excluded. To identify the JVM agent
rules in effect, you can use the JVM agent verbose option. HPjmeter does not collect or report exceptions
that are caught in methods filtered out by the exclude JVM agent option. The display shows “No thrown exception detected
since the session opened.” until HPjmeter detects a thrown
exception, at which time it displays the information. The window shows events in a hierarchical tree. The View menu allows you to control the information displayed in the window:
Select View > Show Percentages to alternately hide or show the percentage value of the total count for each exception, shown next to the count value.
Select View > Show Packages to alternately hide or show the Java package names, to shorten the lines in the display.
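For illustration, the following sketch shows code whose caught exceptions would appear in this display; the class and method names are hypothetical, and the metric would attribute the counts to the exception type and the catching method.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical example: each ParseException thrown here and caught in
// readConfig() is counted once under java.text.ParseException / readConfig().
public class ConfigReader {
    private final SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");

    // HPjmeter attributes the caught exceptions to this catching method.
    public Date readConfig(String rawDate) {
        try {
            return format.parse(rawDate);   // throws ParseException on bad input
        } catch (ParseException e) {
            return new Date();              // fall back to "now"; the exception is counted
        }
    }
}
```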
The results are cumulative over the lifetime of
the session.

Thrown Exceptions with Stack Traces

Enable this metric in the Session Preferences
window only when you need to get information about where your application throws exceptions. To view this data, click Monitor > Code/CPU > Thrown Exceptions. This metric displays thrown exception counts according
to the exception type, the catching method, and the stack trace of
the throwing location. The integer is a count of how many times this exception has been thrown at this location and caught by this method. The percentage shows how often this exception is thrown relative to all detected thrown exceptions.
NOTE: Collecting the stack trace information could impair the performance of your application if the application throws a large number of exceptions during the session. To minimize the effect on your application, you can enable the Thrown Exceptions metric, which does not collect stack traces, when you start your session.
HPjmeter collects and reports exceptions caught
in classes that are instrumented, that is, classes that the JVM agent
instrumentation rules have not excluded. To identify the JVM agent
rules in effect, you can use the JVM agent verbose option. HPjmeter does not collect or report exceptions
that are caught in methods filtered out by the exclude JVM agent option. The display shows “No thrown exception detected
since the session opened.” until HPjmeter detects a thrown
exception, at which time it displays the information. The window shows events in a hierarchical tree. The View menu allows you to control the information displayed in the window:
Select View > Show Percentages to alternately hide or show the percentage value of the total count for each exception, shown next to the count value.
Select View > Show Packages to alternately hide or show the Java package names, to shorten the lines in the display.
Select View > Show Stacktraces to alternately expand or collapse the throw-location stack traces of all the exception nodes, or click a specific node to expand or collapse its throw-location stack trace only.
The results are cumulative over the lifetime of
the session.

Monitor Memory and/or Heap Activity Menu
Displays free and used memory sizes in the heap
and garbage collection events over time. The used heap space includes
live objects and dead objects that have not yet been collected as
garbage. Specifically, this visualizer shows the heap in use by objects
in the eden space and in the old and survivor generations, but does
not include the permanent generation. (See Basic Garbage Collection Concepts if you are unfamiliar
with these terms.) This display indicates whether your application
is doing many allocations, which typically correspond to the load
level, or if your application is idle.
Look for extra-wide garbage-collection
bars, which correspond to garbage collection pauses. These could cause
transient service-level objective violations. To reduce intermittent long garbage collection pauses, try changing the garbage collection algorithm with a JVM option. Refer to your JVM documentation. If the garbage collection events still take a long time, it may indicate a paging problem, where the physical memory available to the application is too small for the specified maximum heap size. The remedies include:
Decrease the maximum heap size, with a corresponding decrease in the maximum load supported by your application.
Remove other load from the system.
Install more physical memory.
When you select a high
level of detail, 1 to 20 minutes, and the heap size does not go to
the local maximum before a garbage collection happens, it could indicate
excessive calls to System.gc(). See Identifying Excessive Calls to System.gc(). When you select coarse
granularity, 1 to 24 hours, you may notice the overall change of behavior
in heap size and garbage collection pattern. This can help with understanding
the correlation between the application load and the pressure on the
heap. If there is plenty of gray in selected
areas of the display, this means that the heap was too small for the
load imposed on the application at that time.
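The following sketch illustrates the excessive System.gc() pattern mentioned above; the class and loop are invented for illustration. Frequent explicit collection requests like this can force collections before the heap reaches its local maximum.

```java
// Illustrative anti-pattern: requesting a full collection after every unit of work.
public class BatchProcessor {
    public void processAll(java.util.List<Runnable> jobs) {
        for (Runnable job : jobs) {
            job.run();
            System.gc();   // explicit request; with frequent calls, the heap never
                           // fills before the next collection starts
        }
    }
}
```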
Displays garbage collection events over the period
that the application has been running and an estimated percentage
of time spent in garbage collection. These events include collection
from the young, old, and survivor objects in the heap. This display
does not include objects in the permanent generation space. (See Basic Garbage Collection Concepts if you are unfamiliar
with these terms.) When running your application with Java 5.0.12
or later or with Java 6.0.01 or later, the visualizer can show major
versus minor garbage collections.
For a healthy heap, minor
collections should dominate major garbage collections. If the number
of minor collections is too small compared to the number of major
garbage collections, the young generation of
the heap may be too small. If the heap size shown
by garbage collections converges towards the heap limit, the application
has run out of memory, or soon will run out. If the old generation
is too small, the application will run out of memory. If the total
heap size is too large compared to available physical memory, thrashing
occurs. A value of 5 percent or
less of time spent in garbage collection is acceptable. Values larger
than 10 percent usually indicate an opportunity for improvement. With a time span of more
than one hour, you can identify possible memory leaks. See Determining the Severity of a Memory Leak.
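As a hedged illustration of the kind of coding pattern behind such a leak, consider an unbounded cache that keeps the post-collection heap size climbing; the class and field names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical leak: entries are added per request but never removed, so the
// live heap measured after each garbage collection keeps growing.
public class SessionRegistry {
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    public static void remember(String sessionId) {
        CACHE.put(sessionId, new byte[8 * 1024]);   // retained for the life of the JVM
    }
}
```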
Each point on the graph represents the heap size
after a garbage collection completes; it represents the amount of
live memory at that time. Frequent long garbage collections represent a
potential problem, and will be coupled with a high percentage of time
spent in garbage collection. This percentage is displayed in the lower
right of the window.

Displays the duration of each garbage collection
noted.
Expect collection times to vary with the size of the
heap; the larger the heap, the longer a normal collection will take. Collection times that are shorter or longer than expected
for the heap size can indicate that tuning garbage collection could
improve performance.
For HP Java 1.5.0.12 and later or 6.0.01 or later,
this visualizer distinguishes between major and minor garbage collections
such as full GC and scavenge.
Percentage of Time Spent in Garbage Collection

The percentage is an estimated value of the time
spent in garbage collection. The horizontal red line
shows the current average percentage of time spent in garbage collection. An almost steady value
of 5 percent or less is considered low and acceptable. Sustained values larger
than 10 percent suggest room for improvement.
Here are two possible ways to make improvements:
Tune the heap parameters for better performance. For the HP HotSpot VM, run your application with the -Xverbosegc option and view the results in HPjmeter.
If the heap has already been tuned, decrease the application's pressure on the heap, that is, decrease the rate of object allocations, for example by changing memory-inefficient algorithms (see the sketch after this list). Object allocation statistics can help identify areas for improvement.
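A minimal sketch of the kind of memory-inefficient code referred to above; the class and method names are illustrative. Repeated String concatenation in a loop allocates a new String (and backing character array) on every pass, while a StringBuilder appends into one growing buffer.

```java
// Illustrative only: joinSlow() allocates several objects per iteration;
// joinFast() reuses a single StringBuilder buffer.
public class CsvJoiner {
    static String joinSlow(java.util.List<String> fields) {
        String line = "";
        for (String f : fields) {
            line = line + f + ",";          // new String (and char[]) every pass
        }
        return line;
    }

    static String joinFast(java.util.List<String> fields) {
        StringBuilder line = new StringBuilder();
        for (String f : fields) {
            line.append(f).append(',');     // appends into the same buffer
        }
        return line.toString();
    }
}
```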
Shows a measure of the objects that have not been
finalized at each garbage collection during the monitoring period.
Escalating numbers of unfinalized objects can indicate
that the finalizer queue is growing, with associated objects holding
increasing space in the heap.
Some or many of the objects in a finalizer queue may
no longer be needed by the program and are waiting to be finalized
and then collected. Check whether the finalize() method in your application is being called at appropriate times and with appropriate frequency. Profiling with -Xeprof will help you obtain details about the number of unused finalizers in the heap. Use the monitoring or profiling thread histogram to
check the state of the finalizer thread during the recorded period.
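The following is a hedged sketch of the kind of class that can swell the finalizer queue; the class name is hypothetical. The point is only that an overridden finalize() keeps each dead instance alive until the finalizer thread has processed it.

```java
// Hypothetical example: every dead FileSlot instance must wait for the finalizer
// thread before its memory can be reclaimed, so bursts of allocations can make
// the count of unfinalized objects climb.
public class FileSlot {
    private java.io.FileInputStream stream;

    public FileSlot(String path) throws java.io.IOException {
        this.stream = new java.io.FileInputStream(path);
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            if (stream != null) {
                stream.close();   // cleanup deferred to the finalizer thread
            }
        } finally {
            super.finalize();
        }
    }
}
```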
Allocated Object Statistics by Class

Shows object allocation statistics according to
the object type allocated. A typical large Java application allocates a lot
of strings. This value can reach 25 percent of all allocations. If
any other type, especially an application-defined type, approaches
or exceeds such a value, it may indicate inefficient code. For those classes that are instrumented (visible
through the JVM agent verbose flag), every object
allocation in every method is instrumented to report allocations.
However, sampling is used to minimize
overhead, so the metric reports allocation percentages, not total
allocation counts. These percentages are not absolute across the entire
application, but are computed with respect to allocations in instrumented
classes. Sampling minimizes overhead and focuses attention
on user code. To discover allocation statistics about application
server classes, use the include and exclude filtering flags in the
JVM agent options. The reported data is cumulative over the lifetime
of the session, and accuracy will improve as the session length increases.
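For example, an application-defined type can approach the allocation share normally seen only for strings when instances are created in a tight loop; in this hypothetical sketch, a new Point is allocated for every sample processed.

```java
// Hypothetical example: Point would show an unusually high allocation percentage
// in this display because a new instance is created for every sample.
public class PathTracker {
    static final class Point {
        double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    double pathLength(double[] xs, double[] ys) {
        double length = 0.0;
        Point previous = new Point(xs[0], ys[0]);
        for (int i = 1; i < xs.length; i++) {
            Point current = new Point(xs[i], ys[i]);   // one allocation per sample
            length += Math.hypot(current.x - previous.x, current.y - previous.y);
            previous = current;
        }
        return length;
    }
}
```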
Allocating Method Statistics

Shows the methods that allocate the most objects. This metric is useful when you choose to decrease
heap pressure by modifying the application code. Methods listed at the top should become the primary
candidates for optimization. For those classes that are instrumented (visible
through the JVM agent verbose flag), every object allocation in every
method is instrumented to report allocations. However, sampling is used to minimize overhead, so
the metric reports allocation percentages, not total allocation counts.
These percentages are not absolute across the entire application,
but are computed with respect to allocations in instrumented classes. Sampling minimizes overhead and focuses attention
on user code. To discover allocation statistics about application
server classes, use the include and exclude filtering flags in the
JVM agent options. The reported data is cumulative over the lifetime
of the session, and accuracy will improve as the session length increases.
Current Live Heap Objects

Use this visualizer to obtain an immediate data
summary of live objects in the heap each time that you click the Refresh Live Objects button. This can be especially
useful when trying to understand unexpected behavior in memory usage. The display shows information for the classes
of live objects found. It does not show indirect references. See Table 8-1 “Data Shown in Current Live Heap Objects Visualizer”.
Table 8-1 Data Shown in Current Live Heap Objects Visualizer

| Column Heading | Description |
|---|---|
| Class | Name of the class to which the object belongs |
| % Heap Used | Percent of allocated heap used |
| Bytes | Cumulative size occupied by the objects (in bytes) |
| +/- First Bytes | Total change in the number of bytes held for this class since the first snapshot was taken |
| +/- Last Bytes | Change in the number of bytes held for this class since the last snapshot was taken (most recent increment) |
| Count | Number of current live instances of the object class |
| +/- First Count | Total change in the number of objects held for this class since the first snapshot was taken |
| +/- Last Count | Change in the number of objects held for this class since the last snapshot was taken (most recent increment) |
When the heap is large with many objects, refreshing
the snapshot will affect system performance more than refreshing from
a smaller heap with fewer objects. Sort by any of the data types by clicking the
column heading in the Current Live Heap Objects table. Continue clicking
on the same column heading to toggle the sort between ascending and
descending order for numerical columns and by alphabetical order for
columns containing text. You can copy all or part of the data displayed
into a temporary buffer, then paste or append it into a spreadsheet
or other similar software using a keyboard shortcut. To select a portion of the data, click and drag the cursor across the desired rows and columns of
data. The selected rows change color. Then click Copy Selection
to Buffer in the tool bar to capture the data.
To select all data for use
in a spreadsheet, click Copy All to Buffer in the tool bar.
Click File > Save to capture all data as
an ASCII text file that you can save onto your local machine.
Monitor Threads and/or Locks Menu
Thread Histogram

Displays thread states over time. Thread data arrives in time slices. For each time slice, color-coded bars represent the percentage of time the thread spent in each state. The reported states are:

| State | Description |
|---|---|
| Waiting | The thread has been suspended using the Object.wait() method. |
| Lock Contention | The thread is delayed while attempting to enter a Java monitor that is already acquired by another thread. |
| Running | All remaining cases. |
Large amounts of red in the Thread Histogram indicate heavy lock contention, which usually signals a possible problem. On the other hand, large amounts of green indicate spare processing capacity in the involved threads. When there is no load, the threads doing the work on behalf of transactions should be in the waiting state, marked by the green color. Threads terminating normally,
or because of uncaught exceptions, appear as a discontinued row. Multiple short-lived threads
appear as apparently blank rows in the display. At the same time, the number of displayed threads, shown at the bottom of the display, is
large. Lock Contention appears
as red in the display. Deadlocked threads appear
as two or more threads spending all their time in lock contention,
red, starting from a given time. This point in time identifies the
deadlock occurrence.
For each time slice, represented by a small portion
of the X-axis, the display along the Y-axis shows the percentage of
the time slice that the thread spent in each state. It represents
a stacked bar graph for the time slice.
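To make the state definitions concrete, here is a hedged sketch of a worker thread whose behavior maps onto the states above; all names are invented. Time spent inside Object.wait() is reported as Waiting, time spent blocked on the shared monitor is reported as Lock Contention, and the rest is Running.

```java
import java.util.Queue;

// Hypothetical worker: several Worker threads share one queue, so time blocked
// trying to enter the synchronized block while another worker holds the queue
// monitor shows up as Lock Contention, and time inside queue.wait() as Waiting.
public class Worker implements Runnable {
    private final Queue<Runnable> queue;   // shared among several Worker threads

    public Worker(Queue<Runnable> sharedQueue) {
        this.queue = sharedQueue;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable task;
            synchronized (queue) {                    // contention possible here
                while (queue.isEmpty()) {
                    try {
                        queue.wait();                 // Waiting state while idle
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                task = queue.poll();
            }
            task.run();                               // Running state
        }
    }
}
```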
Lock Contention

Provides lock contention statistics. The percentages for each method represent how
much of the total lock contention observed occurred in that method.
For example, if there was a single instance of lock contention during
a run, that method would show 100 percent. Therefore, methods that
show a high percentage of lock contention may not be a problem, unless
you see a significant amount of lock contention in your application. Lock contention can be detected either in synchronized
methods, or in methods that contain synchronized blocks. Lock contention in a running
application does not necessarily indicate a problem. If you suspect a lock
contention problem with your application, you should look more closely
at the highest-ranked methods in the Lock Contention display. The Thread Histogram can
also help you determine if there is significant lock contention. Other
system-level tools can also provide information to determine if there
is excessive lock contention.
This metric uses sampling to determine the level of lock contention. Therefore, this display
shows percentages of time wasted over the sampling period, not actual
time wasted in lock contention. The reported data is cumulative over the lifetime
of the session.
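As an illustration of the distinction between synchronized methods and synchronized blocks, and of how narrowing the locked region can reduce the contention this display reports, consider this hypothetical counter class.

```java
// Hypothetical example: the first method holds the lock for the whole call,
// the second holds it only while touching the shared counter.
public class HitCounter {
    private long hits;

    // Synchronized method: contention is attributed to recordSlow().
    public synchronized void recordSlow(String requestLine) {
        String normalized = requestLine.trim().toLowerCase();   // done under the lock
        if (normalized.length() > 0) {
            hits++;
        }
    }

    // Synchronized block: the expensive work happens outside the locked region.
    public void recordFast(String requestLine) {
        String normalized = requestLine.trim().toLowerCase();
        if (normalized.length() == 0) {
            return;
        }
        synchronized (this) {
            hits++;
        }
    }
}
```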
Monitor JVM and/or System Activity Menu
Method Compilation Count

Displays a list of all the methods compiled from
the time the session was opened, showing the number of times a particular
method was compiled. The metric window reports “No methods compiled
since the session opened.” until the next method compilation
occurs, and then the metric data appears. The normal values for
this metric are single-digit numbers. If the top item or items
show a much larger value than the rest of the entries, and the value constantly
grows, it suggests excessive method compilation. Normally, a method is
compiled once or just a few times, which results in a very flat profile,
with none of the entries showing large numbers. However, a JVM may have a performance problem in which a certain
method, or methods, is compiled repeatedly. Such a problem manifests
itself in one entry clearly dominating the list and showing constant
growth over time.
Method Compilation Frequency

Produces a graph that shows the compilation frequency. This
is a companion to the existing Method Compilation Count. The Method
Compilation Frequency metric provides a view of how much effort the
JVM is spending on method compilation. A typical profile shows many compilations as a Java application starts up; the rate usually drops to a small number as the application reaches a steady state.
HPjmeter displays the number of classes loaded
into memory for processing. This number usually stabilizes over time
as processing progresses. The number of classes
loaded at any one time tends to oscillate within a narrow range; typically
less than 2 percent of all loaded classes will be unloaded or reloaded
during application processing. If the number of loaded classes grows constantly, new classes, possibly dynamically created, may eventually fill the available memory and cause the application to crash.
Percent (%) CPU Utilization

Displays total system and process CPU consumption. Percentages are displayed relative to the number of CPUs multiplied by 100%; for example, a four-CPU system can show up to 400%. Excessive use of CPU resources (greater than 80% of
the total number of CPUs) may indicate that the application load limit
is close, even though the application may appear to be performing
well. At higher consumption rates, CPU consumption can become a bottleneck
to good performance. When system CPU consumption is significantly higher
than process consumption, this may indicate that “alien”
or undesired processes are using CPU resources that the preferred
application could be using. It may also indicate that the application is making excessive use of operating system kernel services.