When a system is associated with a Laboratory, it automatically saves information on the available computers. Figure 14: List of computers shows all the computers used in a Benchmark test performed in Nice. Information is associated with each computer (see Figure 10: Computer info collected). For each new Benchmark run, the data is collected again; however, a copy of this information is saved in association with the laboratory. Information that was not collected, such as the computer model and the level 2 cache, can be added to this copy. This data is then copied to each run, so once the computer model is entered in one place, it is propagated everywhere. Note that data from computers participating in a Benchmark run can be gathered by pinging them; once this has been done, the missing data can be filled in.
Figure 14: List of computers
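To make the copy-forward behavior concrete, here is a minimal Python sketch, not the tool's actual implementation; the field names and the merge_computer_info function are assumptions for illustration. Freshly collected values win, and manually maintained laboratory fields such as the computer model fill the gaps:

```python
# Hypothetical sketch of how laboratory-level computer info could fill in
# fields that automatic collection does not provide (e.g. model, L2 cache).
def merge_computer_info(collected: dict, lab_copy: dict) -> dict:
    """Combine freshly collected data with the laboratory's saved copy.

    Collected values take precedence; manually added laboratory values
    (such as 'model' or 'l2_cache_kb') fill in anything missing.
    """
    merged = dict(lab_copy)  # start from the manually enriched copy
    merged.update({k: v for k, v in collected.items() if v is not None})
    return merged

collected = {"name": "NICE-PC-07", "cpu_mhz": 800, "model": None}
lab_copy = {"name": "NICE-PC-07", "model": "Compaq Deskpro", "l2_cache_kb": 512}
print(merge_computer_info(collected, lab_copy))
# -> the collected cpu_mhz plus the manually maintained model and L2 cache
```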
Overview
This lesson teaches the student how to analyze the results of the benchmark run.
Objectives
At the end of this lesson, you should be able to:
Analyze the results received from the execution of each script.
The Benchmark tool collects varied information during a Benchmark run. When a run is first created, the tool collects information on the Axapta version and the database. This information can be found in the Benchmark run form, on the Collected data tab page. Figure 15: Benchmark run collected data shows how the environment information is collected when the Benchmark run is created, except for the client type, which is calculated when the run is finished.
Figure 15: Benchmark run collected data
For each user that participates in a Benchmark run, a record is created. It contains directions as to which script to run, along with information collected at the start of the script execution. The ODBC driver version is also collected, and if the user is connected to an Object Server, the name of that server is gathered.
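As a rough illustration of what such a per-user record might hold (the field names below are assumptions for this sketch, not the tool's actual schema):

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch only: the field names are assumptions, not the real schema.
@dataclass
class UserScriptRecord:
    user_id: str
    script_name: str                          # which script this user should run
    odbc_driver_version: str = ""             # collected at script start
    object_server: Optional[str] = None       # set only when connected to an Object Server
    start_time: Optional[float] = None        # filled in when the script starts
    finish_time: Optional[float] = None       # filled in when the script finishes
    status: str = "pending"                   # e.g. "running", "done", "error"
    log: list = field(default_factory=list)   # messages that would otherwise go to the Infolog
```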
In addition to the above, the start and finish times of the particular script are included. Timing details for each executed step are recorded and can be viewed by clicking Timing details. The table with timing details fills up rapidly and is therefore not exported along with the data for a given laboratory series. Based on the timing details, the number of main lines created during the script is calculated. This number is used to calculate the throughput, which is discussed in the next section on aggregated information.
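The throughput calculation itself is simple arithmetic; a minimal sketch, assuming throughput is expressed as main lines created per hour of elapsed script time (the function name and unit are illustrative):

```python
def throughput(main_lines_created: int, start_time: float, finish_time: float) -> float:
    """Main lines created per elapsed hour (times given in seconds)."""
    elapsed_hours = (finish_time - start_time) / 3600.0
    if elapsed_hours <= 0:
        raise ValueError("finish time must be after start time")
    return main_lines_created / elapsed_hours

# Example: 1200 main lines created by a script that ran for 30 minutes
print(throughput(1200, start_time=0.0, finish_time=1800.0))  # 2400.0 lines/hour
```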
In Figure 16: User script information, two more tab pages can be seen. These contain information that would normally be directed to the Infolog window. The reason for this is that the Infolog window is not available in all cases, especially when the script runs as a worker thread. If a Benchmark run is terminated due to an error, open the User script form for that specific run, find a user script with an error status, go to the Log tab, and examine the information provided there.
Figure 16: User script information
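The redirection described above follows a common fallback pattern: when no interactive window exists, as in a worker thread, messages are buffered with the run record instead. A minimal sketch with hypothetical names, reusing the UserScriptRecord sketch from earlier:

```python
# Hypothetical sketch of the Infolog fallback: worker threads have no window,
# so messages are appended to the record's log and shown later on the Log tab.
def log_message(record, message: str, infolog_available: bool) -> None:
    if infolog_available:
        print(message)              # stands in for the interactive Infolog window
    else:
        record.log.append(message)  # persisted with the user script record
```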
Figure 17: Timing details for a benchmark user shows the collected data for each executed step in the script. The third column displays the response time measured in milliseconds. Note that, to optimize performance, the timing details are cached in memory and only flushed to disk at given intervals; this minimizes database access.
Figure 17: Timing details for a benchmark user
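The caching described above is a standard write-buffering pattern; a minimal sketch, assuming a fixed flush interval (the class name, interval, and storage stand-in are all illustrative):

```python
# Illustrative write-buffering sketch: timing rows are kept in memory and only
# written out once FLUSH_INTERVAL rows have accumulated.
class TimingDetails:
    FLUSH_INTERVAL = 100  # illustrative; the tool's real interval is not documented

    def __init__(self, storage: list):
        self._storage = storage  # stands in for the timing details table
        self._buffer = []

    def record(self, step: str, response_ms: int) -> None:
        self._buffer.append((step, response_ms))
        if len(self._buffer) >= self.FLUSH_INTERVAL:
            self.flush()

    def flush(self) -> None:
        # One batched write instead of one write per step minimizes database access.
        self._storage.extend(self._buffer)
        self._buffer.clear()
```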