In this article, we are going to look at how to prepare a report and comment on it based on JMeter listener results. For a primer on JMeter listeners, see this.

We have to keep in mind that a performance report (or any report) sent to different levels of users can raise usability or comprehension issues. Here, the level of a user is defined by technical knowledge. I have worked on projects where performance reports were sent to technical, non-technical, managerial, and business development people.
So, we have to write the comments accompanying the report in such a way that every kind of person mentioned above has at least some understanding of the report. You may ask why this is important:
- It helps the whole software team inside the company understand the report, which raises awareness of performance.
- It helps the SQA team set standards based on target clients.
- It helps management set up the timeline and milestones of the project.
- It helps the support team scope their work and answer client feedback.
- It helps stakeholders know the actual stability of the product in production.
[A performance test is normally performed after development, sometimes after beta.]

Normally we keep only a few listeners in the JMeter test plan during a run, as most of them consume significant resources (memory and CPU). The best practice is to run JMeter without listeners and save the results to a CSV file. We will mostly use two listeners: Summary Report and Aggregate Report. After getting the results from these two listeners, we save them as CSV files, process them into a report, and then add comments.
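A minimal sketch of loading such a results file for further processing, assuming the default CSV column names (timeStamp, elapsed, label, responseCode, success, bytes) and that the file was saved with a header row, which is the default in recent JMeter versions:

```python
import pandas as pd

# Load raw results saved from a non-GUI run, e.g.:
#   jmeter -n -t test_plan.jmx -l results.csv
# Column names below assume JMeter's default CSV output settings.
results = pd.read_csv("results.csv")

# One row per sample; 'label' identifies the sampler (page/request name),
# 'elapsed' is the response time in milliseconds, 'timeStamp' is epoch ms.
print(results[["timeStamp", "label", "elapsed", "responseCode", "success", "bytes"]].head())
```

The sketches later in this article reuse this results DataFrame.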
Preparing reports: From the Summary Report and Aggregate Report, we get the following attributes.

Throughput: (requests/second; sometimes shown as requests/minute or requests/hour, but when you save the CSV it is always in requests/second)
What is this? It indicates how many requests per second JMeter got through. The higher the throughput of your web pages, the more responsive and faster they are. It includes any intervals between samples.
Sometimes we may get a higher throughput because a cache server serves the same data again and again. To avoid this, try not to request static data.
This is an ideal candidate for reporting.
Throughput = (Number of requests) / (total time).
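As a sketch of the same formula applied per label to the results DataFrame loaded earlier (this approximates JMeter's calculation, which divides the sample count by the span from the first request's start to the last request's end):

```python
def throughput_per_label(results):
    """Requests per second for each label: count / (last end - first start)."""
    out = {}
    for label, grp in results.groupby("label"):
        start_s = grp["timeStamp"].min() / 1000.0                   # first request start, seconds
        end_s = (grp["timeStamp"] + grp["elapsed"]).max() / 1000.0  # last request end, seconds
        out[label] = len(grp) / max(end_s - start_s, 1e-9)          # avoid division by zero
    return out
```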

Average: (milliseconds) The average time taken by one request among the samples JMeter measured. For example, suppose we are testing a 100-user load on a login request and the JMeter listener records results for 86 of those users; the average is the total time taken by those 86 samples divided by 86 (time per sample).
This is not an ideal candidate for reporting because the starting and ending threads often need some extra time, so the average may not represent the typical response time (a small sketch of one workaround follows).
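JMeter itself does not correct for this, but as a hedged sketch you can trim the earliest and latest slice of samples (the 10% cut below is an arbitrary illustration) before averaging, to reduce ramp-up/ramp-down skew:

```python
def trimmed_average(results, label, trim=0.10):
    """Average elapsed time for one label, ignoring the earliest and latest
    `trim` fraction of samples, which often carry thread start/stop noise."""
    grp = results[results["label"] == label].sort_values("timeStamp")
    cut = int(len(grp) * trim)
    body = grp.iloc[cut: len(grp) - cut] if cut > 0 else grp
    return body["elapsed"].mean()
```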

Samples: (count) The number of sample requests measured by a JMeter listener. During execution it is normal for this to be lower than the number of threads. For example, I may be testing 100 users, but a listener could only record or measure results for 86 of those samples. So it defines the number of threads (samples) under measurement.
As it does not represent any state of the web pages, we can leave it out of the report. If asked how many samples were used for the measurement, it can be mentioned.

Min: (milliseconds) The shortest time taken by a sample among samples with the same label. It can be left out of the report.

Max: (milliseconds) The longest time taken by a sample among samples with the same label. It is one of the ideal candidates for the report.

Std. Dev.: (milliseconds) The standard deviation of the sample elapsed times. JMeter calculates the population standard deviation (the same as the STDEVP function in a spreadsheet), not the sample standard deviation; that is, it treats the recorded results as the whole population rather than a sample drawn from one.
Depending on the client, it may be mentioned in the report; usually it is not.
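A small sketch of the distinction, using the example login label from earlier: JMeter's figure matches the population formula (ddof=0 in NumPy, like STDEVP), not the sample formula (ddof=1, like STDEV):

```python
import numpy as np

elapsed = results.loc[results["label"] == "Log in Request", "elapsed"].to_numpy()

population_std = np.std(elapsed, ddof=0)  # what JMeter reports (STDEVP-style)
sample_std = np.std(elapsed, ddof=1)      # ordinary sample standard deviation (STDEV-style)
```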

Error: (%) The percentage of requests with errors.
For example, if there are 100 samples and 10 of them fail (take longer than they should, with a timeout set in the sampler, do not respond, or return something other than HTTP 200), the error rate is 10%.
This is an ideal candidate for reporting as it directly represents failures.
Sometimes we may get 0 because of a non-responsive site or an Apache/Java socket exception, so we should check the log before mentioning this figure in the report.
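A hedged sketch of the same figure computed straight from the CSV (the success column is the string true/false by default, but may be parsed as a boolean):

```python
def error_percent(results, label=None):
    """Percentage of failed samples, optionally restricted to one label."""
    grp = results if label is None else results[results["label"] == label]
    failed = (~grp["success"].astype(str).str.lower().eq("true")).sum()
    return 100.0 * failed / max(len(grp), 1)
```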

Bandwidth: (KB/sec) The throughput measured in kilobytes per second.
Normally it is mentioned beside throughput in the report. It is optional; it just adds extra visibility alongside throughput.

Size: (avg. bytes) The average size of the sample response, in bytes. It can be mentioned in the report but is not mandatory. It is useful when refactoring the solution, as it shows which requests are heavy.

Median: (milliseconds) The time in the middle of the set of results: 50% of the samples took less time than this and the other 50% took more. The median is the same as the 50th percentile.
This may be mentioned in the report. It gives a good picture of a typical request.

90% Line: (milliseconds) The time within which 90% of the samples finished. In other words, 90% of all samples took no more than this time, and the other 10% took at least this long. It is the same as the 90th percentile.
It is an ideal candidate for reporting as it represents the maximum time needed by most (90%) of the pages.
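Both the Median and the 90% Line are plain percentiles of the elapsed times, so they can be reproduced from the CSV (JMeter's interpolation may differ slightly from NumPy's default):

```python
import numpy as np

def median_and_90th(results, label):
    """Median and 90% Line (in ms) for one label, from raw elapsed times."""
    elapsed = results.loc[results["label"] == label, "elapsed"].to_numpy()
    return {
        "median_ms": float(np.percentile(elapsed, 50)),  # half the samples were faster than this
        "p90_ms": float(np.percentile(elapsed, 90)),     # 90% of samples were faster than this
    }
```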

So, we have the measurements. Now, reporting. In this section we will look at different representations of the report (a small plotting sketch follows the list below).
1.    Compare graph: This is a comparison of all GET and POST requests for one measurement within a single test run, i.e. a side-by-side comparison of requests under a single measurement. For example, in a login test, comparing the login page load and the login request side by side shows which one has higher throughput (one measurement) or needs less average time (another measurement).
2.    Progressive graph: This is a comparison of progressive test runs of a single GET/POST request for one measurement. A progressive test run means increasing or decreasing the number of users or the duration of the test. For example, we run the same test with 50, 100, 150, 200, ... users, and then compare the response time of (say) the login request at 50, 100, 150 and 200 users; that gives the progressive graph of the login request's response time. The same applies to the duration-driven approach, e.g. running the same test with a constant number of users for 30 min, 1 hr, 2 hr, 4 hr, 8 hr, etc.
3.    Mixture (Ultimate) graph: This covers all GET and POST requests for one measurement across progressive test runs. It is basically a mixture of the previous two graphs.
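As an illustrative sketch only (the labels, user counts, and averages below are made-up example values, not measured data), a compare graph and a progressive graph could be drawn from the processed measurements like this:

```python
import matplotlib.pyplot as plt

# Compare graph: one test run, several requests, one measurement (average ms)
compare = {"Log in page": 850, "Log in request": 1200, "Dashboard": 2300}  # example values
plt.figure()
plt.bar(list(compare.keys()), list(compare.values()))
plt.ylabel("Average time (ms)")
plt.title("Compare graph: average response time per request")

# Progressive graph: one request, one measurement tracked across increasing user counts
users = [50, 100, 150, 200]             # example load steps
login_avg_ms = [900, 1200, 1900, 3100]  # example averages per step
plt.figure()
plt.plot(users, login_avg_ms, marker="o")
plt.xlabel("Concurrent users")
plt.ylabel("Average time (ms)")
plt.title("Progressive graph: Log in request")

plt.show()
```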

Tips:
- Change the unit to make the graph more understandable to the reader, e.g. milliseconds to seconds, or req/sec to req/min. This makes the graphs much easier to read.
- Change the unit to get a well-sized graph. Sometimes graphs come out too small because of a small unit; changing the unit makes them more visible.
- Change the labels of the requests. This is a must when you use recording. For a better understanding of each page/request, change the label so that everyone can understand it, e.g. rename domain/login.html to "Log in page".
- Point out the problems in the graph (you can use the standards mentioned below to identify problems within the graph report).

So, when we have the reports, we can comment on them along the following lines.
A.    Using the Compare graph:
   1. Which request takes the most time of all; based on this we can refactor, implement caching, and identify bottlenecks.
   2. Which page is the biggest, so we can restructure or re-engineer it (i.e. optimization).
   3. We can identify AJAX/JS time dependencies.
   4. We can also show which pages have high error rates.
   5. We can determine the maximum throughput of a page/request and decide which ones need improvement.

B.    Using the Progressive graph:
   1. We can show which pages/requests start failing or generating errors as the user count or duration increases.
   2. We can show which pages/requests take more time as the user count or duration increases.
   3. We can determine the maximum number of users or duration the application supports.
   4. We can also find the server's breaking point.

If we have the chance to compare results across multiple servers, we can comment on:
1. Which server takes less time (performs better) on which pages/requests
2. Which server needs to improve (performs poorly) on which pages/requests
3. Which server has bottlenecks
4. Which server is busy most of the time (using a server agent)

So, now we know the comments for a test on a web application. But there are other things we should mention alongside the comments; these depend entirely on the client. I am adding some from my previous projects.
1.    Server configuration & bandwidth where the test was performed
2.    Server configuration & bandwidth of the server hosting the tested application
3.    JMeter settings and configuration (JMeter properties, test thread configuration, ramp-ups, delay time, plug-in configuration, etc.)
4.    Test scenario settings
5.    Notes: dependencies, blocking issues, known issues, etc.
6.    Suggestions: based on what we find at the measurement points.
7.    Good areas: based on what we find at the measurement points.
8.    Bad areas: based on what we find at the measurement points.

Note: It is best to set standards before starting the test; this is one of the best practices. So, when can we set standards? It should be at the beginning, or before the test plan is approved. First we should find out what types of requests there are, then set the standard for each.
To give you a better idea, let me list some areas.
Let's say the web application we are testing has the following types of pages/requests:
1.    Page Load Get
2.    Ajax
3.    JS
4.    Page Post with 10 parameters

So, what should the standard be? This part depends entirely on:
- The robustness of the application
- The client's targets and standards
- The standards most commonly used in similar applications around the world
- The development timeline

Based on my previous experience, I usually allow 2000 ms for a Page Load GET, 3000 ms for an AJAX/JS request, and 500 ms per parameter for a Page POST request. These are figures I set from my own project experience, and they will vary from project to project.
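These figures are only my own starting points, but as a hedged sketch the same idea can be written down as a small threshold check so the report can flag which requests miss their standard (the request types and thresholds below simply mirror the numbers above and are hypothetical):

```python
# Hypothetical per-type thresholds in milliseconds, mirroring the numbers above
THRESHOLDS_MS = {
    "page_load_get": 2000,
    "ajax": 3000,
    "js": 3000,
    "page_post": lambda params: 500 * params,  # 500 ms per posted parameter
}

def check_standard(request_type, measured_ms, post_params=0):
    """Return (passed, limit_ms) for one request against the project standard."""
    limit = THRESHOLDS_MS[request_type]
    if callable(limit):
        limit = limit(post_params)
    return measured_ms <= limit, limit

# Example: a login POST with 10 parameters measured at 4200 ms
ok, limit = check_standard("page_post", 4200, post_params=10)
print(f"limit={limit} ms, passed={ok}")  # limit=5000 ms, passed=True
```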

I will try to add more ideas from time to time.

Thank you...:)...