Sample: Number of requests sent.
Throughput: The number of requests per unit of time (seconds, minutes, hours) sent to your server during the test.
Response time: The elapsed time from the moment a given request is sent to the server until the moment the last bit of information has been returned to the client.
The throughput is the real load processed by your server during a run, but by itself it tells you nothing about how well the server performed under that load. The response time tells you how fast your server handled that load. This is why you need both measures to get a realistic picture of your server's performance during a run.
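To make the distinction concrete, here is a minimal sketch in plain Java (not JMeter code; the per-request timings and run duration are made-up values) that derives both numbers from the same run: throughput from the number of requests and the elapsed wall-clock time, average response time from the individual request durations.

```java
import java.util.List;

public class ThroughputVsResponseTime {
    public static void main(String[] args) {
        // Hypothetical per-request response times (ms) collected during one run.
        List<Long> responseTimesMs = List.of(120L, 250L, 90L, 400L, 180L);
        // Hypothetical wall-clock duration of the whole run, in seconds.
        double runDurationSec = 2.0;

        // Throughput: how many requests the server processed per second.
        double throughput = responseTimesMs.size() / runDurationSec;

        // Average response time: how fast each individual request was handled.
        double avgResponseMs = responseTimesMs.stream()
                .mapToLong(Long::longValue)
                .average()
                .orElse(0.0);

        System.out.printf("Throughput:            %.2f requests/sec%n", throughput);
        System.out.printf("Average response time: %.2f ms%n", avgResponseMs);
    }
}
```

A run can show a high throughput and still have poor response times (many requests handled, each of them slowly), which is why the two columns are read together.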
Average: The average (arithmetic mean, µ = (1/n) · Σ_{i=1..n} x_i) response time of all your samples.
Min and Max: The minimum and maximum response times.
An important thing to understand is that the mean value can be very misleading, as it does not show you how close to (or far from) the average your values actually are. This is why we also need the Deviation value: the average can be exactly the same for runs whose individual response times look completely different. For instance, four samples of 400 ms each and four samples of 100, 700, 100, 700 ms both average 400 ms.
Deviation: The standard deviation (s) measures the mean distance of the values from their average (µ). It gives you a good idea of the dispersion, or variability, of the measures around their mean value.
The following equation shows how the standard deviation (s) is calculated:
s = √( (1/n) · Σ_{i=1..n} (x_i − µ)² )
So, if the deviation is low compared to the mean value, it indicates that your measures are not very dispersed (they are mostly close to the mean) and that the mean value is therefore significant.
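As an illustration, the sketch below (plain Java with the invented numbers from the example above, not part of JMeter) applies both formulas to two hypothetical runs that share the same average but behave very differently, which is exactly the situation the deviation is meant to expose.

```java
public class DeviationExample {
    // Arithmetic mean: µ = (1/n) · Σ x_i
    static double mean(long[] samples) {
        double sum = 0;
        for (long x : samples) sum += x;
        return sum / samples.length;
    }

    // Standard deviation: s = √( (1/n) · Σ (x_i − µ)² )
    static double stdDev(long[] samples) {
        double mu = mean(samples);
        double sumSq = 0;
        for (long x : samples) sumSq += (x - mu) * (x - mu);
        return Math.sqrt(sumSq / samples.length);
    }

    public static void main(String[] args) {
        // Two hypothetical runs: identical mean (400 ms), very different behaviour.
        long[] steadyRun  = {400, 400, 400, 400};
        long[] erraticRun = {100, 700, 100, 700};

        System.out.printf("Steady run:  mean=%.0f ms, deviation=%.0f ms%n",
                mean(steadyRun), stdDev(steadyRun));    // 400 ms, 0 ms
        System.out.printf("Erratic run: mean=%.0f ms, deviation=%.0f ms%n",
                mean(erraticRun), stdDev(erraticRun));  // 400 ms, 300 ms
    }
}
```

The steady run has a deviation of 0 ms (every response equals the mean), while the erratic run has a deviation of 300 ms despite the identical average, so its mean alone would hide how unstable the response times really are.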
Kb/sec: The throughput measured in Kilobytes per second.
Error %: Percent of requests with errors.
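These last two columns can be derived the same way. The small sketch below uses invented byte counts and success flags (not JMeter internals) and assumes "Kb/sec" means kilobytes of 1024 bytes transferred per second of the run.

```java
public class ThroughputAndErrors {
    public static void main(String[] args) {
        // Hypothetical per-request results: bytes returned and whether the request succeeded.
        long[] bytesReturned = {2048, 4096, 1024, 8192, 2048};
        boolean[] succeeded  = {true, true, false, true, true};
        double runDurationSec = 2.0;

        long totalBytes = 0;
        int errors = 0;
        for (int i = 0; i < bytesReturned.length; i++) {
            totalBytes += bytesReturned[i];
            if (!succeeded[i]) errors++;
        }

        // Kb/sec: total kilobytes transferred divided by the elapsed time of the run.
        double kbPerSec = (totalBytes / 1024.0) / runDurationSec;
        // Error %: failed requests as a percentage of all requests sent.
        double errorPercent = 100.0 * errors / bytesReturned.length;

        System.out.printf("Kb/sec:  %.2f%n", kbPerSec);
        System.out.printf("Error %%: %.1f%%%n", errorPercent);
    }
}
```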
An example always makes this easier to understand; the short sketches above walk through each calculation on a small hypothetical run.