A practical guide to server-side performance testing


Server-side performance testing verifies the performance of the server and checks whether any problems appear under load.

The process includes goal formulation (determining requirements), test preparation, test execution, and analysis of the results. The stages other than test execution are just as important; their details will be covered one by one in follow-up articles. This article focuses on the goals and key points of each stage of the overall process.

Goal determination

Clear goals or requirements are the most important part, because they shape the preparation and execution of the entire test. Common goals include:
Measure the load capacity of the entire system and evaluate the load range within which it can serve normally.
Find the ultimate pressure, bottleneck points, and so on of a specific server.
Verify whether server performance has deteriorated, for example after a change or new release.
In some cases, it is necessary to confirm performance for specific business scenarios, such as user login, business queries, or high-concurrency e-commerce transactions.

Whether the goal is stated in the general terms above or as a specific scenario, it involves three elements that we need to abstract:
The scope to verify: we need to understand which part of the system under test the goal refers to. Is it the entire business system or only a certain part of it?
The business scenario to verify: from this scenario we derive the set of requests that users may send.
The state to achieve: for example, a given level of user concurrency, or the server's extreme-pressure state. This is the key criterion for judging later whether the goal has been met.
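As a minimal sketch, the three elements above can be captured in a small data structure before a test is planned. All names and values here are illustrative, not part of any testing framework.

```python
from dataclasses import dataclass

@dataclass
class PerfTestGoal:
    """The three elements a performance-testing goal is abstracted into.
    Field names are this sketch's own, not from any standard tool."""
    system_scope: str              # which part of the system is under test
    business_scenarios: list[str]  # the set of requests users may send
    target_state: str              # the state to achieve, e.g. a concurrency level

# Example goal for a hypothetical e-commerce system:
goal = PerfTestGoal(
    system_scope="order service only",
    business_scenarios=["user login", "business query", "place order"],
    target_state="serve normally at 30,000 TPS",
)
print(goal.target_state)
```

Writing the goal down in this structured form makes the later review step (checking trade-offs against the original goal) straightforward.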
When determining the goal, we also need to check that it is reasonable, that is, whether meeting it actually satisfies the business need. For example, if the business requires that the server performs normally at 30,000 TPS, the goal cannot be set as "verify that the system's limit pressure is 30,000 TPS": even if the system's limit does reach 30,000 TPS, the user experience at that limit is certainly not normal.
Finally, throughout the performance testing process, we need to review frequently whether our trade-offs and solutions still serve the original goal.

Metric determination
Once the goals are identified, the next step is to translate them into metrics that the performance test itself validates. These metrics cover both the request side, which represents the user experience, and the state of the backend servers. Here are the metrics we see most often:
System capacity: how many user operations can the system handle at most? Business systems usually have several user flows, which can be mixed at a given ratio for testing.
Concurrency: the number of simulated concurrent users. A test should combine different concurrent operations to reproduce real user behavior as closely as possible.
Response time: how long an interface or user operation takes to respond. Excessive response time hurts the user experience and increases the system load.
Throughput: the number of transactions the system processes per second (TPS). A related metric is the number of requests processed per second (QPS). How are the two related? The transaction serving a user action consists of one or more requests. For example, logging in with a verification code involves two requests: fetching the code and submitting the login. If 20 users can log in per second, the corresponding TPS is 20. With this relationship clear, we can measure throughput for each specific scenario.
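To make the concurrency and response-time metrics concrete, here is a minimal sketch that fires requests from a thread pool and summarizes the timings. `fake_request` is a hypothetical stand-in for a real call to the system under test; in practice it would be an HTTP request.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    """Stand-in for a real call to the system under test (hypothetical)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of server processing
    return time.perf_counter() - start

def run_load(concurrency, total_requests):
    """Send total_requests from `concurrency` workers; return per-request times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(fake_request, range(total_requests)))

times = sorted(run_load(concurrency=20, total_requests=100))
print(f"avg={statistics.mean(times) * 1000:.1f} ms, "
      f"p95={times[int(len(times) * 0.95)] * 1000:.1f} ms")
```

Real tools report percentiles (p95, p99) rather than only the average, because a few slow requests can hide behind a healthy-looking mean.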
