Simulation of a user’s behavior for testing. Part 1
Sometimes you need to see how a system behaves when real users use it, without launching it to production. Of course, you can hire a lot of people and ask them to use the system, or put the system into a beta-testing phase. In some cases that's a good idea, but sometimes it isn't.
If involving outsiders is not an option for some reason, it's a good idea to write a testing appliance that simulates user behavior. In this article I describe the general idea of the appliance that I built for one of the systems I work with. I'll try to show you the problems I faced while building it and the approaches I used to solve them.
There were several general requirements:
- The appliance should emulate user behavior as closely as possible
- Tests should not depend on each other
- It should be easy to add new tests and to modify or remove existing ones.
- The testing appliance should be highly configurable.
- The appliance should be easily adaptable to other systems
- Ability to perform both load testing and regular testing
- Collecting testing statistics
- Ability to launch testing from a system scheduler and to save the stats
Let's discuss the first requirement. One guy told me that a user is a random-driven octopus: he has too many hands connected to his butt, and all of those hands randomly click buttons. From this definition we can conclude that we can't predict a user's actions exactly. However, we can predict quite accurately how a user chooses an action. The general purpose of a system can help us specify the probability of choosing each action. Here I have named the main characteristic of a test: the probability that its action will be chosen next. That is how the appliance chooses the next test: it selects tests randomly, but with the specified probabilities. By tuning the probability of each action, we can make the behavior of the appliance very close to the behavior of a real user.
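As a sketch of this selection step (assuming a Python implementation; the test names and probabilities below are purely illustrative), weighted random choice is enough:

```python
import random

# Hypothetical user actions with the probability of each being chosen next.
tests = {
    "browse_catalog": 0.6,   # users spend most of their time browsing
    "add_to_cart":    0.3,
    "checkout":       0.1,
}

def choose_next_test(weighted_tests):
    """Pick the next test name at random, weighted by its probability."""
    names = list(weighted_tests)
    weights = [weighted_tests[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]
```

Over many iterations the frequency of each action converges to its configured probability, which is exactly the "close to a real user" property the requirement asks for.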
The second requirement helps to satisfy the third one. The only thing I would like to add is that the appliance should provide a small and simple interface for tests. Naturally, the appliance is restricted to communicating with tests through that interface.
Let's discuss tests. Each test represents one of the user's actions. To provide a simple and small interface for test writers, I made an abstract class that specifies three methods corresponding to the test's phases: initialization, execution, and checking the result. I separated the execution phase from the checking phase to satisfy the requirement of running tests in load-testing mode without checking results. Separating execution from result checking also helps to avoid inaccuracy in measuring the time an action takes.
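A minimal sketch of such an abstract class, assuming a Python implementation (the method names are the ones discussed in this article; everything else, including the example subclass, is illustrative):

```python
import abc
import time

class ITest(abc.ABC):
    """Base class for tests; each test represents one user action."""

    @abc.abstractmethod
    def InitTest(self):
        """Phase 1: prepare everything the action needs."""

    @abc.abstractmethod
    def _execute(self):
        """Phase 2: perform the user action itself."""

    @abc.abstractmethod
    def CheckResults(self):
        """Phase 3: verify the outcome of the action."""

    def RunTest(self):
        """Common behavior: measure how long the action takes."""
        started = time.perf_counter()
        self._execute()
        return time.perf_counter() - started

# A hypothetical test built on that interface:
class PingTest(ITest):
    def InitTest(self):
        self.ok = False

    def _execute(self):
        self.ok = True   # a real test would call an external interface here

    def CheckResults(self):
        return self.ok
```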
"Wait a minute!", you say. Of course you have noticed the strange protected _execute() method. This method performs the actual execution of a test. The RunTest() method implements behavior common to all tests: measuring the time the action takes. So to write a test, a test writer should create a class that inherits from the ITest class and implement the InitTest(), _execute(), and CheckResults() methods. Note that _execute() should only perform the action; initialization and result checking belong in their corresponding methods. Otherwise the action will appear to take more time than it does, and the stats will be skewed.
One important detail: all tests should communicate with the system only via the external interfaces that are available to users – UI, SOAP/REST/JSON-RPC, etc. Using internal interfaces is not allowed, because the appliance simulates the behavior of a user, and a user cannot communicate with the system that way.
The big advantage of using a testing appliance instead of human beings is the ability to simulate as many simultaneous users as you want. To do that, you need to take care of multithreading. The approach I used is that each user is simulated by a separate thread. It's quite easy to manage the number of threads if tests do not depend on each other and there are no dependencies between tests and the appliance. In other words: each test should be independent.
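A sketch of the one-thread-per-user idea, assuming Python; the placeholder string stands in for an actual test run, and all names are illustrative:

```python
import threading

def simulate_user(user_id, actions, results, lock):
    """One simulated user: executes its actions and records the outcomes."""
    for action in actions:
        outcome = f"user{user_id}:{action}"  # placeholder for a real test run
        with lock:                           # results list is shared
            results.append(outcome)

def run_simulation(num_users, actions):
    """Spawn one thread per simulated user and wait for all of them."""
    results, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=simulate_user, args=(i, actions, results, lock))
        for i in range(num_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because tests are independent, scaling to more users is just a matter of increasing `num_users`.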
There must be some kind of manager that manages tests. By managing tests I mean loading, choosing, and running them. I also gave the manager an additional responsibility in my appliance: each test returns a data structure containing the results of its execution, and I need to store those results to be able to calculate stats later. So I made the test manager responsible for that.
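A minimal sketch of such a manager, assuming Python (tests are represented here as plain callables returning a result structure; the class and its methods are illustrative, not the actual implementation):

```python
import random

class TestManager:
    """Loads tests, chooses the next one by probability, runs it,
    and keeps the execution results for later stats calculation."""

    def __init__(self):
        self._tests = []      # list of (test_callable, probability) pairs
        self.results = []     # execution results kept for the stats

    def load(self, test, probability):
        self._tests.append((test, probability))

    def run_next(self):
        tests, weights = zip(*self._tests)
        test = random.choices(tests, weights=weights, k=1)[0]
        self.results.append(test())   # a test returns its result structure
```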
Let's discuss flexibility. The requirements say that the appliance should be highly configurable and easily adaptable to other systems. The main problem is letting users configure the tests. One approach is to let test writers implement their own mechanisms for storing the configuration of their tests, but that's not a good idea. Another approach is for the appliance to provide a flexible mechanism for storing the settings of each test. In this case we need a configurator that can dynamically apply settings to tests.
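One way to sketch such a configurator, assuming Python and settings stored as plain dictionaries keyed by test class name (the test class and its settings here are hypothetical):

```python
# Hypothetical settings storage, keyed by test class name.
config = {
    "LoginTest": {"username": "demo", "retries": 3},
}

class LoginTest:
    """A placeholder test class used only to illustrate configuration."""

def configure(test, settings_by_name):
    """Dynamically copy the settings for this test's class onto the instance."""
    for key, value in settings_by_name.get(type(test).__name__, {}).items():
        setattr(test, key, value)
    return test
```

Because the configurator applies settings by name, adding a new test does not require touching the appliance itself, only adding an entry to the settings storage.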
Another important feature of the appliance is handling results and calculating stats. After execution a test can hold a lot of system resources, so keeping tests around is not a good idea. It is more reasonable to store the results of execution separately from the test. With this approach you can destroy the test right after execution and free all its data. That's why another important part of the system is the results handler. In our case it is responsible for storing the results and calculating stats.
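A sketch of such a handler, assuming Python; it keeps only lightweight result records (test name and duration), so the tests that produced them can be destroyed immediately:

```python
class ResultsHandler:
    """Stores execution results separately from tests and computes stats."""

    def __init__(self):
        self._durations = []   # (test_name, duration) records

    def store(self, test_name, duration):
        self._durations.append((test_name, duration))

    def stats(self):
        times = [d for _, d in self._durations]
        return {
            "count": len(times),
            "avg": sum(times) / len(times) if times else 0.0,
            "max": max(times, default=0.0),
        }
```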
Oh, it seems that my train has arrived in Kharkiv, so let's make a quick conclusion:
The main parts of the testing appliance are:
- Interface for tests
- Testing manager
- Results handler.
I will describe the implementation of the appliance in my next article.