Test-driven development (TDD) is a software development process that emphasizes writing tests for all functionality in a piece of software. Before adding a new feature, the developer first writes one or more tests for the feature, which will fail because the feature is not yet implemented. The developer then writes just enough code to make those tests pass. If tests have not been written for various edge cases or subtleties of the feature, those conditions will not be handled in the implementation. The developer continues to iterate, adding more tests and writing more code to make them pass. In this way, the tests essentially become a requirements specification for the feature, and the implementation contains just enough functionality to satisfy the requirements and no more. This yields several benefits: the new feature is thoroughly tested; future refactoring of the code is quite safe, as a robust test suite ensures that behaviour is preserved; and the code should be relatively clean and free of bloat, as it implements only the required functionality.
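As a minimal sketch of this cycle (the Calculator class and plain assertions here are illustrative inventions; a real project would typically use a test framework such as JUnit):

```java
// Step 1 (red): the test below is written first, and fails to compile or pass
// until add() exists. Step 2 (green): implement just enough code to pass it.

class Calculator {
    // Minimal implementation, written only after the failing test existed.
    static int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    public static void main(String[] args) {
        // This assertion was written before the implementation.
        if (Calculator.add(2, 3) != 5) {
            throw new AssertionError("add(2, 3) should equal 5");
        }
        System.out.println("all tests passed");
    }
}
```

Each new requirement (say, handling overflow) would start with another failing assertion, followed by just enough code to satisfy it.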
Tests in the TDD methodology usually consist of unit tests, as opposed to integration tests. Unit tests are standalone test cases that exercise a single “unit” of the program, often a method or other small area of functionality. A unit test is typically structured so that it can run in isolation from the rest of the program under test, which allows it to be automated and run in series with many other unit tests. This is in contrast with integration tests, which test the interactions between various components of the program, or between the program and outside systems. Unit tests are preferred for TDD because they do not require the setup of any external components and focus on testing only the code currently under development. This allows many automated unit tests to run in quick succession, resulting in a fast code-test-debug cycle.
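The following hypothetical sketch shows what running several unit tests in series looks like: each case touches only in-memory state (no files, sockets, or shared fixtures), so the whole suite completes in milliseconds:

```java
// Hypothetical example: several independent unit tests run in series.
// StringUtils is the "unit" under test; each case is self-contained.

class StringUtils {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    static boolean isPalindrome(String s) {
        return s.equals(reverse(s));
    }
}

public class StringUtilsTest {
    public static void main(String[] args) {
        check(StringUtils.reverse("abc").equals("cba"), "reverse");
        check(StringUtils.isPalindrome("level"), "palindrome true");
        check(!StringUtils.isPalindrome("abc"), "palindrome false");
        System.out.println("all tests passed");
    }

    // Minimal stand-in for a test framework's assertion facility.
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError(name + " failed");
    }
}
```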
The issue with writing unit tests is that at least some of the code in most applications expects to communicate with external systems or has dependencies on other parts of the program. So how does one go about applying the TDD development methodology when working with a distributed system using a distributed object model like Distrix? The answer lies in the use of mock objects and other techniques such as dependency injection. What this means in practice is that you design your application so that the dependencies (internal or external) can be replaced with components inserted by the unit tests at test time. This allows the unit test to control and monitor how the “unit” under test interacts with its environment.
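A small sketch of dependency injection with a mock may help here; the MessageSender interface, AlertService, and MockSender names are all hypothetical, invented for illustration:

```java
// The production class depends on an interface, injected through its
// constructor, so a unit test can substitute a mock implementation.

interface MessageSender {
    void send(String destination, String payload);
}

class AlertService {
    private final MessageSender sender;

    AlertService(MessageSender sender) {
        this.sender = sender;
    }

    void raiseAlert(String message) {
        sender.send("ops-channel", "ALERT: " + message);
    }
}

// Mock that records how the unit under test used its dependency.
class MockSender implements MessageSender {
    String lastDestination;
    String lastPayload;

    public void send(String destination, String payload) {
        lastDestination = destination;
        lastPayload = payload;
    }
}

public class AlertServiceTest {
    public static void main(String[] args) {
        MockSender mock = new MockSender();
        new AlertService(mock).raiseAlert("disk full");

        // The test verifies the interaction, not any external system.
        if (!"ops-channel".equals(mock.lastDestination)) throw new AssertionError();
        if (!"ALERT: disk full".equals(mock.lastPayload)) throw new AssertionError();
        System.out.println("all tests passed");
    }
}
```

In production the real sender (for example, one backed by a network connection) is injected instead; the test never touches that external system.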
As an example, in a Distrix agent you would often need to test the distributed object creation and update callbacks, as this is where much of the event handling logic sits. You cannot easily arrange for an actual remote object to be created or updated in a unit test, as this requires the agent to subscribe to an object published by another agent. Such a test would be difficult to run in a quick, automated manner, as there are too many external dependencies to consider.
A faster approach that provides the same test coverage is to create a mock object class that derives from the distributed proxy object class. The unit test can then call the callback function, passing in an instance of this mock in place of the real proxy object. The callback function will treat the object as a proxy (because it derives from the proxy class), while the unit test retains complete control over the mock's behaviour, allowing it to test various conditions within the callback function. You can see an example of this technique using Java classes below: