We now briefly discuss the software engineering practices that help us ensure the transparency, reliability, scalability, and extensibility of the grmpy package. Please visit us at the Software Engineering for Economists Initiative for an accessible introduction on how to integrate these practices into your own research.
We use pytest as our test runner. We broadly group our tests into three categories:
* We create random model parameterizations and estimation requests and test for a valid return of the program (see the first sketch after this list).
* We conduct numerous Monte Carlo exercises to ensure that we can recover the true underlying parameterization with an estimation. By also varying the tuning parameters of the estimation (e.g. the number of random draws for integration) and the choice of optimizer, we learn about their effect on estimation performance (see the second sketch below).
* We provide a regression test. For this purpose, we generated random model parameterizations, simulated the corresponding outputs, summed them up, and saved both the parameters and the sums in a json file that ships with the package. The test draws parameterizations at random from the json file, simulates the output variables, and compares the sum of the simulated output with the stored value. This ensures that the package continues to work accurately even after an update to a new version (see the third sketch below).
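A minimal sketch of the first category, using a toy linear model in place of grmpy's actual model; the `simulate` and `fit` stand-ins below are hypothetical and only illustrate the pattern of asserting a valid return for arbitrary random requests.

```python
"""Sketch of a property-style test: any random request must return cleanly."""
import numpy as np
import pytest


def simulate(beta, n_obs, seed):
    """Toy stand-in for the package's simulation routine."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_obs, len(beta)))
    y = x @ beta + rng.normal(size=n_obs)
    return x, y


def fit(x, y):
    """Toy stand-in for the package's estimation routine."""
    coef, *_ = np.linalg.lstsq(x, y, rcond=None)
    return coef


@pytest.mark.parametrize("seed", range(20))
def test_random_request_returns_valid_result(seed):
    rng = np.random.default_rng(seed)
    beta = rng.uniform(-2, 2, size=rng.integers(1, 6))  # random parameterization
    n_obs = int(rng.integers(100, 1_000))               # random request size

    x, y = simulate(beta, n_obs, seed)
    coef = fit(x, y)

    # We only assert a *valid* return, not the quality of the estimate.
    assert coef.shape == beta.shape
    assert np.all(np.isfinite(coef))
```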
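A sketch of the second category under the same toy-model assumption: with a known true parameterization, repeated simulation and estimation should recover the truth, and varying a tuning parameter (here the sample size, as a stand-in for the number of random draws) reveals its effect on estimation performance.

```python
"""Sketch of a Monte Carlo recovery exercise on the toy linear model."""
import numpy as np


def monte_carlo_bias(beta_true, n_obs, n_repetitions, seed=0):
    """Average estimation error across repeated simulated samples."""
    rng = np.random.default_rng(seed)
    errors = np.zeros_like(beta_true)
    for _ in range(n_repetitions):
        x = rng.normal(size=(n_obs, len(beta_true)))
        y = x @ beta_true + rng.normal(size=n_obs)
        coef, *_ = np.linalg.lstsq(x, y, rcond=None)
        errors += coef - beta_true
    return errors / n_repetitions


def test_monte_carlo_recovers_truth():
    beta_true = np.array([0.5, -1.0, 2.0])
    # Varying a tuning parameter shows its effect on estimation
    # performance: the bias should shrink as the sample grows.
    bias_small = monte_carlo_bias(beta_true, n_obs=100, n_repetitions=200)
    bias_large = monte_carlo_bias(beta_true, n_obs=10_000, n_repetitions=200)

    assert np.abs(bias_small).max() < 0.1
    assert np.abs(bias_large).max() < 0.01
```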
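A sketch of the third category, again on the hypothetical toy model: `create_vault` records parameterization/checksum pairs once, and the test re-simulates randomly drawn entries and compares sums, mirroring the json-vault regression test described above.

```python
"""Sketch of the regression-test vault: store (parameterization, checksum)
pairs in a json file once, then re-check them after every code change."""
import json

import numpy as np


def simulate_checksum(beta, n_obs, seed):
    """Simulate the toy model and reduce the output to a single sum."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_obs, len(beta)))
    y = x @ beta + rng.normal(size=n_obs)
    return float(y.sum())


def create_vault(path, n_tests=100, seed=0):
    """Run once: record random parameterizations and their checksums."""
    rng = np.random.default_rng(seed)
    vault = []
    for i in range(n_tests):
        beta = rng.uniform(-2, 2, size=3).tolist()
        checksum = simulate_checksum(np.array(beta), 1_000, i)
        vault.append({"beta": beta, "seed": i, "checksum": checksum})
    with open(path, "w") as f:
        json.dump(vault, f)


def test_against_vault(path="regression_vault.json", n_draws=10, seed=123):
    """Draw stored parameterizations at random and re-simulate them."""
    with open(path) as f:
        vault = json.load(f)
    rng = np.random.default_rng(seed)
    for idx in rng.integers(0, len(vault), size=n_draws):
        entry = vault[idx]
        checksum = simulate_checksum(np.array(entry["beta"]), 1_000, entry["seed"])
        # A mismatch signals that an update changed the package's behavior.
        np.testing.assert_allclose(checksum, entry["checksum"])
```

Because the vault file ships with the package, the same parameterization/checksum pairs can be re-checked after every release.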
We use several automatic code review tools to help us improve the readability and maintainability of our code base. For example, we work with Codacy. In addition, we conduct regular peer code reviews using Reviewable.