“Dynamic Integration Testing” in Kotlin
Intro
I’ve been maintaining kotlin-faker — a library that generates “random” fake data for testing and data anonymization purposes — for a while now, and have been relying heavily on integration tests since early on. These tests helped me find many issues that would’ve been hard to find otherwise, if possible at all, due to the nature of the functionality that the library provides; after all, we’re talking about randomly generated data here.
Maintaining kotlin-faker presents some interesting challenges, particularly due to the library's extensive domain coverage. For example, as of release v1.6.0, kotlin-faker contained more than 213 “data providers” (distinct data generators for various things like address, name, bank, ...), each teeming with functions dedicated to producing random data, with a total of more than 1,000 distinct functions that generate random data and are accessible to end users.
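Concretely, each data provider is exposed as a property on the Faker instance, and its generator functions are invoked as regular methods. A small illustrative sketch (the output values here are made up, but these particular providers and functions do exist in the library):

val faker = Faker()
// Each "data provider" groups generators for one domain
faker.address.city()    // e.g. "Port Monte"
faker.name.firstName()  // e.g. "Jane"
faker.bank.name()       // e.g. "ABN AMRO"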
Tip: Take a look at the faker-bot cli application that provides more details on the available functionality of the kotlin-faker lib.
The crux of the matter lies in testing these functions — a daunting task at first glance, given their inherent unpredictability and the sheer volume of possibilities.
The Quest for Comprehensive Testing
Traditionally, unit tests serve as the first line of defense when it comes to dynamic code verification, ensuring that the internal mechanics of the code work as intended. However, given the nature of kotlin-faker, which thrives on the unpredictability of its output, integration tests become indispensable. These tests are not just supplementary; they are vital in uncovering hidden issues that might elude unit tests, issues that could remain dormant until a user encounters a malfunctioning function.
The question then arises: How does one efficiently test the output of hundreds of functions designed to generate random data? Manually crafting a test for each function is neither practical nor feasible.
Enter Kotest
Unlike more “traditional” Java testing frameworks like JUnit, which rely heavily on annotations, Kotest defines tests through descriptive specs — essentially, functions that allow for more expressive test-writing techniques.
Consider the simplicity and power of enclosing a test within a repeat(10) { } construct or a forEach { } block:
describe("randomly-generated string") {
repeat(10) { r ->
it("should be a non-empty string, run#$r") {
// your test
}
}
}
These capabilities not only enhance the readability of tests but also open doors to new testing strategies, which in the case of kotlin-faker proved to be particularly useful since we’re dealing with random data generation.
describe("list of inputs") {
listOf("foo", "bar", "baz").forEach { input ->
it("should do something when using $input") {
// your test
}
}
}
Note: It’s worth mentioning that we could do something similar with e.g. JUnit Parameterized Tests, but kotest adds a little “zing” to it by giving us specs that allow for a more “functional” approach to writing tests instead of having to deal with individual test methods annotated with @Test.
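For comparison, a rough JUnit 5 equivalent of the Kotest forEach example above might look like this (a sketch, assuming the junit-jupiter-params dependency is on the classpath):

import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.ValueSource

class InputTest {
    // One test method, executed once per value in the source
    @ParameterizedTest
    @ValueSource(strings = ["foo", "bar", "baz"])
    fun `should do something when using input`(input: String) {
        // your test
    }
}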
This, however, solves only half the problem. We still need to write a test for each function that generates data. Or do we? 🤔
Enter Reflection
Some libraries try to avoid it, others can’t live without it. There is a lot of controversy around reflection in general, and Java reflection in particular.
- Reflection: Is using reflection still “bad” or “slow”? What has changed with reflection since 2002?
- Is it a bad habit to (over)use reflection?
- Why should I use reflection?
- What is reflection and why is it useful?
I believe reflection, like anything else, is just a tool for solving a problem. Used properly, it helps you solve the problem in a “simpler” way; abused, it creates even more problems instead.
So what about using reflection for testing purposes? I’m not talking about modifying the visibility of private methods, for example, to be able to test them in unit tests; that is a good example of something you shouldn’t do. But let me show you a case where I found the usage more than justified, and where it saved me hundreds of lines of test code.
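Before we get to the actual test, here is a minimal sketch of what discovering and invoking functions reflectively looks like in Kotlin (a hypothetical Greeter class; requires the kotlin-reflect dependency):

import kotlin.reflect.KVisibility
import kotlin.reflect.full.declaredMemberFunctions

class Greeter {
    fun greet(name: String) = "Hello, $name!"
    private fun secret() = "hidden"
}

fun main() {
    val greeter = Greeter()
    // Discover all public functions declared on Greeter and invoke them reflectively;
    // the first argument to call() is always the receiver instance
    Greeter::class.declaredMemberFunctions
        .filter { it.visibility == KVisibility.PUBLIC }
        .forEach { fn -> println("${fn.name} -> ${fn.call(greeter, "world")}") }
}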
The usage of reflection marked a turning point when it comes to integration testing of kotlin-faker. Through reflection, the integration testing setup dynamically identifies and invokes every available data-generation function across the library's myriad domains. This automated, comprehensive coverage ensures that no stone is left unturned and no function untested. Not only that, it ensures that any new functionality will always be tested as well, w/o updating the integration test code whatsoever.
Imagine a test that, instead of being statically coded for a specific function, dynamically adapts to invoke every data-generation function within the library and verify its output against a set of expected criteria. By leveraging Kotest’s “functional specs” in tandem with Kotlin’s reflection capabilities, we achieve precisely that:
describe("every public function in each data provider") {
val faker = Faker()
// Get a list of all publicly visible data providers
val providers: List<KProperty<*>> = faker::class.declaredMemberProperties.filter {
it.visibility == KVisibility.PUBLIC
&& it.returnType.isSubtypeOf(FakeDataProvider::class.starProjectedType)
}
// Get a list of all publicly visible functions in each provider
val providerFunctions = providers.associateBy { provider ->
provider.getter.call(faker)!!::class.declaredMemberFunctions.filter {
it.visibility == KVisibility.PUBLIC && !it.annotations.any { ann -> ann is Deprecated }
}
}
assertSoftly {
providerFunctions.forEach { (functions, provider) ->
functions.forEach {
context("result value for ${provider.name} ${it.name} is resolved correctly") {
val regex = Regex("""#\{.*}|#++""")
val value = when (it.parameters.size) {
1 -> it.call(provider.getter.call(faker)).toString()
2 -> {
if (it.parameters[1].isOptional) { // optional params are enum typed (see functions in Dune, Finance or Tron, for example)
it.callBy(mapOf(it.parameters[0] to provider.getter.call(faker))).toString()
} else it.call(provider.getter.call(faker), "").toString()
}
3 -> {
if (it.parameters[1].isOptional && it.parameters[2].isOptional) {
it.callBy(mapOf(it.parameters[0] to provider.getter.call(faker))).toString()
} else it.call(provider.getter.call(faker), "", "").toString()
}
else -> throw IllegalArgumentException("")
}
it("resolved value should not contain yaml expression") {
if (value.contains(regex)) {
throw AssertionError("Value '$value' for '${provider.name} ${it.name}' should not contain regex '$regex'")
}
}
it("resolved value should not be empty string") {
if (value == "") {
throw AssertionError("Value for '${provider.name} ${it.name}' should not be empty string")
}
}
it("resolved value should not contain duplicates") {
val values = value.split(" ")
if (values.odds() == values.evens()) {
throw AssertionError("Value '$value' for '${provider.name} ${it.name}' should not contain duplicates")
}
}
}
}
}
}
}
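A quick note on odds() and evens() in the duplicates check: these are not standard library functions but small helper extensions from the test sources. A plausible sketch of what they look like, assuming an index-parity split:

// Hypothetical helpers: pick out elements at even and odd indices.
// For a duplicated value, e.g. "John John", split(" ") gives ["John", "John"],
// so evens() == odds() and the duplicate is detected.
fun <T> List<T>.evens(): List<T> = filterIndexed { index, _ -> index % 2 == 0 }
fun <T> List<T>.odds(): List<T> = filterIndexed { index, _ -> index % 2 == 1 }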
With only 60 lines of code we are executing 3000+ (as of writing) integration tests.
Now, verifying the output against expected results is a challenge in itself since, once more, we’re dealing with randomly generated data here. However, we can at least check that all the “expressions” and “placeholders” were resolved and that the output is not empty and does not contain duplicates. This gives us enough certainty when it comes to most of the functionality while keeping the tests as simple as possible. There are still corner cases, and they are tested separately in a more “traditional” way — by explicitly writing each test.
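For instance, such a hand-written test might look something like this (a hypothetical example; shouldNotBeBlank is a standard Kotest matcher from io.kotest.matchers.string):

describe("name corner cases") {
    val faker = Faker()
    it("firstName should produce a non-blank value") {
        // An explicitly written test for a single function,
        // rather than one discovered via reflection
        faker.name.firstName().shouldNotBeBlank()
    }
}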
Kotlin-faker also supports data generation in numerous locales (60 to be exact, as of this writing), and we want to check that the Faker instance can at least be initialized with each of those w/o errors. Again, this is very simple to do with kotest just by iterating over a list of locales, with a test declared inside the forEach block:
describe("Faker instance with a custom locale") {
val localeDir = requireNotNull(this::class.java.classLoader.getResource("locales/"))
val locales = File(localeDir.toURI()).listFiles().mapNotNull {
if ((it.isFile && it.extension == "yml") || (it.isDirectory && it.name != "en")) {
it.nameWithoutExtension
} else null
}
locales.forEach {
it("Faker with locale '$it' should be initialized without exceptions") {
assertDoesNotThrow {
faker { fakerConfig { locale = it } }
}
}
}
}
Conclusion
Integration testing is a critical part of any development process. These tests provide an additional safety net, ensuring that any changes to the code do not inadvertently break the functionality.
The testing strategy we employed in kotlin-faker has had a profound impact on the library's ease of maintenance and on verifying the correctness of its functionality. Systematic testing of every function across all data generators has helped me uncover issues that could have been very difficult, if not impossible, to find otherwise, and which would ultimately affect the reliability of the library. These tests have not only helped in ensuring the correctness of the generated data but also in maintaining the library's quality over time. Whether it's a completely new data generator class, an extra function in an existing generator, or a change to the yml file that serves as a “data dictionary”, we have greater certainty that such a change will not negatively affect the functionality, because at the very least it will always be tested against the baseline of the integration tests.
Moreover, this approach has facilitated a more efficient development process overall. As new functions or domains are added to kotlin-faker, the integration tests automatically “adapt”, ensuring that every data-generation function of the library remains tested.