I am a huge test-first advocate. Since seeing Martin Thompson speak, I am now trying to include performance testing in the same approach.
I am going to call this approach Performance, Acceptance and Unit Test Driven Development, or PAUTDD :)
Tools, frameworks and libraries come and go, so I'll start with the theory, then show how I set this up using JUnit and Mockito for unit testing, Cucumber for acceptance tests and Gatling for performance tests. Well, I won't actually show JUnit and Mockito because that is boring!
So here's how I develop a feature:
- Write a high-level end-to-end acceptance test. There will be times when I won't want acceptance tests to be end to end, for example if there is an embedded rules engine.
- Write a basic performance test.
- TDD with unit testing framework until the above two pass.
- Consider scenario based performance test.
I hate working without tests first at the acceptance level. Even for a small feature (half a day of dev?) I find them invaluable for keeping me on track and letting me know when I am functionally done, especially if I end up doing a bit too much unit testing/mocking (bad Chris!): when head down in code it is easy to forget the big picture, namely what functionality you are developing for a user.
Next is a newer part of my development process. Here I want a performance test for the new feature; this may not always be applicable, but it usually is. Whichever framework I am using, I want to be able to run it locally and as part of my continuous integration for trend analysis. However, I want more for my effort than that: I really want to be able to use the same code to run against various environments running different hardware. I have no use for performance tests that aren't in source control and versioned. Tools that use a GUI are no use to me. While working at my last company I constantly found that the "performance testers" would change their scripts, and this would change the trend far more than any application changes. I want to be able to track both.
So how do we set all this up for a Spring Boot application using Cucumber and Gatling? This post is on build setup, not the actual writing of the tests. My aim is to make it easy to add new acceptance and performance tests as features are developed.
Here's the layout of my latest project:
Main is the source, test is the unit tests, and e2e is both the Cucumber and the Gatling tests. I could have had separate source sets for the Cucumber and Gatling tests, but that would have confused IntelliJ's Gradle support too much (and they are nicely split anyway, Cucumber being Java and Gatling being Scala).
There are various articles on Cucumber-JVM; here are the steps I used to get this running nicely in the IDE and via Gradle.
First the new source set:
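The original snippet isn't shown here, but a declaration along these lines would match the description (the source set name e2e comes from the project layout above; exact directory names are assumptions):

```groovy
sourceSets {
    e2e {
        // Java steps, Scala simulations and .feature resources all live under src/e2e
        java.srcDir file('src/e2e/java')
        scala.srcDir file('src/e2e/scala')
        resources.srcDir file('src/e2e/resources')
        // Reuse the test classpath rather than declaring a separate one
        compileClasspath += sourceSets.test.runtimeClasspath
        runtimeClasspath += sourceSets.test.runtimeClasspath
    }
}
```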
Nothing exciting here: we are using the same classpath as test, though we could have had a separate one.
Next are the dependencies. As well as the Cucumber, JUnit and Spring dependencies, this includes Gatling and an HTTP client (for hitting our application).
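A sketch of what that dependency block could look like, assuming Gradle auto-created an e2eCompile configuration for the source set (the versions are illustrative examples from that era, not necessarily the ones the project used):

```groovy
dependencies {
    // Cucumber-JVM for the acceptance tests
    e2eCompile 'info.cukes:cucumber-java:1.2.4'
    e2eCompile 'info.cukes:cucumber-junit:1.2.4'
    e2eCompile 'info.cukes:cucumber-spring:1.2.4'

    // Gatling for the performance tests
    e2eCompile 'io.gatling.highcharts:gatling-charts-highcharts:2.1.7'

    // HTTP client for hitting the running application from step definitions
    e2eCompile 'org.apache.httpcomponents:httpclient:4.5.1'
}
```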
Next is the source code for the acceptance tests. The Features are in the resources folder and the source is in the Java folder. To allow running via Gradle you also create a JUnit test to run all the features. IntelliJ should work fine without this, by just running the feature files.
Here I have Features separated by type, here's an example Feature:
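The real Features are in the project on Github; this one is purely illustrative, with the auction domain borrowed from the /api/auction endpoint used later in this post:

```gherkin
Feature: Users can view auctions

  Scenario: A user views the current auctions
    Given an auction exists for the item "Macbook Pro"
    When a user requests all auctions
    Then the "Macbook Pro" auction is returned
```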
I like to keep the language at a high level so a non-techy can write these. The JUnit test RunEndToEndTests looks like this:
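The runner itself is just an empty annotated class. A sketch using the Cucumber-JVM 1.x JUnit runner (the package and features path are assumptions based on the project layout described above):

```java
package e2e;

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// The class body is empty: the Cucumber runner discovers every .feature
// file under the configured path and runs each scenario as a JUnit test.
@RunWith(Cucumber.class)
@CucumberOptions(features = "src/e2e/resources")
public class RunEndToEndTests {
}
```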
This is what Gradle will pick up when we run this from the command line. You could separate this out into
multiple tests if you wanted.
For running inside IntelliJ you might need to edit the run configuration to include a different Glue, as by default it will be the same as the package your Feature file is in; for this project that wouldn't pick up the GlobalSteps, as it is outside of the security/users folder. This is what my configuration looks like (I set this as the default):
Now our features will run. If you want to see what the implementation of the Steps looks like, check out the whole project from Github.
This is my first project using Gatling; I wanted my scenarios in code that I could have in version control. Previously I've used JMeter. Whereas you can check in the XML, it really isn't nice to look at in diff tools. I've also been forced *weep* to use more GUI-based tools like SOASTA and HP LoadRunner. One thing I haven't looked at is Gatling's support for running many agents. For me to continue using Gatling beyond developer trend analysis, this needs to be well supported.
We already have the dependencies, see the dependencies section above, and we're going to use the same source set.
The only difference is we're going to be writing these tests in Scala.
My first requirement was not to have Gatling installed manually on developer and CI boxes. Here's how to do this in Gradle:
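A sketch of the task, matching the description below (the task name loadTest, the source path and the report location are assumptions; the Gatling 2 CLI flags -sf, -s and -rf are simulations folder, simulation class and results folder respectively):

```groovy
task loadTest(type: JavaExec) {
    dependsOn e2eClasses
    description = 'Runs the Gatling load tests'
    classpath = sourceSets.e2e.runtimeClasspath
    main = 'io.gatling.app.Gatling'
    args = [
        '-sf', 'src/e2e/scala',        // where our simulation source lives
        '-s', 'BasicSimulation',       // fully qualified simulation to run
        '-rf', 'build/reports/gatling' // where to write the HTML report
    ]
}
```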
Where BasicSimulation is the fully qualified name of my Gatling load test. All we do here is define a JavaExec task
with the Gatling main class, tell Gatling where our source is and which simulation to run.
To tie all this together so it runs every time we run Gradle check, we add the following at the bottom of our Gradle build file:
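Something like this single line would do it, assuming the JavaExec task described above is named loadTest:

```groovy
// Make 'gradle check' run the Gatling load tests as well
check.dependsOn loadTest
```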
This will produce reports that can be published and consumed by the Jenkins Gatling plugin.
To run the same load test from IntelliJ, we do exactly the same in a run configuration:
A basic Gatling test looks like this:
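The project's actual simulation is on Github; the sketch below is illustrative only, using the Gatling 2 DSL. The Application class, package name and port are assumptions, and the Spring Boot startup lines are simplified:

```scala
package e2e.gatling

import io.gatling.core.Predef._
import io.gatling.http.Predef._

import scala.concurrent.duration._

class BasicSimulation extends Simulation {

  // Start the Spring Boot application under test and register a shutdown
  // hook to stop it when the JVM exits. 'Application' is this project's
  // (hypothetical) Spring Boot entry point.
  val context = org.springframework.boot.SpringApplication.run(classOf[Application])
  sys.addShutdownHook(context.close())

  val httpConf = http.baseURL("http://localhost:8080")

  // A single virtual user hits /api/auction 100 times,
  // pausing 10 milliseconds between requests.
  val scn = scenario("Auction querying")
    .repeat(100) {
      exec(http("Get all auctions").get("/api/auction"))
        .pause(10.milliseconds)
    }

  setUp(scn.inject(atOnceUsers(1))).protocols(httpConf)
}
```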
This test will run a very simple scenario where a single virtual user hits the /api/auction URL 100 times with a pause of 10 milliseconds between requests. The top couple of lines start the Spring Boot application and register a shutdown hook to stop it. We'll then end up with a report that looks like this:
This is a pretty terrible load test, as it runs for 2 seconds and has a single user. But the point of this post is to set everything up so that when adding new functionality it is trivial to add new performance and acceptance tests.
That's it, happy testing! If you want to see the whole project, it is on Github. It is under active development, so you'll know how I got on with Gatling based on whether it is still there and there are lots of tests that use it!