Prelude
This post is the second installment of a two-part series about integration testing. The first installment covers executing tests in a restricted environment using Docker. The example repository that this post draws its examples from can be found here.
Introduction
"More than the act of testing, the act of designing tests is one of the best bug preventers known." - Boris Beizer
Before an integration test can be executed, the external systems that the test will touch must be properly configured. If they are not, the test results will not be valid or reliable. For example, a database needs to be loaded with well-defined data that is correct for the behavior being tested. Data updated during a test needs to be validated, especially if that updated data is required to be accurate for a subsequent test.
Go’s testing tool provides a facility to execute code before any test function runs: an entry point function called TestMain. This is the testing equivalent of the traditional entry point function main that all Go applications have. Thanks to the TestMain function, configuring an external system like a database before running any tests is possible. In this post, I will share how to use TestMain to configure and seed a Postgres database, and how to write and run tests against that database.
Managing Seed Data
In order to seed the database, the data needs to be defined and placed somewhere the testing tool can access it. A common approach is to define a SQL file that is part of the project and contains all the SQL commands to be executed. Another approach is to store the SQL commands in constants inside the code. In contrast to both of these approaches, I am going with a pure Go implementation to solve this problem.
It’s often the case that you already have your data structures defined as Go struct types for database access. I am going to take advantage of those existing types, which already move data in and out of the database. Instead of defining seed data in the form of a SQL query, all of the seed data is constructed and assigned to variables based on the application’s existing data structures.
I like this solution because it makes it much easier to write the integration tests and validate that the data is properly flowing in and out of the database and application. Instead of having to compare directly against JSON, the data can be unmarshaled into its appropriate type and compared directly against the variables defined for the seed data. This will not only minimize syntactical comparison errors in tests, but also allow your tests to be more maintainable, scalable, and easier to read.
Seeding The Database
In the project I am sharing for this post, all of the functionality for seeding the test database is contained within a package called testdb. This package is not imported by application code and therefore exists only for the purpose of testing. The three main functions that aid in seeding the test database are SeedLists, SeedItems, and Truncate.
Here is a view of the SeedLists function.
Listing 1
56 func SeedLists(dbc *sqlx.DB) ([]list.List, error) {
57 now := time.Now().Truncate(time.Microsecond)
58
59 lists := []list.List{
60 {
61 Name: "Grocery",
62 Created: now,
63 Modified: now,
64 },
65 {
66 Name: "To-do",
67 Created: now,
68 Modified: now,
69 },
70 {
71 Name: "Employees",
72 Created: now,
73 Modified: now,
74 },
75 }
76
77 for i := range lists {
78 stmt, err := dbc.Prepare("INSERT INTO list (name, created, modified) VALUES ($1, $2, $3) RETURNING list_id;")
79 if err != nil {
80 return nil, errors.Wrap(err, "prepare list insertion")
81 }
82
83 row := stmt.QueryRow(lists[i].Name, lists[i].Created, lists[i].Modified)
84
85 if err = row.Scan(&lists[i].ID); err != nil {
86 if err := stmt.Close(); err != nil {
87 return nil, errors.Wrap(err, "close psql statement")
88 }
89
90 return nil, errors.Wrap(err, "capture list id")
91 }
92
93 if err := stmt.Close(); err != nil {
94 return nil, errors.Wrap(err, "close psql statement")
95 }
96 }
97
98 return lists, nil
99 }
Listing 1 shows the SeedLists function and how it creates test data. Between lines 59-75, a table of list.List values is defined for insertion. Then, between lines 77-96, the test data is inserted into the database. To help compare the data being inserted with the results of any database calls made during the test, the seeded data set is returned to the caller on line 98.
Next, look at the SeedItems function, which inserts more test data into the database.
Listing 2
102 func SeedItems(dbc *sqlx.DB, lists []list.List) ([]item.Item, error) {
103 now := time.Now().Truncate(time.Microsecond)
104
105 items := []item.Item{
106 {
107 ListID: lists[0].ID, // Grocery
108 Name: "Chocolate Milk",
109 Quantity: 1,
110 Created: now,
111 Modified: now,
112 },
113 {
114 ListID: lists[0].ID, // Grocery
115 Name: "Mac and Cheese",
116 Quantity: 2,
117 Created: now,
118 Modified: now,
119 },
120 {
121 ListID: lists[1].ID, // To-do
122 Name: "Write Integration Tests",
123 Quantity: 1,
124 Created: now,
125 Modified: now,
126 },
127 }
128
129 for i := range items {
130 stmt, err := dbc.Prepare("INSERT INTO item (list_id, name, quantity, created, modified) VALUES ($1, $2, $3, $4, $5) RETURNING item_id;")
131 if err != nil {
132 return nil, errors.Wrap(err, "prepare item insertion")
133 }
134
135 row := stmt.QueryRow(items[i].ListID, items[i].Name, items[i].Quantity, items[i].Created, items[i].Modified)
136
137 if err = row.Scan(&items[i].ID); err != nil {
138 if err := stmt.Close(); err != nil {
139 return nil, errors.Wrap(err, "close psql statement")
140 }
141
142                 return nil, errors.Wrap(err, "capture item id")
143 }
144
145 if err := stmt.Close(); err != nil {
146 return nil, errors.Wrap(err, "close psql statement")
147 }
148 }
149
150 return items, nil
151 }
Listing 2 shows the SeedItems function and how it also creates test data. The code is nearly identical to listing 1, except that it operates on the item.Item type. The only function left to share from the testdb package is the Truncate function.
Listing 3
45 func Truncate(dbc *sqlx.DB) error {
46 stmt := "TRUNCATE TABLE list, item;"
47
48 if _, err := dbc.Exec(stmt); err != nil {
49 return errors.Wrap(err, "truncate test database tables")
50 }
51
52 return nil
53 }
Listing 3 shows the Truncate function. As its name suggests, it removes all the data that the SeedLists and SeedItems functions insert.
Creating TestMain Using testing.M
With the package that facilitates the seeding and truncating of the test database complete, it’s time to focus on the setup needed to start running the integration tests. Go’s testing tool allows you to define your own TestMain function if you need to perform activities before any test function is executed.
Listing 4
22 func TestMain(m *testing.M) {
23 os.Exit(testMain(m))
24 }
Listing 4 is the TestMain function that executes before the start of any integration tests. On line 23, an unexported function named testMain is called within the scope of the call to os.Exit. This is done so that deferred functions within testMain can execute while a proper integer value is still passed to the os.Exit call. The following is the implementation of the testMain function.
Listing 5
27 func testMain(m *testing.M) int {
28 dbc, err := testdb.Open()
29 if err != nil {
30 log.WithError(err).Info("create test database connection")
31 return 1
32 }
33 defer dbc.Close()
34
35 a = handlers.NewApplication(dbc)
36
37 return m.Run()
38 }
In listing 5, you can see the testMain function is only 8 lines of code. The function starts by opening a connection to the database with a call to testdb.Open() on line 28. The configuration parameters for this call are set as constants inside the testdb package. It is important to note that this call to Open will fail if the test database is not running. The test database is created and managed by docker-compose, as explained in detail in part 1 of this series.
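The post does not show testdb.Open itself, but its shape is easy to imagine. The sketch below is an assumption, not the repo's actual code: the constant names and values are invented, and the stdlib database/sql front end stands in for sqlx. Note that sql.Open is lazy, so a Postgres driver such as lib/pq must be registered by the test binary for real queries to work.

```go
package main

import (
	"database/sql"
	"fmt"
)

// Connection constants for the Docker-hosted test database; these specific
// names and values are hypothetical, not the example repo's actual ones.
const (
	host   = "localhost"
	port   = 5432
	user   = "root"
	dbname = "testdb"
)

// dsn builds the Postgres connection string from the constants above.
func dsn() string {
	return fmt.Sprintf("host=%s port=%d user=%s dbname=%s sslmode=disable",
		host, port, user, dbname)
}

// Open returns a lazy *sql.DB handle; no connection is attempted until the
// first query, which is when a missing database would surface as an error.
func Open() (*sql.DB, error) {
	return sql.Open("postgres", dsn())
}

func main() {
	fmt.Println(dsn())
}
```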
Once the test database has been successfully connected to, the connection is passed to handlers.NewApplication() on line 35, and the return value of this function is used to initialize a package-level variable of type *handlers.Application. The handlers.Application type is custom to this project and has struct fields for the http.Handler interface, to facilitate routing for the web service, as well as a reference to the open database connection that was created.
Now that an application value is initialized, m.Run can be called to execute the test functions. The call to m.Run is blocking and won’t return until all of the identified test functions have executed. A non-zero exit code denotes failure; 0 denotes success.
Writing Integration Tests for a Web Service
An integration test combines multiple units of code, as well as any integrated services such as a database, and tests the functionality of each unit and the relationships between them. Writing integration tests for a web service generally means that the entry point for each integration test is a route. The http.Handler interface, a required component of any web service, contains the ServeHTTP method, which lets us exercise the routes defined in the application.
The functions that facilitate database seeding and return the seeded data as Go types are particularly useful for asserting the structure of the returned response bodies in web service integration tests. In the following listings I will break down the different parts of a typical integration test for an API route. Seeding the database and obtaining the seed data as Go values, using the functions defined in listings 1 and 2, is the first step:
Listing 6
17 func Test_getItems(t *testing.T) {
18 defer func() {
19 if err := testdb.Truncate(a.DB); err != nil {
20 t.Errorf("error truncating test database tables: %v", err)
21 }
22 }()
23
24 expectedLists, err := testdb.SeedLists(a.DB)
25 if err != nil {
26 t.Fatalf("error seeding lists: %v", err)
27 }
28
29 expectedItems, err := testdb.SeedItems(a.DB, expectedLists)
30 if err != nil {
31 t.Fatalf("error seeding items: %v", err)
32 }
Before calling the seeding functions, each of which could fail individually, truncation of the database must be deferred. This way, regardless of whether the seeding functions succeed or fail, the database will always be clean after the test runs. After truncation is deferred, the testdb seeding functions are invoked and their return values are captured so they can be used in an integration test for a route defined in the example web service. If either of these seeding functions fails, the test calls t.Fatalf and stops.
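That guarantee comes from Go's defer semantics: a deferred call registered at the top of a function runs when the function returns, even on an early error return. A minimal illustration, with stand-ins for Truncate and the seeding calls:

```go
package main

import (
	"errors"
	"fmt"
)

// seed stands in for a seeding call that fails.
func seed() error { return errors.New("seed failed") }

// run mimics the test's shape: cleanup is registered first, executed last.
func run() error {
	defer fmt.Println("truncate tables") // runs no matter how run exits

	if err := seed(); err != nil {
		return err // the deferred cleanup still runs on this early return
	}

	fmt.Println("run test assertions")
	return nil
}

func main() {
	if err := run(); err != nil {
		fmt.Println("error:", err)
	}
}
```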
Listing 7
11 // Application is the struct that contains the server handler as well as
12 // any references to services that the application needs.
13 type Application struct {
14 DB *sqlx.DB
15 handler http.Handler
16 }
17
18 // ServeHTTP implements the http.Handler interface for the Application type.
19 func (a *Application) ServeHTTP(w http.ResponseWriter, r *http.Request) {
20 a.handler.ServeHTTP(w, r)
21 }
In order to invoke registered routes, the Application type implements the http.Handler interface. Its ServeHTTP method simply delegates to the http.Handler value stored as a struct field on Application.
Listing 8
66 req, err := http.NewRequest(http.MethodGet, fmt.Sprintf("/list/%d/item", test.ListID), nil)
67 if err != nil {
68 t.Errorf("error creating request: %v", err)
69 }
70
71 w := httptest.NewRecorder()
72 a.ServeHTTP(w, req)
If you recall from listing 5, an Application value is constructed in order to be utilized during the tests. The ServeHTTP method takes two parameters: an http.ResponseWriter and an *http.Request. Calling ServeHTTP directly, with an httptest.ResponseRecorder to record the response and a request created by http.NewRequest, allows the caller to invoke a registered route, as seen on line 72 of listing 8.
The httptest.NewRecorder function is invoked on line 71 and returns a ResponseRecorder value that implements the http.ResponseWriter interface. After the route is invoked, the ResponseRecorder constructed on line 71 of listing 8 can be analyzed. Its most notable fields are Code, which contains the status code of the response, and Body, a pointer to a bytes.Buffer value that contains the contents of the response.
Listing 9
74 if want, got := http.StatusOK, w.Code; want != got {
75 t.Errorf("expected status code: %v, got status code: %v", want, got)
76 }
In listing 9, the status code returned in the response of the invoked route is compared to the expected code, http.StatusOK. If the two do not match, t.Errorf is invoked, which marks the test as failed and gives context for the failure.
Listing 10
79 var items []item.Item
80 resp := web.Response{
81         Results: &items,
82 }
83
84 if err := json.NewDecoder(w.Body).Decode(&resp); err != nil {
85 t.Errorf("error decoding response body: %v", err)
86 }
87
88 if d := cmp.Diff(expectedItems, items); d != "" {
89 t.Errorf("unexpected difference in response body:\n%v", d)
90 }
The example application uses a custom response body, web.Response, which stores the result of a route under the JSON key results. On line 79 of listing 10, a variable of type []item.Item is declared, which is the expected type of the results key. The items slice is wired to the Results field of resp, declared and initialized on line 80, so that when the body of the response is decoded into resp on line 84, items holds the items the route responded with.
Google’s go-cmp package is a safer and easier-to-use alternative to reflect.DeepEqual, useful for comparing structs, maps, slices, and arrays. The call to cmp.Diff on line 88 ensures that the expected body, defined by the seed data returned in listing 6, and the body returned in the response are equal. If they are not, the test fails and the differences are reported in the output.
Testing Tips and Tricks
The best testing advice that can be dispensed is to test early and often. Tests should not be an afterthought; rather, they should drive the development of your application. This is the idea behind test-driven development. A segment of code is not always readily testable by default. Keeping testing in the back of your mind as you write code ensures that the code being written is, in fact, testable. No unit of code is too small to be worth testing. The more tests your services have, the fewer unknown, or even hidden, side effects will bubble up in production.
The following tips and tricks outlined in this section will allow your tests to be more insightful, easier to read, and faster.
Table Tests
Table tests are a manner of writing tests that prevents duplicating test assertions for the different testable outcomes of the same unit of code. Take this function, which takes an indefinite number of integers and returns their sum:
Listing 11
110 // Add takes an indefinite amount of operands and adds them together, returning
111 // the sum of the operation.
112 func Add(operands ...int) int {
113 var sum int
114
115 for _, operand := range operands {
116 sum += operand
117 }
118
119 return sum
120 }
In testing, I want to ensure that this function can handle the following cases: no operands, which should return 0; one operand, which should return the value of that operand; two operands, which should return their sum; and three operands, which should return their sum.
Writing these tests independently of each other would duplicate a lot of the same calls and assertions. The better way of doing this, in my opinion, is table tests. To write table tests, a slice of anonymously declared structs is defined, containing metadata for each of the test cases. These entries can then be looped through, and each case tested and run independently using t.Run. t.Run takes two parameters: the name of the subtest and the subtest function, which must match the definition func(*testing.T).
Listing 12
123 // TestAdd tests the Add function.
124 func TestAdd(t *testing.T) {
125 tt := []struct {
126 Name string
127 Operands []int
128 Sum int
129 }{
130 {
131 Name: "NoOperands",
132 Operands: []int{},
133 Sum: 0,
134 },
135 {
136 Name: "OneOperand",
137 Operands: []int{10},
138 Sum: 10,
139 },
140 {
141 Name: "TwoOperands",
142 Operands: []int{10, 5},
143 Sum: 15,
144 },
145 {
146 Name: "ThreeOperands",
147 Operands: []int{10, 5, 4},
148 Sum: 19,
149 },
150 }
151
152 for _, test := range tt {
153 fn := func(t *testing.T) {
154 if e, a := test.Sum, Add(test.Operands...); e != a {
155 t.Errorf("expected sum %d, got sum %d", e, a)
156 }
157 }
158
159 t.Run(test.Name, fn)
160 }
161 }
In listing 12, the different cases are defined using a slice of an anonymously declared struct on lines 125 to 150, and these cases are looped through on line 152. The function used to execute each test case is defined on lines 153 to 157; it simply asserts that the returned (actual) sum matches what is expected for that case. If it does not, t.Errorf is called, which fails that individual case and gives context for the failure. Each test case is executed on line 159 with a call to t.Run, using the name defined in the test case.
t.Helper() and t.Parallel()
The testing package provides many helpful utilities to aid in testing, without having to import packages outside of the standard library. My two favorites are t.Helper and t.Parallel, both methods on testing.T, the sole parameter of every Test* function in _test.go files.
It is often the case that helper functions are necessary in testing, and these functions often return errors. Consider this example:
Listing 13
164 // GenerateTempFile generates a temp file and returns the reference to
165 // the underlying os.File and an error.
166 func GenerateTempFile() (*os.File, error) {
167 f, err := ioutil.TempFile("", "")
168 if err != nil {
169 return nil, err
170 }
171
172 return f, nil
173 }
In listing 13, a helper function is defined for a particular package of tests. The function returns a pointer to an os.File and an error. The calling test must check that the returned error is nil each time this helper is called. Normally this is fine, but there is a better way, using t.Helper(), that lets the helper drop the error from its return values.
Listing 14
175 // GenerateTempFile generates a temp file and returns the reference to
176 // the underlying os.File.
177 func GenerateTempFile(t *testing.T) *os.File {
178 t.Helper()
179
180 f, err := ioutil.TempFile("", "")
181 if err != nil {
182 t.Fatalf("unable to generate temp file: %v", err)
183 }
184
185 return f
186 }
In listing 14, the same function from listing 13 is modified to use t.Helper(). The function now takes the *testing.T from the calling Test* function as a parameter and omits the error from its return values. The first thing the function does is call t.Helper() on line 178. This marks the function as a test helper: when any method on t is called within it, the failure is reported against the calling function (the Test* function), with file and line information pointing at the caller rather than at the helper.
Some tests are safe to run in parallel, and Go’s testing package natively supports running tests in parallel. To tell the test binary that a test is safe to run in parallel with other tests, place a call to t.Parallel() at the beginning of any Test* function for which that is true. It is as simple, and as powerful, as that.
Conclusion
Without configuring the external systems that your application leverages at runtime, the behavior of your application cannot be completely validated by your integration tests. Moreover, those external systems, especially if they contain application state, need to be continuously maintained to ensure they hold valid and meaningful data. Go allows developers to not only configure but also maintain external services during testing without ever reaching for a package outside of the standard library. Because of this, integration tests that are readable, consistent, performant, and reliable can all be written at once. The true beauty of Go lies in the minimalistic yet fully featured toolset it gives the developer, without reliance on external libraries or unconventional patterns.