When deploying an application into the AWS (Amazon Web Services) public cloud, chances are that we will be using some AWS services, like S3 (Simple Storage Service). In this article we will learn how we can mock such dependencies for cases like early development stages and automated pipelines.

Component testing

In a distributed ecosystem (e.g. SOA), component testing allows for rapid development by mocking the underlying dependencies with fake services, so that many teams can develop in parallel with only a contract for the APIs.

Another advantage of mocking downstream services is that you can easily test your code in an automated pipeline without depending on those services' stability or development stage.

Service Calls

Component tests verify the subject component or service in its entirety against mocked downstream components or services. Unlike integration tests, they won’t hit the real downstream services, as the endpoints will all be configured to hit a different target: a mock server.

As such, response expectations can be set at the beginning of the test so that the mock server returns a deterministic response when it is called under certain conditions.

Mocked Service Calls

Hmm, that does not sound safe enough!

It’s important to note, though, that component tests do not replace the brave and true end-to-end integration tests, even though they are a lot cheaper.

Component tests give us the ability to test the more obscure edge cases of our services without incurring the penalty of hammering all dependencies for every single one of them, even if all edge cases make use of the same underlying functionality of those services.

By having only end-to-end integration tests, there’s a huge overlap of functionality testing that brings no value to the product and delays the pipelines, so it’s important to reduce those tests to the minimum quantity that keeps the maximum quality.

The secret to an efficient test suite is to find an equilibrium between what can and should be tested with unit tests, component tests and integration tests.

Moto Server

Mock Server is a powerful mock-server implementation that can be used to mock expectations for AWS services or any other HTTP-based web service. For Mock Server to be effective, though, we need to set those expectations properly ourselves.
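For illustration, here is a sketch of what setting such an expectation might look like with MockServer’s Java client. This assumes a MockServer instance listening on localhost:1080, and the path and body below are made up for this example:

import org.mockserver.client.MockServerClient;

import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

// expect a GET for a given object key and return a canned, deterministic body
new MockServerClient("localhost", 1080)
        .when(request()
                .withMethod("GET")
                .withPath("/my-test-bucket/MyObjectKey"))
        .respond(response()
                .withStatusCode(200)
                .withBody("sample file contents"));

Writing expectations like this for every S3 operation our code performs quickly becomes tedious, which is where Moto Server comes in.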

For AWS services, a simpler approach is to use Moto Server, a wrapper service around the Moto library, which is used to mock AWS services in Python applications. It already simulates a great variety of AWS services that can be used seamlessly.

It can be used to give a great sense of confidence without using the real thing. Note that using a real AWS service may incur monetary costs and requires supplying AWS access secrets, and that might not be easily sorted out without exposing that information or using more complex secret-management tools like Vault.
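For example, when the client is pointed at a mock, it can be built with throwaway static credentials, so no real secrets ever touch the test environment. This is a sketch using the SDK’s BasicAWSCredentials; the placeholder values are made up:

// static dummy credentials: Moto Server does not validate them,
// so no real AWS secrets need to be exposed
AmazonS3Client s3 = new AmazonS3Client(
        new BasicAWSCredentials("fake-access-key", "fake-secret-key"));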

Sample S3 access code

Let’s say we have this sample Java code for accessing S3, adapted from the official AWS SDK Java sample.

public static void main(String[] args) throws IOException {
    AmazonS3Client s3 = new AmazonS3Client()
            .withRegion(Region.getRegion(Regions.US_EAST_1));

    String bucketName = "my-test-bucket";
    String key = "MyObjectKey";

    // create the bucket
    s3.createBucket(bucketName);

    // list buckets
    s3.listBuckets()
            .forEach(bucket -> System.out.println(bucket.getName()));

    // upload from a file
    s3.putObject(new PutObjectRequest(bucketName, key,
            new File("/path/to/sample/file.txt")));

    // download an object and print its contents
    S3Object obj = s3.getObject(new GetObjectRequest(bucketName, key));
    new BufferedReader(new InputStreamReader(obj.getObjectContent()))
            .lines()
            .forEach(System.out::println);

    // delete the object
    s3.deleteObject(bucketName, key);

    // delete the bucket
    s3.deleteBucket(bucketName);
}

If we run the above code using our AWS credentials, it will run against an endpoint like my-test-bucket.s3-us-east-1.amazonaws.com. How can we test this against a fake service instead?

Adapting the code for testing

Since we need to fake the target service, we need a way to tell the AWS SDK to use a different endpoint. Fortunately, the AWS SDK provides a way to do this by overriding the default auto-generated endpoint with a custom one.

To use it, the first lines of the above snippet would need to be replaced with:

AmazonS3Client s3 = new AmazonS3Client()
        .withRegion(Region.getRegion(Regions.US_EAST_1));

// override the endpoint if one is configured
getEndpointConfig().ifPresent(s3::withEndpoint);

That’s the only tweak we need in order to use an endpoint other than the official one, such as the Moto Server endpoint.
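The getEndpointConfig() helper above is not part of the AWS SDK. As a minimal sketch, assuming the custom endpoint is passed through an environment variable (the S3_ENDPOINT variable name is my assumption, not part of the original sample), it could look like:

import java.util.Optional;

// hypothetical helper: returns the custom endpoint, if one was configured;
// when the variable is absent, the client keeps the official AWS endpoint
private static Optional<String> getEndpointConfig() {
    return Optional.ofNullable(System.getenv("S3_ENDPOINT"));
}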

Starting Moto Server

To start Moto Server, I got this convenient Docker image that can be used like this:

docker run --name s3 -d -e MOTO_SERVICE=s3 -p 5001:5000 -i picadoh/motocker

Or we can use it in docker-compose as:

s3:
  image: picadoh/motocker
  environment:
    - MOTO_SERVICE=s3
    - MOTO_HOST=10.0.1.0
  ports:
    - "5001:5000"

Now we can just run our sample code using the endpoint http://localhost:5001 and everything should go smoothly.
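To tie it all together, here is a minimal sketch of a component test against the local Moto Server. JUnit 4 is assumed, the class and value names are mine, and path-style access is enabled so the SDK doesn’t try to prepend the bucket name to localhost:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;
import org.junit.Test;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

import static org.junit.Assert.assertEquals;

public class S3ComponentTest {

    @Test
    public void putAndGetObjectRoundTrip() throws Exception {
        // dummy credentials: Moto Server accepts any values
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("fake-key", "fake-secret"));
        // point the client at the local Moto Server instead of AWS
        s3.setEndpoint("http://localhost:5001");
        // use path-style URLs (http://localhost:5001/bucket/key)
        s3.setS3ClientOptions(S3ClientOptions.builder()
                .setPathStyleAccess(true).build());

        s3.createBucket("my-test-bucket");
        s3.putObject("my-test-bucket", "MyObjectKey", "hello moto");

        String content = new BufferedReader(new InputStreamReader(
                s3.getObject("my-test-bucket", "MyObjectKey").getObjectContent()))
                .lines()
                .collect(Collectors.joining("\n"));

        assertEquals("hello moto", content);
    }
}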

If we don’t remove the object and the bucket at the end, it’s actually possible to access the object’s contents through the browser by entering the address http://localhost:5001/my-test-bucket/MyObjectKey.
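Or, assuming curl is available, from the command line:

curl http://localhost:5001/my-test-bucket/MyObjectKey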

Conclusion

In this article we’ve seen how we can set up Moto Server to simulate the behavior of S3, so we can easily test our application in the early development stages or in automated testing pipelines without incurring the monetary or maintenance costs of calling AWS S3 directly.

Moto Server is under constant maintenance and I’ve had the opportunity to contribute to it myself, so feel free to contribute as well with new features or fixes.

Have fun testing!