The latest A List Apart issue features an article about how to safely manage third parties: Dependence Day: The Power and Peril of Third-Party Solutions. The main discussion is around determining whether it is better to develop a feature yourself or to use a third party. One of the criteria is vitality: the risk of the dependency being abandoned.

For external providers this is obviously very important, because they are likely to end up hosting data, so if the provider disappears the data is gone with it. But for linked libraries I think this criterion matters much less, provided a proper abstraction is built. The pain we all suffer when migrating to an alternative, or even to a newer version of the same library, is mostly the result of poorly respected encapsulation.

This is easiest with technical libraries, where one generally only needs a few calls to get a task done. Converting an object to JSON in Java can be done with the Gson library. The first time the application needs to serialize, one imports the library into the project and simply adds these lines:

Gson gson = new Gson();
String content = gson.toJson(myBean);

Some time later another developer needs to serialize to JSON in another part of the app, so he simply imports Gson at that place and makes the same sequence of calls. And trouble has crept in, because at this point the cost of migrating to another library has just doubled. The right answer would have been to create a utility class with a method expressing the need rather than the technical implementation:

import com.google.gson.Gson;

public class Json {

  public static String serializeBean(Object bean) {
    Gson gson = new Gson();
    String content = gson.toJson(bean);
    return content;
  }
}
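
Call sites now express the need through the wrapper instead of touching Gson directly, for instance:

String content = Json.serializeBean(myBean);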

This completely hides the underlying library, which can then be switched at will. There is still one issue to solve before a new third party can replace the old one: it must handle all the use cases. This can be managed with tests. The first developer may only need beans with primitive properties, so he adds tests for primitive values and the appropriate javadoc:

/**
 * Serializes beans to JSON.
 *
 * <p>Bean may only have primitive properties.</p>
 *
 * @param bean bean to serialize.
 * @return JSON representation of bean
 */
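
Such a test might look like the following sketch (JUnit 4 is assumed here, and PrimitiveBean is a hypothetical bean used only for the test):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class JsonTest {

  // Hypothetical bean with only primitive properties.
  static class PrimitiveBean {
    public int age = 7;
  }

  @Test
  public void serializesBeanWithPrimitiveProperties() {
    String json = Json.serializeBean(new PrimitiveBean());
    assertEquals("{\"age\":7}", json);
  }
}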

The second developer may need to handle lists and sets, so he adds new tests and updates the javadoc accordingly:

/**
 * Serializes beans to JSON.
 *
 * <p>Bean may only have properties:<ul>
 *    <li>primitive</li>
 *    <li>collection of wrapped primitives</li>
 * </ul></p>
 *
 * @param bean bean to serialize.
 * @return JSON representation of bean
 */
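
Again as a sketch, the matching test could look like this (ScoresBean being another hypothetical bean):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class JsonCollectionTest {

  // Hypothetical bean holding a collection of wrapped primitives.
  static class ScoresBean {
    public List<Integer> scores = Arrays.asList(1, 2, 3);
  }

  @Test
  public void serializesBeanWithCollectionProperty() {
    String json = Json.serializeBean(new ScoresBean());
    assertEquals("{\"scores\":[1,2,3]}", json);
  }
}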

When switching, the tests prove in seconds whether the new library fulfills the need.
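
For instance, if Gson were to be replaced by Jackson (a sketch only, assuming the jackson-databind module), the whole change would stay inside the utility class, while its contract and tests remain untouched:

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Json {

  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static String serializeBean(Object bean) {
    try {
      return MAPPER.writeValueAsString(bean);
    } catch (JsonProcessingException e) {
      // The original contract does not declare checked exceptions.
      throw new IllegalStateException("Could not serialize bean", e);
    }
  }
}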

This was obviously a very simple example. A more realistic one is around databases: there are so many applications riddled with SQL or MongoDB queries everywhere, queries that become specific to the database provider or version over time. This should never happen with a proper encapsulation of the database behind a functional abstraction. Contrary to what the MVC frameworks of the 2000s enforced, I am not recommending a big full-fledged ORM, but rather creating ad-hoc abstractions as the need arises within the app, possibly using different datastores for different modules:

public interface PonyStable {

  void feed(Pony p);
  void stroll(Pony p);
}

class SqlPonyStable implements PonyStable {

  @Override
  public void feed(Pony p) {
    execSql("UPDATE Pony ...");
  }

  @Override
  public void stroll(Pony p) {
    execSql("UPDATE Pony ...");
  }
}

By dogmatically constraining the third party to a few classes with a clear contract, it becomes easier to switch to a new version or to an alternative. Since both sides have a clear contract, testing is also much easier: most of the application can be tested against mock data, while a real database can be set up for the tests of the wrapper.
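
For example, a minimal in-memory test double might look like this (a sketch; the recording strategy is just an assumption):

import java.util.ArrayList;
import java.util.List;

// In-memory test double recording calls, so application code can be
// exercised without a real database.
class InMemoryPonyStable implements PonyStable {

  final List<String> actions = new ArrayList<>();

  @Override
  public void feed(Pony p) {
    actions.add("feed");
  }

  @Override
  public void stroll(Pony p) {
    actions.add("stroll");
  }
}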