I am very annoyed by a common usage of Java's Map that it is not made for. Maps are meant to represent a dictionary from keys to values, but they are often used to store unstructured content in place of a regular object.
Map<String, String> dog = person.getDog();
The general reasoning is that a map makes the object more extensible, but it also makes it hard to validate the object right at modification time. In the example above, one would likely end up with some validate method inside the Person object:
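A minimal sketch of what I mean, assuming hypothetical Person/Dog names: the map-based version needs a validate method that has to be called by hand after every change, while a plain typed class gets the same checks for free at construction time.

```java
import java.util.HashMap;
import java.util.Map;

public class MapValidation {

    // Map-based version: the "schema" only exists inside validate,
    // and nothing forces callers to invoke it after a mutation.
    static void validateDog(Map<String, String> dog) {
        if (!dog.containsKey("name") || dog.get("name").isEmpty()) {
            throw new IllegalArgumentException("a dog needs a name");
        }
        try {
            Integer.parseInt(dog.getOrDefault("age", "0"));
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("age must be a number");
        }
    }

    // Typed version: the compiler enforces the structure, and the
    // constructor validates exactly once, at modification time.
    static class Dog {
        final String name;
        final int age;

        Dog(String name, int age) {
            if (name == null || name.isEmpty()) {
                throw new IllegalArgumentException("a dog needs a name");
            }
            this.name = name;
            this.age = age;
        }
    }

    public static void main(String[] args) {
        Map<String, String> mapDog = new HashMap<>();
        mapDog.put("name", "Rex");
        mapDog.put("age", "3");
        validateDog(mapDog); // must be remembered after every put

        Dog dog = new Dog("Rex", 3); // checked here, once and for all
        System.out.println(dog.name + " is " + dog.age);
    }
}
```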
I personally like to use method chaining and fluent interfaces and think these are great patterns, but in Java they tend to be implemented with a return this, and that annoys me. It is generally assumed that methods modify the current instance and return it, as highlighted in Wikipedia's article on method chaining:
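To make the pattern concrete, here is a minimal sketch of the usual Java implementation, with a hypothetical FluentDog class: every setter mutates the current instance and ends with return this so that calls can be chained.

```java
// A minimal sketch of the common Java idiom: setters return
// the current instance so calls can be chained.
public class FluentDog {
    private String name;
    private int age;

    public FluentDog name(String name) {
        this.name = name;
        return this; // the chain relies on returning the same instance
    }

    public FluentDog age(int age) {
        this.age = age;
        return this;
    }

    @Override
    public String toString() {
        return name + " (" + age + ")";
    }

    public static void main(String[] args) {
        // Each call mutates the same object and hands it back.
        FluentDog dog = new FluentDog().name("Rex").age(3);
        System.out.println(dog); // prints "Rex (3)"
    }
}
```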
I really like CouchDB. It is a NoSQL database, but really in the "not only SQL" sense. It is not designed to work across thousands of nodes like Cassandra would. Nor is it designed to handle petabytes of data like Hadoop. But it works really well for providing real-time results of moderately complex queries run on data that seldom changes.
Recently, abroad in a hotel with great Wi-Fi, my computer did not recognize its own Wi-Fi interface. I then tried connecting my Android phone to the Wi-Fi and sharing the connection over USB, but the computer was not discovering that either. Digging a bit, the issue was just that the computer did not enable networking on the device corresponding to the phone, and it could be solved with a few commands. I am writing them down here so I do not have to search again the next time it happens.
The first step was to get the network used by the phone. For that I used a terminal emulator on the phone and typed ip addr, which returned 192.168.1.1/24.
Finding the phone's device on the computer was done by looking at the output of dmesg right after plugging in the phone; for me it was usb0.
Finally, enabling networking on the computer's device, on the same network but with a different address, was simply a matter of typing:
ip addr add 192.168.1.2/24 dev usb0
ip link set dev usb0 up
GitHub recently suffered an outage while upgrading a switch. During the upgrade the network was down for a few seconds, which caused the file servers to suddenly appear partitioned.
I see that as a practical application of the CAP theorem: their distributed storage architecture decided to drop partition tolerance in favor of availability and consistency. So once the datacenter got partitioned, the software misbehaved. Still, dropping partition tolerance was to me a wise choice: availability is desirable, and dropping consistency could lead to state that is very hard to reconcile once the network comes back up. As they explain in the post, it was a known risk, but hard to mitigate:
The fact that the cluster communication between fileserver nodes relies
on any network infrastructure has been a known problem for some time
What I actually retain from this story is that, even within a single datacenter, partitions occur. And GitHub's postmortem posts are very interesting to read.