Archive for the ‘Uncategorized’ Category

New website live!

Friday, March 13th, 2009

If you have been here recently then you will know already: our new site is finally online, live for everyone!

The intent was to revamp our site into a place that perfectly aligns with both our community and our business goals, and I think we have succeeded at doing so - but I will let you be the judge of that.

Thanks to everybody in the team for making this possible. To all our visitors I would say: enjoy!

Creating a website, the open source way

Saturday, February 28th, 2009

It has been months since we started, but our new website will be online very soon now. We are currently doing the last bits of work to get it ready…

What makes our new site interesting is our geographically dispersed team that has been working on it:

  • The content and PDF goodies were written in Canada
  • The overall site design was done in Belgium
  • The content management support - a wiki platform - was (and still is) being done in Germany

From inception to finish, this has been a great deal of fun (and work!) and a very exciting way of doing things! This is yet another example of how far you can get with a work tracker (like FogBugz), a wiki and Skype;-)

Greetings from Devoxx 2008

Wednesday, December 10th, 2008

I was at Devoxx (formerly Javapolis) today. Besides wandering around in the lobby (I prefer to talk to people there rather than listening to talks), the only presentation I attended was on the SpringSource dm Server. Joris Kuipers did a nice job of explaining how the OSGi modules work (I think I got it - more or less;-)

Tomorrow (Thursday) I will be there too - so if you are around just make sure to say hi!

Atomikos for XTP: Transactions for Nothing and Failover for Free

Friday, November 28th, 2008

A lot of the hype in “Extreme Transaction Processing (XTP)” is about fail-over. When Oracle acquired Tangosol, they essentially got an XTP solution for database access in its Coherence product.

As Cameron Purdy notes here, this now allows Oracle to provide a degree of XTP failover.

Now guess what: with Atomikos TransactionsEssentials you get:

  • Transactional robustness for nothing, and
  • failover for free

How? Just do the following:

  1. queue requests in JMS
  2. process them by a cluster of competing consumer processes
  3. use Atomikos TransactionsEssentials to ensure that each message is processed exactly once, without duplicates or message loss

By the semantics of queues, this architecture will give you failover. By the semantics of transactions, this will give you exactly once. Since the requests can be queued by any source, this is multichannel. Everything is commodity infrastructure. This is very easy to scale: just add another process.
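The three steps above can be sketched in plain Java. This is a minimal in-memory simulation: a BlockingQueue stands in for the JMS queue, and a set of processed message IDs stands in for the transactional exactly-once guarantee - none of the names below are Atomikos APIs.

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// In-memory sketch of the queued-request pattern: competing consumers
// drain a shared queue; a message that fails mid-processing is put back
// so another consumer retries it (that is the "failover for free" part),
// and the processed set prevents duplicates (the "exactly once" part).
public class CompetingConsumers {
    static BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    static Set<String> processed = ConcurrentHashMap.newKeySet();

    static void consume() throws InterruptedException {
        String msg;
        while ((msg = queue.poll(100, TimeUnit.MILLISECONDS)) != null) {
            if (!processed.add(msg)) continue;   // duplicate delivery: skip
            try {
                // ... business logic would run here, inside the same
                // transaction as the message receive ...
            } catch (RuntimeException e) {
                processed.remove(msg);           // "rollback"
                queue.put(msg);                  // message is redelivered
            }
        }
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 5; i++) queue.put("order-" + i);
        Thread a = new Thread(() -> { try { consume(); } catch (InterruptedException e) {} });
        Thread b = new Thread(() -> { try { consume(); } catch (InterruptedException e) {} });
        a.start(); b.start(); a.join(); b.join();
        System.out.println(processed.size());
    }
}
```

In the real setup, the "rollback and redeliver" branch is what the JMS broker and the transaction manager give you automatically; here it is simulated by hand.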

In summary, this is XTP of the highest degree:-)

Why Amazon should use two-phase commit (or: how Amazon ripped me off)

Friday, October 24th, 2008

Working for Atomikos, I use two-phase commit a lot. While I don’t want to claim that it is a solution to all problems, I do find it frustrating to hear people proclaiming that they don’t use it because it doesn’t scale (or some other reason).

Take, for instance, Werner Vogels’ talk about the Amazon architecture. Once again, two-phase commit is rejected as a viable solution/technology. Once again, I disagree.

Let me illustrate my point with an example of what really happened to me recently - after ordering a book at Amazon (ironically;-). I can give similar examples with airline ticket reservations but those will have to wait until later…

So what happened, really? Well, I ordered a book that I really wanted to have. I ordered it online at Amazon… All went well, I checked out and paid by VISA. However, that is where things started to go wrong: while waiting for the book to be delivered, I suddenly got an email from Amazon saying that… my order had been canceled!

Canceled? Yes, but not in a way you would think: I still had to pay for the delivery by DHL (sorry, what is that?!). Yes sir, DHL claimed they had found nobody present at the delivery address. The delivery was at our office address, so it is very unlikely that nobody was there in the first place. Moreover, any courier service I know will leave a note that they passed by and at least arrange an alternative delivery. Not this time.

My conclusion? DHL did not arrive at my place. On the Amazon order tracking page, my order had not even left Germany (to be delivered where I live, in Belgium).

Now what will I remember? I will remember that Amazon ripped me off, either directly or via DHL. I will also remember to be very suspicious about people who say they don’t need two-phase commit. Two-phase commit comes down to ensuring agreement between the different parties involved in a transaction. Clearly, there was no such thing in my case.

A CAP Solution (Proving Brewer Wrong)

Sunday, September 7th, 2008

One of the latest challenges in computer science seems to be the CAP theorem. It addresses a perceived impossibility of building large-scale and clustered (web) service architectures. The fact that it (supposedly) has been proven to be true makes what I am going to write here all the more unlikely. Still, read on because I will show that I am right and CAP is not an impossibility after all… While the impossibility proof of CAP is mathematically correct, it is based on assumptions that are too strict. By relaxing these assumptions, I found the solution presented here.

What is CAP?

The CAP theorem (short for consistency, availability, partition tolerance) essentially states that you cannot have a clustered system that supports all of the following three qualities:

Consistency is a quality meaning (informally speaking) that reads and writes happen correctly. In other words, the overall effect of executing thousands or millions of transactions concurrently is the same as if they had been executed one-at-a-time. Usually, this is done with the help of a transaction manager of some sort.
Availability essentially means that every operation (that makes it to a non-failing node) eventually returns a result.
Partition tolerance refers to the possibility of tolerating partitions in the network. Note that we suppose a cluster architecture (which is where the network comes in).

CAP is a conjecture originally formulated by Eric Brewer (Inktomi) and has influenced many of today’s larger-scale websites. In other words, the impact of CAP is very large. To make it worse, the perceived impossibility of a CAP system (one that has all three desirable properties) has led people to advocate something called BASE (Basically Available, Soft-state and Eventually Consistent) - see this talk by Werner Vogels (CTO at Amazon).

As far as I know (but I could be wrong), a theoretical foundation of BASE does not exist yet (it seems more of an informal approach which to me raises serious questions concerning correctness). In this post I will present:

  • a CAP solution
  • how this conforms to what BASE wants to achieve
  • a “design pattern” for building correct systems that (in a way) offer both CAP and BASE qualities

Because CAP is perceived as impossible and because BASE lacks formal treatment, I consider this to be a significant contribution to the state of today’s engineering;-)

What about the proof of Brewer’s theorem?

Brewer’s proof has been published by Nancy Lynch et al and discussed by me (see my earlier post and also this one).

While the theoretical proof of the impossibility of CAP is valid, it has a big limitation: it assumes that all three CAP properties have to be supplied at the same moment in time. If you drop this assumption, then all of a sudden you get into a new spectrum of possibilities. This is what I will do here.

A CAP solution

Enough talk, let’s get to the core of the matter. Here is my solution to CAP. To make it concrete, I will use the concept of a web-shop like Amazon. Here are the rules that are sufficient to ensure CAP:

  1. Process reads from the database if possible, or use a cached value if needed for availability (if the DB is unreachable).
  2. All reads use versioning or another mechanism that allows optimistic locking.
  3. Updates supplied by clients (orders in case of Amazon) are queued for execution, and include the versioning information of the reads that led to the update.
  4. Queued updates are processed when the number of partitions is low enough to do so. The easiest way to do this is with a cluster-wide distributed transaction across all replicas (more on scalability later), but other more refined ways are possible (such as quorum-based replication or any other smart way of replicating). The version information in the update is used to validate it: if the data in the database has been modified since the original read(s) that led to the update, the update is rejected and a cancellation is reported back to the client. Otherwise the order is processed and a confirmation is reported back to the client.
  5. The results (confirmation or cancellation) are sent asynchronously to the clients. This can be either email, message queuing, or any other asynchronous delivery method.
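Rules 2 to 4 can be sketched as a minimal, in-memory Java illustration of the versioned read / validated update idea (the class and method names are mine for illustration, not part of any product):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the optimistic-versioning check: reads carry a version,
// queued updates are validated against the current version before being
// applied, and stale updates are rejected (a "cancellation").
public class VersionedStore {
    static class Versioned {
        final int value, version;
        Versioned(int value, int version) { this.value = value; this.version = version; }
    }

    private final Map<String, Versioned> db = new ConcurrentHashMap<>();

    Versioned read(String key) { return db.getOrDefault(key, new Versioned(0, 0)); }

    // Returns true (a "confirmation") only if the row is unchanged since
    // the read that produced this update; otherwise false (a
    // "cancellation" is reported back to the client).
    synchronized boolean applyQueuedUpdate(String key, int newValue, int readVersion) {
        Versioned current = read(key);
        if (current.version != readVersion) return false; // stale: reject
        db.put(key, new Versioned(newValue, current.version + 1));
        return true;
    }
}
```

Two clients that both read version 0 and queue an update will see one confirmation and one cancellation, exactly as described in rule 4.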

That’s it. Adhere to these guidelines, and you have a CAP architecture. I will not provide a formal proof here (I intend to do that elsewhere, in a research paper), but intuitively the proof is as follows:

  • This system is consistent because reads are based on snapshots and incorrect updates are rejected before they are applied. In other words: there are no incorrect executions.
  • This system is available since reads always return a value, and so do writes (even though they are queued and it may take a while).
  • This system is partition-tolerant because it allows network and node failures.

Granted, this system does not provide all three at the same moment in time (which is how we go around the impossibility), but nevertheless the result is quite strong IMHO.

The limitations

There are some limitations to this solution - all of which seem reasonable:

  1. Read-only requests may be presented with stale information (due to updates that have yet to be applied). In that sense, their results could be “inconsistent”: for instance, the availability of an Amazon item can change between two page views. I do not see this as a major restriction, since no website that I know of will offer read consistency for the duration of a user session. It all depends on what you consider to be within the scope of one transaction;-) Note that this almost corresponds to the snapshot isolation found in Oracle.
  2. Partitions should not last forever: in order for this to work, partitions should be resolved within a reasonable time (reasonable being: within the expected confirmation time for updates). The duration of any partitions also affects the time window in which reads can produce stale data.
  3. The updates have to be applied in the same relative order at all cluster nodes. This puts some restrictions on the algorithm used to do this.

Note that updates are always based on correct reads thanks to the versioning check before they are applied. So update transactions are always consistent.

How does this relate to BASE?

You could see this as a design pattern for BASE if you like. The solution adheres to BASE in the sense that it uses cached reads (if needed) and that the updates are delayed (so you could say they are “eventually” applied and the system becomes “consistent”).

Reflections on scalability

So far the CAP focus was on possibility. I think my solution shows that it is possible. Now how about scaling up?

The naive solution (a huge distributed transaction to update all cluster nodes in-sync) is unlikely to scale: as you add more nodes, every transaction has to update more replicas. Now I am a big fan of transactions, but not of using them in an arbitrary manner. So how to propagate these updates through the cluster?

While smarter solutions for this exist (such as the work by Bettina Kemme), a trivial first try would be to push updates (lazily) to all nodes in the cluster. This can be done with a smart queuing mechanism. The disadvantage is that updates are not applied everywhere at once (rather, the all-or-nothing quality just “ripples” through the system). So you get into the “eventually” style again.

Note that this latter suggestion makes the system behave much like the READ COMMITTED isolation level (which, by the way, is the default in Oracle). So this approach sacrifices consistency/isolation a bit in favor of scalability.

Future work

Additional research could/should be done in the following areas:

  • Improving read consistency through session affinity
  • The best way to push the updates through the cluster
  • Performance evaluation in real life implementations

Final note and disclaimer

I did not see Brewer’s original presentation of the CAP theorem - so it could be that what he meant with consistency also involved all reads (see the limitations of the solution I presented here). In that case I did not find a solution for CAP but at least it is a framework and proof outline for BASE ;-)

UPDATE 15/3/2012:

It seems like Greg Young and Udi Dahan have been working along similar lines and gave this pattern/solution a name: CQRS.

Why Forte migrations should use Atomikos

Saturday, September 6th, 2008

Forté/UDS is an end-of-life technology that used to be in Sun’s product portfolio. When talking to people who have done a lot with Forté in the past, it seems that Forté can be considered an ancestor of Java:

  • It has an object-oriented (4GL) development language.
  • Like Java’s JMX, Forté also has instrumentation (the agent is even called iconsole - like jconsole for Java’s built-in JMX agent these days!).
  • It has distributed transactions.
  • It has a strong notion of events as first-class citizens in the language.

One thing Forté does not have is Enterprise JavaBeans (EJB) - nor the application server’s XML configuration issues. This means that Forté developers who migrate to Java (because they are left little choice) are confronted with complexities that they did not have to bother with in their 4GL environment.

Thanks to Atomikos and the J2EE without application server methodology, teams who used to work in Forté can easily do Java/J2EE without having to bother about the clutter of EJB nor about the application server’s XML hell. What’s more, in combination with Spring, Hibernate and JMS there is an equivalent, light-weight Java stack that (thanks to Atomikos) can still do all the connection pooling, event-driven and transactional processing that is needed.

What makes it even better is that this methodology seems to achieve productivity equal to that of the 4GL environment in Forté, which is pretty good given that Java is a 3GL and is not widely known as a productivity miracle.

The Achilles heel of the CAP theorem

Friday, September 5th, 2008

In my last post I discussed the theoretical proof of the CAP theorem. Both the theorem and the proof have a limitation that might very well render them not-so-universal as assumed.

The limitation of the CAP proof

The limitation of the CAP proof (as formulated by Lynch et al) is the following: it assumes that - for the purpose of availability - requests are to be served even when there is a partition in the cluster.

A way around the limitation

There is a way around this limitation - although it may sound exotic: just make sure that there are no partitions when requests are served.

How? By simply doing the following:

  • Queue requests (e.g., in JMS).
  • Only process requests when there is no partition problem.
  • Send responses asynchronously, for instance via email.

Since no partition (hopefully) lasts forever, this solution does not lead to livelock.

Also, note that quorum solutions exist to avoid that the complete cluster has to be up at the same time.

Is this the capitulation of CAP? Who knows…

My take on CAP

Wednesday, September 3rd, 2008

The CAP theorem (Consistency, Availability, Partition tolerance) has been receiving quite a lot of interest lately.

What is CAP about?

First let me give credit here: I am deriving my inspiration from the theoretical insights found in this paper, co-authored by one of my favorite women scientists, Nancy Lynch from MIT. If you get a chance to read this paper, go ahead - it will bring you some very useful fundamental understanding…

The CAP theorem is essentially a limitation on what you can do with clustered (web) services in the fashionable context of SOA.

The word ‘cluster’ is important here since that is what it is all about. In particular, the theorem states that you can’t have all three properties (Consistency, Availability, Partition tolerance) in one and the same system (read: service). This implies that there is no perfect solution to building a high-throughput popular service - or is there? Let’s first explore what each quality means…


Consistency

By consistency, the theorem refers to the property that changes (updates) to the service back-end are visible to later queries. Simplifying: if you add something to your shopping basket then it will appear there the next time you retrieve your basket status. That sounds trivial, but it is not if the basket is spread over multiple physical server processes… Consistency is commonly ensured (between processes) by having some sort of distributed transaction coordinator, or (assuming a central back-end) a single centralized database.


Availability

The Lynch paper uses a very simple but sufficient definition of “availability”: a system is available if every request to it returns. In other words: there is no infinite blocking.


Partitioning

Partitioning means a cut-off between two segments of the cluster. In other words, one or more nodes become unreachable for at least some time.

What is the Theorem saying?

You can’t have all three of the above qualities, period. However, you can combine any two of them if you like. This is proven in the paper by Lynch et al. Also (and this is important) you can apply different combinations of qualities to different parts of your system. Meaning: you can stress consistency in one part, availability in another part, and so on. For instance, order processing or payment processing can be done with consistency and availability (sacrificing partition tolerance) whereas querying the product catalog can be done differently (stressing partition tolerance at the expense of consistency).

Does this contradict or invalidate Atomikos?

Not at all, quite the contrary: it makes Atomikos (and its third generation of TP monitors) all the more relevant. Why? Because Atomikos products can help you in making those parts consistent when you want them to be.

Virtually achieving all three qualities

If you embrace asynchronous messaging (a la JMS or email) and extreme transaction processing (XTP) then it is possible to asymptotically realize all three qualities (consistency, availability, partition-tolerance) provided that you do use a callback mechanism to communicate results (e.g., by sending a confirmation email). Here is how:

  • Queue requests in JMS.
  • Process each request transactionally (so failures will leave the request queued for retries).
  • The process that digests each request can be arbitrarily complex and use transactions (consistency) and return whenever it likes (thanks to the queuing, no reply is expected within a preset time frame).
  • Any lack of availability of the processing is recovered by the queues: failed requests will stay queued until the process in the back-end is in fact available again.

Now did I just break the CAP impossibility? More on this in a next post…

Unlimited scaling, easy!

Friday, August 1st, 2008

Suppose you want to develop a high-volume transaction processing system in Java/J2EE. How would you do it? Most people would say: don’t use JTA/XA transactions because they kill performance. Wrong. And they would also say: use an appserver to scale. Again, they couldn’t be more wrong.

Here is the magic recipe on how we build systems with virtually unlimited scalability at Atomikos:

  • Kick out your appserver as soon as you can, as explained here. J2EE is not limited to an appserver. J2EE is a set of APIs. The appserver ties these APIs to a programming model that almost nobody needs. Conclusion: drop the latter.
  • Use a persistent JMS queue to store transaction requests. This allows easy load-balancing and provides crash resilience for ongoing requests. It also de-couples the clients from the transaction processing system.
  • Use ExtremeTransactions to process the requests (stored in JMS). This allows for reliable, exactly-once message processing as outlined here. Make sure to use the supplied JMS and JDBC drivers!
  • To add more power, just add a second VM (process) on a separate CPU.
  • Repeat until performance is high enough.

You will reach the required performance because of the intra-VM nature of each process you add. The only potential bottlenecks are your own database or JMS backend. So scaling comes down to scaling your backends, which is much simpler than scaling your application itself (which has already been done in a natural way as outlined above).

So don’t let anybody fool you: transactions do scale - even without limits!