October 12th, 2010
Big news: our new 3.7.0M2 releases are out for ExtremeTransactions and TransactionsEssentials!
If you look at the release notes, you will see that there are not that many new features (besides bug fixes and some important performance tuning). So what makes these new releases big news? It’s our new release process and build infrastructure that made them possible.
That’s right: we have a new build process. We are now officially using Maven and Mercurial for our builds, instead of Ant and Subversion. Also, we have restructured our repository architecture to better match our business model: we are now geared towards more frequent releases of ExtremeTransactions and optimized even more for our support business.
So we hope you enjoy the new releases as much as we do! Beware though: they are milestone builds, meaning they are bound to have minor issues still. This is mostly due to initial imperfections in our new build process. After all, it _is_ a new way of working for all of us!
September 27th, 2010
The Atomikos connection pooling mechanism invalidates connections when there are any errors - just to be sure that later transactions are not corrupted by prior errors on the connection stream to the back-end. Also, sometimes connections simply time out in the back-end, and are closed without warning. So it may happen that you have some ‘erroneous’ connections in the pool at any given time, and you will only find out the next time you try to use one of these connections (i.e., in your application logic you will see exceptions related to this).
To avoid this (and have the pool proactively validate connections for you) just set a testQuery on the AtomikosDataSourceBean instance. The idea is that you supply a snippet of SQL code that can be used by the pool to test if the connection is still valid. If not, it will be replaced automatically - and you should never get any erroneous connection out of the pool.
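To make this concrete, here is what such a configuration could look like in Spring (a minimal sketch: the bean name, MySQL driver class, URL and credentials are placeholders for illustration - substitute your own XADataSource and settings):

```xml
<!-- Atomikos pooled XA datasource with proactive connection validation -->
<bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean"
      init-method="init" destroy-method="close">
    <property name="uniqueResourceName" value="myDataSource"/>
    <!-- example driver class; use the XADataSource of your own DBMS -->
    <property name="xaDataSourceClassName"
              value="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource"/>
    <property name="xaProperties">
        <props>
            <prop key="url">jdbc:mysql://localhost:3306/mydb</prop>
            <prop key="user">myuser</prop>
            <prop key="password">secret</prop>
        </props>
    </property>
    <property name="poolSize" value="5"/>
    <!-- the test query: used by the pool to validate connections -->
    <property name="testQuery" value="SELECT 1"/>
</bean>
```

Note that `SELECT 1` happens to work on MySQL and several other databases, but (as discussed below) no single statement works everywhere - pick whatever is cheapest and valid on your DBMS.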
The fact that the testQuery is optional has been confusing to some users. Consequently, we’ve been asked to make it required or default to something meaningful. We’ve seriously thought about this, but there are a few problems here:
- Making it required means breaking a lot of existing, working configurations when they upgrade to our newest release. That is because these existing configurations usually do not include any testQuery settings and the new release would fail to read the configuration and initialize correctly - thereby breaking backwards compatibility. We did not want to do this.
- Providing a reasonable default is equally difficult, if not harder: it turns out that there is no known SQL statement that will work on all DBMSs. So our default testQuery - whichever we choose - would always fail on some systems. We did not want to do this either.
So what did we do to improve this? After some input from our LinkedIn group we now have a serious warning message in the logs whenever you don’t set the testQuery.
Note: this will be available in our very next release - due beginning of October.
September 20th, 2010
Maybe you have noticed: we have recently changed our developer access formula - based on customer feedback. Whereas the ‘old’ formula used to be ticket-based and expired after 1 month, the new developer access is as follows:
- There is no limit on the number of issues
- Expiry is after either 3 or 6 months (your choice) - which gives you plenty of time to experiment
- There is, however, a limit of 1 named contact on the customer’s side
Again, this is based on input we got from several customers and prospects. Thanks for your feedback!
PS yes, the old formula has been discontinued: we too experienced problems with the issue limit…
August 21st, 2010
When implementing a service-oriented architecture (SOA), there is always the choice between components (embedded modules of reusable functionality) versus services (deployed once, reused by calls over the network).
How do you know which one to choose? There are a lot of things to consider, but these pointers will give you a head-start:
- if there is a need for different configuration parameters per consumer, favor a component
- if performance of remote network calls is problematic, favor a component
- if deploy-once is crucial, favor a service
- if you have no control over the deployment parameters, favor a service (e.g., if the provider is a third party)
- to dynamically switch between both, choose service component architecture (SCA)
Also, keep in mind that components require setting up the required infrastructure (database schema, queues, etc) for each deployment.
Most readers of this blog will already know about components, because you probably use TransactionsEssentials as a component.
July 20th, 2010
In an attempt to ‘increase performance’, many people will try to hack around in JMS - thereby falling into the idempotent receiver trap of checking for duplicate message receipt themselves. The consequence: scalability actually degrades!
Fortunately, the best way to enjoy reliable messaging is also the simplest one, and it scales linearly.
July 19th, 2010
We’ve uploaded a slideshare presentation on developing transactional applications with Spring. Enjoy!
July 13th, 2010
Here is another excellent article about the cost of application servers, and why a paradigm shift is needed with lighter-weight alternatives:
Interesting note: the author used to work at BEA, so he definitely knows what he is talking about ;-)
July 12th, 2010
Check out http://www.tomcatexpert.com/blog/2010/07/07/how-migrate-jee-applications-tomcat for a nice discussion on how to migrate from Java EE to a lightweight alternative like Tomcat - with Atomikos for JTA if needed.
Of course, you can also use Jetty from Webtide (which has Atomikos pre-integrated into the Hightide edition)…
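For readers who want a concrete starting point: with Spring, wiring in Atomikos as the JTA implementation typically boils down to three bean definitions, as sketched below (the bean names are our choice here, not mandated by either library):

```xml
<!-- the Atomikos JTA transaction manager -->
<bean id="atomikosTransactionManager"
      class="com.atomikos.icatch.jta.UserTransactionManager"
      init-method="init" destroy-method="close">
    <property name="forceShutdown" value="false"/>
</bean>

<!-- the Atomikos UserTransaction for transaction demarcation -->
<bean id="atomikosUserTransaction"
      class="com.atomikos.icatch.jta.UserTransactionImp"/>

<!-- Spring's JTA adapter, delegating to Atomikos -->
<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="transactionManager" ref="atomikosTransactionManager"/>
    <property name="userTransaction" ref="atomikosUserTransaction"/>
</bean>
```

With this in place, your existing Spring declarative transaction configuration can stay as it is - only the underlying transaction manager changes.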
June 15th, 2010
Check out this cool blog entry on how this can all work together.
DISCLAIMER: the suggested solution has not yet been verified by Atomikos…
May 10th, 2010
The cloud phenomenon is an interesting one, and a natural evolution of the outsourcing model. While a lot is going on around cloud computing itself, little is being said about reliability.
Do clouds offer reliability? In a way, yes: caching systems like Terracotta, GemStone or Oracle’s Coherence offer a fail-safe mode for availability of your data in the form of caches. So if a cloud node goes down, chances are that a live copy of the data still exists somewhere else, which means that your process can continue working elsewhere.
All is fine (or mostly fine) if you are working with a single database and are processing, say, web requests in the cache. After all, if you only have one database and no other resources then you don’t even need something like a transaction manager (or Atomikos, for that matter). There are at least two situations where things change:
- If you queue cache updates to enable write-behind, then you find yourself in a queuing scenario and are processing jobs from a queue to a database. Enter distributed transactions.
- If you are not processing web requests but rather get queued requests from the start. Enter distributed transactions.
In both cases you should at least consider using a transaction manager. In both cases, Atomikos is a good choice for the following reasons:
- It’s open source (or at least our basic version is)
- It’s very lightweight and easy to deploy (meaning it lends itself easily to cloud-oriented, virtualized configurations)
- It bundles over 10 years of experience and market leadership
- It provides full crash recovery and all other bells and whistles - unlike many of the built-in solutions that you will find in a cache
So in that way, Atomikos provides “reliability for the cloud”.
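As a sketch of what the queue-to-database scenario above could look like in a Spring configuration: enlist both the JMS connection factory and the datasource as XA resources, pooled by Atomikos, so that one JTA transaction spans the queue receive and the database update. The broker and driver class names below (ActiveMQ, MySQL), the URLs and the credentials are just illustrative assumptions:

```xml
<!-- XA-capable JMS connection factory, pooled by Atomikos -->
<bean id="connectionFactory"
      class="com.atomikos.jms.AtomikosConnectionFactoryBean"
      init-method="init" destroy-method="close">
    <property name="uniqueResourceName" value="myQueueBroker"/>
    <property name="xaConnectionFactory">
        <!-- example broker; any XAConnectionFactory will do -->
        <bean class="org.apache.activemq.ActiveMQXAConnectionFactory">
            <property name="brokerURL" value="tcp://localhost:61616"/>
        </bean>
    </property>
</bean>

<!-- XA-capable datasource, also pooled by Atomikos -->
<bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean"
      init-method="init" destroy-method="close">
    <property name="uniqueResourceName" value="myDatabase"/>
    <property name="xaDataSourceClassName"
              value="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource"/>
    <property name="xaProperties">
        <props>
            <prop key="url">jdbc:mysql://localhost:3306/mydb</prop>
            <prop key="user">myuser</prop>
            <prop key="password">secret</prop>
        </props>
    </property>
</bean>
```

A JTA transaction manager (such as Spring’s JtaTransactionManager backed by Atomikos) then coordinates both resources, so a message is only consumed if the corresponding database update commits - and crash recovery takes care of the rest.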