Comparing Quartz, cron4j and Obsidian Scheduler

We’ve all worked on projects that required us to do very basic tasks at periodic intervals. Perhaps we chose a basic ScheduledThreadPoolExecutor. If we’re already using Spring, maybe we tried their TaskExecutor/TaskScheduler support.
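
For reference, that baseline is only a few lines – a periodic task with a ScheduledThreadPoolExecutor looks something like this (the task itself is a trivial stand-in):

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
// run a trivial task every 10 minutes, starting immediately
executor.scheduleAtFixedRate(new Runnable() {
   public void run() {
      System.out.println("running periodic task");
   }
}, 0, 10, TimeUnit.MINUTES);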

But once we encounter any number of situations such as an increased quantity of tasks, new interdependencies between tasks, unexpected problems in task execution or the like, we will likely start to consider a more extensive scheduling solution.

Our website has a fairly exhaustive feature comparison of the most commonly used Java schedulers, so we won’t go into that in this post, but we do encourage you to take a look.

Features aside, are there other criteria that should come into play? Factors such as development team responsiveness to feature requests and bug reports certainly can be critical for many organizations. If you head over to the Quartz Download page, you’ll see that they haven’t had a release in over a year, despite there being many active unresolved issues. Cron4j hasn’t had a release in over 2 years. While Spring has made some changes to the design of their TaskExecutor/TaskScheduler support in recent releases, their true priorities lie elsewhere as they have not really done much to expand the feature set.

Obsidian Scheduler on the other hand is actively maintained, actively supported (with free online support!) and responsive to our user community. In the past year, we’ve averaged a release per month delivering a blend of features, enhancements and fixes, proof that we’re a nimble and responsive organization. We encourage you to give Obsidian a try today!

Java Enterprise Software Versus What it Should Be

A lot of developers end up in the Java “enterprise” world at some point in their careers. I know the term alone conjures up all kinds of reactions, and rightly so. Often the environments with the most interesting technical challenges end up being the ones nobody wants to work on because they are brittle, difficult to change and just no fun. Often the problems that crop up in big projects are due to management, but I’ve seen developers make lots of bad decisions that lead to awful pieces of software, all in the name of “enterprise”.

What is Enterprise?

You could argue that the term can mean just about anything, and that’s true, but for the sake of this article I’m going to define it in a way that I think lines up with the common usage. The average enterprise project has these attributes:

  • typically in a large corporate environment
  • multiple layers of management/direction involved
  • preference for solutions from large vendors like Red Hat, IBM or Microsoft
  • preference for well-known, established (though sometimes deficient) products and standards
  • concerns about scaling and performance

Now that I’ve defined what type of project we are talking about, let’s see what they usually end up looking like.

The Typical Enterprise Java Project

Most of us have seen the hallmarks of enterprise projects. It would help if we had an example, so let’s pretend it’s an e-commerce platform with some B2B capabilities. Here’s what it might look like:

  • EJB3 plus JPA and JSF – these fit a “standard” and everyone uses them, so it’s a safe choice.
  • SOAP – it’s standard and defines how things like security should work, so there’s less to worry about.
  • JMS Message-Driven Beans – it fits into the platform and offers reliability and load-balancing.
  • Quartz for job scheduling – a “safe” choice (better the devil you know than the devil you don’t).
  • Deployed on JBoss – it has the backing of a large company and paid support channels.

Now, the problem with a project like this isn’t necessarily the individual pieces of technology selected. I definitely have issues with some of the ones in my example, but the real issue is how the choices are made and what the motivations are for using certain technologies.

The stack of software above is notoriously more difficult to manage and work with compared to other choices. Development will take longer to get off the ground, changes will be more difficult to make as requirements evolve, and the project will ultimately end up more complicated than other possible solutions.

Enterprise Decision-Making

The goals that enterprise projects usually fixate on when making choices are:

  • Low-risk technology – make a “safe” choice that won’t result in blow-back, even if it is known to have serious drawbacks.
  • Standards obsession – worrying more about having well-defined specifications like EJB3 or SOAP than offering the simplest solution that does the job effectively.
  • Need for paid support with an SLA, often without concern to the quality or timeliness of responses.
  • Designing out of fear of unknown future requirements.

These goals aren’t bad ones, with the exception of the last one, but they tend to overshadow the real goals of every software project. The primary objective of all software projects is to deliver a project that:

  • is on-time;
  • meets requirements;
  • is reliable;
  • performs well; and
  • is easy to maintain and extend.

These should be what decision-makers focus on in software projects, whether small or large. It’s obvious that sometimes special organizational needs factor into the choices that are made, but fundamentally good choices generally apply in all types of organizations.

So what if we were to reimagine our project with those goals in mind instead?

A Reimagined Enterprise Project

First, a little disclaimer: there are many ways one can go in any project, and I won’t claim that the technologies below are better than the ones mentioned previously. Tools need to be evaluated according to your needs, and everyone’s are different.

What I will try to do is demonstrate an example technology stack along with the reasoning for each choice. This will show how well-designed systems can be built that survive in the enterprise world without succumbing to the bad choices that are often made.

Here’s the suggested stack:

  • Spring MVC using Thymeleaf – stable history, lots of development resources, quick development and flexibility. Don’t be afraid of using platforms or libraries, but try to avoid too much “buy-in” to their stack that you might regret.
  • Simple database layer using jOOQ for persistence where useful. This lets us manage performance in a more fine-grained way, while still getting easy database interaction and avoiding ORM pitfalls.
  • REST using Jackson JSON processor – REST and JSON are both popular because they are easy-to-use and understand, cheap to develop, use simple standards and are familiar to developers. Lock-in isn’t much of a problem either – we could easily switch to another JSON processor without much difficulty, unlike SOAP. This can be easily secured with SSL and basic authentication.
  • JMS messaging using JSON-encoded messages on ActiveMQ – loose coupling, reliability and load-balancing, without the nastiness of being stuck with Message-Driven Beans.
  • Obsidian Scheduler – simple-to-use, offers excellent monitoring and reduces burden on developers. Once again, the goal is to simplify and reduce costs where possible.
  • Deployed on Tomcat – no proprietary features used. This helps us follow standards, avoid upgrade issues and keep things working in the future. Who needs support with SLAs when things aren’t inexplicably breaking all the time?

I think the stack and corresponding explanations above give an idea of what an enterprise project can be if it’s approached from the right angle. The idea is to show that even enterprise projects can be simple and nimbly built – bloated frameworks and platforms aren’t a necessary part, and rarely offer any significant tangible benefit.

In Closing

Recent trends in development toward technologies like REST have been very encouraging, and inroads are being made into the enterprise world. Development groups are realizing that simplicity leads to reliability and cost-effective solutions, as long as the underlying technology choices meet the performance, security and other needs of the project.

The software world moves quickly, and is showing promising signs that it’s moving in the right direction. I can only hope that one day the memories of bloated enterprise platforms fade into obscurity.

Job Chaining in Quartz and Obsidian Scheduler

In this post I’m going to cover how to do job chaining in Quartz versus Obsidian Scheduler. Both are Java job schedulers, but they have different approaches, so I thought I’d highlight them here and give some guidance to those evaluating both options.

It’s very common when using a job scheduler to need to chain one job to another. Chaining in this case refers to executing a specific job after a certain job completes (or maybe even fails). Often we want to do this conditionally, or pass on data to the target job so it can receive it as input from the original job.

We’ll start with demonstrating how to do this in Quartz, which will take a fair bit of work. Obsidian will come after since it’s so simple.

Chaining in Quartz

Quartz is the most popular job scheduler out there, but unfortunately it doesn’t provide any way to give you chaining without you writing some code. Quartz is a low-level library at heart, and it doesn’t try to solve these types of problems for you, which in my mind is unfortunate since it puts the onus on developers. But despite this, many teams still end up using Quartz, so hopefully this is useful to some of you.

I’m going to outline probably the most basic way to perform chaining. It will allow a job to chain to another, passing on its JobDataMap (for state). This is simpler than using listeners, which would require extra configuration, but if you want to take a look, check out this listener for a starting point.
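
For reference, a bare-bones listener-based version might look something like the sketch below. NextJob is a hypothetical target job class, the source job name is hard-coded, and the listener must still be registered via scheduler.getListenerManager().addJobListener(...); the rest of this post uses the simpler template approach instead.

import org.quartz.*;
import org.quartz.listeners.JobListenerSupport;

public class ChainingJobListener extends JobListenerSupport {
   @Override
   public String getName() {
      return "chainingJobListener";
   }

   @Override
   public void jobWasExecuted(JobExecutionContext context, JobExecutionException e) {
      // chain only when the source job completed without an exception
      if (e == null && "firstJob".equals(context.getJobDetail().getKey().getName())) {
         try {
            JobDetail next = JobBuilder.newJob(NextJob.class)
                  .withIdentity("secondJob", "secondJobGroup")
                  .usingJobData(context.getJobDetail().getJobDataMap())
                  .build();
            context.getScheduler().scheduleJob(next,
                  TriggerBuilder.newTrigger().startNow().build());
         } catch (SchedulerException se) {
            se.printStackTrace();
         }
      }
   }
}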

Sample Code

This will rely on an abstract class that provides basic flow and chaining functionality to any subclasses. It acts as a very simple Template class.

First, let’s create the abstract class that gives us chaining behaviour:

import static org.quartz.JobBuilder.newJob;
import static org.quartz.TriggerBuilder.newTrigger;
import org.quartz.*;
import org.quartz.impl.*;

public abstract class ChainableJob implements Job {
   private static final String CHAIN_JOB_CLASS = "chainedJobClass";
   private static final String CHAIN_JOB_NAME = "chainedJobName";
   private static final String CHAIN_JOB_GROUP = "chainedJobGroup";
   
   @Override
   public void execute(JobExecutionContext context) throws JobExecutionException {
      // execute actual job code
      doExecute(context);

      // if chainJob() was called, chain the target job, passing on the JobDataMap
      if (context.getJobDetail().getJobDataMap().get(CHAIN_JOB_CLASS) != null) {
         try {
            chain(context);
         } catch (SchedulerException e) {
            e.printStackTrace();
         }
      }
   }
   
   // actually schedule the chained job to run now
   private void chain(JobExecutionContext context) throws SchedulerException {
      JobDataMap map = context.getJobDetail().getJobDataMap();
      @SuppressWarnings("unchecked")
      Class<? extends Job> jobClass = (Class<? extends Job>) map.remove(CHAIN_JOB_CLASS);
      String jobName = (String) map.remove(CHAIN_JOB_NAME);
      String jobGroup = (String) map.remove(CHAIN_JOB_GROUP);

      JobDetail jobDetail = newJob(jobClass)
            .withIdentity(jobName, jobGroup)
            .usingJobData(map)
            .build();
         
      Trigger trigger = newTrigger()
            .withIdentity(jobName + "Trigger", jobGroup + "Trigger")
            .startNow()
            .build();
      System.out.println("Chaining " + jobName);
      StdSchedulerFactory.getDefaultScheduler().scheduleJob(jobDetail, trigger);
   }

   protected abstract void doExecute(JobExecutionContext context) 
                                    throws JobExecutionException;
   
   // request chaining of the target job; the chain fires after this job's doExecute() completes
   protected void chainJob(JobExecutionContext context, 
                          Class<? extends Job> jobClass, 
                          String jobName, 
                          String jobGroup) {
      JobDataMap map = context.getJobDetail().getJobDataMap();
      map.put(CHAIN_JOB_CLASS, jobClass);
      map.put(CHAIN_JOB_NAME, jobName);
      map.put(CHAIN_JOB_GROUP, jobGroup);
   }
}

There’s a fair bit of code here, but it’s nothing too complicated. We create the basic flow for job chaining by creating an abstract class which calls a doExecute() method in the child class, then chains the job if it was requested by calling chainJob().

So how do we use it? Check out the job below. It actually chains to itself to demonstrate that you can chain any job and that it can be conditional. In this case, we will chain the job to another instance of the same class if it hasn’t already been chained, and we get a true value from new Random().nextBoolean().

import java.util.*;
import org.quartz.*;

public class TestJob extends ChainableJob {

   @Override
   protected void doExecute(JobExecutionContext context) 
                                   throws JobExecutionException {
      JobDataMap map = context.getJobDetail().getJobDataMap();
      System.out.println("Executing " + context.getJobDetail().getKey().getName() 
                         + " with " + new LinkedHashMap(map));
      
      boolean alreadyChained = map.get("jobValue") != null;
      if (!alreadyChained) {
         map.put("jobTime", new Date().toString());
         map.put("jobValue", new Random().nextLong());
      }
      
      if (!alreadyChained && new Random().nextBoolean()) {
         chainJob(context, TestJob.class, "secondJob", "secondJobGroup");
      }
   }
   
}

The call to chainJob() at the end will result in the automatic job chaining behaviour in the parent class. Note that this isn’t called immediately, but only executes after the job completes its doExecute() method.

Here’s a simple harness that demonstrates everything together:

import org.quartz.*;
import org.quartz.impl.*;

public class Test {
   
   public static void main(String[] args) throws Exception {

      // start up scheduler
      StdSchedulerFactory.getDefaultScheduler().start();

      JobDetail job = JobBuilder.newJob(TestJob.class)
             .withIdentity("firstJob", "firstJobGroup").build();

      // trigger our source job, which may chain to another
      Trigger trigger = TriggerBuilder.newTrigger()
            .withIdentity("firstJobTrigger", "firstJobbTriggerGroup")
            .startNow()
            .withSchedule(
                  SimpleScheduleBuilder.simpleSchedule().withIntervalInSeconds(1)
                  .repeatForever()).build();

      StdSchedulerFactory.getDefaultScheduler().scheduleJob(job, trigger);
      Thread.sleep(5000);   // let job run a few times

      StdSchedulerFactory.getDefaultScheduler().shutdown();
   }
   
}

Sample Output

Executing firstJob with {}
Chaining secondJob
Executing secondJob with {jobValue=5420204983304142728, jobTime=Sat Mar 02 15:19:29 PST 2013}
Executing firstJob with {}
Executing firstJob with {}
Chaining secondJob
Executing secondJob with {jobValue=-2361712834083016932, jobTime=Sat Mar 02 15:19:31 PST 2013}
Executing firstJob with {}
Chaining secondJob
Executing secondJob with {jobValue=7080718769449337795, jobTime=Sat Mar 02 15:19:32 PST 2013}
Executing firstJob with {}
Chaining secondJob
Executing secondJob with {jobValue=7235143258790440677, jobTime=Sat Mar 02 15:19:33 PST 2013}
Executing firstJob with {}

Deficiencies

Well, we’re up and chaining, but there are some problems with this approach:

  • It doesn’t integrate with a container like Spring to use configured jobs. More code would be required.
  • It forces you to know up front which jobs you want to chain, and write code for it.
  • Configuration is fixed, unless, once again, you write more code.
  • No real-time changes (unless you write more code).
  • A fair bit of code to maintain, and a high likelihood you will have to expand it for more functionality.

The theme here is that it’s doable, but it’s up to you to do the work to make it happen. Obsidian avoids these problems by making chaining configurable, instead of it being a feature of the job itself. Read on to find out how.

Chaining in Obsidian

In contrast to Quartz, chaining in Obsidian requires no code and no up-front knowledge of which jobs will chain or how you might want to chain them later. Chaining is a form of configuration, and like all job-related configuration in Obsidian, you can make live changes at any time without a build or any code at all. Job configuration can use a native REST API or the web UI that’s included with Obsidian.

The following chaining features are available for free:

  • No code and no redeploy to add or remove chains.
  • You can chain specific configurations of job classes.
  • You can chain only on certain states, including failure.
  • Chain conditionally based on the source job’s saved state (equivalent to Quartz’s JobDataMap), including multiple conditions (regexp, equals, greater than, etc.).
  • Chain only when matching a schedule.

Check out the feature and UI documentation to find out more.

Now that we know what’s possible, let’s see an example. Once you have your jobs configured, just create a new chain using the UI. REST API support for chaining is coming shortly, but as of 1.5.1 it isn’t included in the API. If you need to script this right now, we can provide pointers.

In the UI, it looks like the following:

[Screenshot: the chaining configuration UI]

Easy, huh? All configuration is stored in a database, so it’s easy to replicate it in various environments or to automate it via scripting. As a bonus, Obsidian tracks and shows you all chaining state including what job triggered a chained job. It will even tell you why a job chain didn’t fire, whether it’s because the job status didn’t match, or one of your conditions didn’t.

Conclusion

That summarizes how you can go about chaining in Quartz and Obsidian. Quartz definitely has a minimalist approach, but that leaves developers with a lot of work to do.

Meanwhile, Obsidian provides rich functionality out of the box to keep developers working on their own rich functionality, instead of the plumbing that so often seems to consume their time. If you have any suggestions or feature requests for Obsidian, drop us a note by leaving a comment or by contacting us.

Comparing Job Development in Quartz and Obsidian

Getting your program code to the point that it satisfies the functional requirements provided is a milestone for developers, one that hopefully brings satisfaction and a sense of accomplishment. If that code must be executed on a schedule perhaps for multiple uses with custom schedules and configurable parameters, this can mean a whole new set of problems.

We’re going to compare how we would write a job in Quartz and one in Obsidian that satisfies the above requirements. We’ll use the example scenario of a recurring report. In this scenario, the report has the following dynamic criteria: it is emailed to a specified user, the report format can be either PDF or Excel, and of course the execution frequency varies by user.

The following will be the sample class we’ll use to satisfy these requirements.

public class MyReportClass {
    public void emailReport(String emailAddress, String reportFormat) {
	// ...generate report in desired format
	// ...email report to user
    }
}

For the purpose of this exercise, we will leave this class alone and write a wrapper class for scheduling, allowing for its continued use in non-scheduled contexts.

Let’s start with Obsidian. All Obsidian jobs start with implementing a single interface: SchedulableJob. Our Obsidian job class will look something like this:

import com.carfey.ops.job.Context;
import com.carfey.ops.job.SchedulableJob;
import com.carfey.ops.job.param.Configuration;
import com.carfey.ops.job.param.Parameter;
import com.carfey.ops.job.param.Type;

@Configuration(knownParameters={
		@Parameter(name= MyScheduledReportClass.EMAIL, type=Type.STRING, required=true),
		@Parameter(name= MyScheduledReportClass.REPORT_FORMAT, type=Type.STRING, defaultValue="PDF", required=true)
}) 
public class MyScheduledReportClass implements SchedulableJob {
	public static final String EMAIL = "email";
	public static final String REPORT_FORMAT = "reportFormat";

	public void execute(Context context) throws Exception {
		String email = context.getConfig().getString(EMAIL);
		String reportFormat = context.getConfig().getString(REPORT_FORMAT);
		new MyReportClass().emailReport(email, reportFormat);
	}
}

You’ll notice we can annotate the class with the required parameters. This ensures that when this job is scheduled for execution, the email and reportFormat parameters will always be available. Obsidian will not allow the job to be configured without these values and will also ensure their type. But we wouldn’t mind going a step further. We’d like to verify that the reportFormat is a supported value. How can we do so before the job is run?
We can change our class to implement ConfigValidatingJob and implement the necessary method.

Now our class looks like this:

import com.carfey.ops.job.ConfigValidatingJob;
import com.carfey.ops.job.Context;
import com.carfey.ops.job.config.JobConfig;
import com.carfey.ops.job.param.Configuration;
import com.carfey.ops.job.param.Parameter;
import com.carfey.ops.job.param.Type;
import com.carfey.ops.parameter.ParameterException;
import com.carfey.suite.action.ValidationException;

@Configuration(knownParameters={
		@Parameter(name= MyScheduledReportClass.EMAIL, type=Type.STRING, required=true),
		@Parameter(name= MyScheduledReportClass.REPORT_FORMAT, type=Type.STRING, defaultValue="PDF", required=true)
}) 
public class MyScheduledReportClass implements ConfigValidatingJob {
	public static final String EMAIL = "email";
	public static final String REPORT_FORMAT = "reportFormat";

	public void execute(Context context) throws Exception {
		String email = context.getConfig().getString(EMAIL);
		String reportFormat = context.getConfig().getString(REPORT_FORMAT);
		new MyReportClass().emailReport(email, reportFormat);
	}

	public void validateConfig(JobConfig config) throws ValidationException, ParameterException {
		String reportFormat = config.getString(REPORT_FORMAT);
		if (!"PDF".equalsIgnoreCase(reportFormat) && !"EXCEL".equalsIgnoreCase(reportFormat)) {
			throw new ValidationException("Report format must be either PDF or EXCEL");
		}
	}

}

That’s it! Our job will now only accept being scheduled with an email address specified and a valid report format. You could easily extend this to other types of custom validation, such as ensuring the email address is well-formed or perhaps that it is in an allowable domain.
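
For instance, a naive email format check could be added to the same validateConfig() method (the regex here is illustrative only – real email validation warrants something more robust):

	public void validateConfig(JobConfig config) throws ValidationException, ParameterException {
		String reportFormat = config.getString(REPORT_FORMAT);
		if (!"PDF".equalsIgnoreCase(reportFormat) && !"EXCEL".equalsIgnoreCase(reportFormat)) {
			throw new ValidationException("Report format must be either PDF or EXCEL");
		}
		// naive sanity check on the email address, for illustration only
		String email = config.getString(EMAIL);
		if (email == null || !email.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+")) {
			throw new ValidationException("Email address is not valid");
		}
	}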

Now for Quartz. Let’s first identify some differences. Quartz doesn’t provide any mechanism for ensuring parameters are specified or valid before runtime. And since Quartz doesn’t provide an execution context, the best you can do is write your own code that validates the parameters on startup. Our sample below follows the easiest approach in Quartz: simply fail the job at runtime if the report format is invalid.

import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;


public class MyScheduledReportClass implements Job {
	public static final String EMAIL = "email";
	public static final String REPORT_FORMAT = "reportFormat";

	public void execute(JobExecutionContext context) throws JobExecutionException {
		JobDataMap data = context.getMergedJobDataMap();
		String email = data.getString(EMAIL);
		String reportFormat = data.getString(REPORT_FORMAT);
		if (!"PDF".equalsIgnoreCase(reportFormat) && !"EXCEL".equalsIgnoreCase(reportFormat)) {
			throw new JobExecutionException("Report format must be either PDF or EXCEL");
		}
		new MyReportClass().emailReport(email, reportFormat);
	}

}

You may be thinking that the classes seem fairly comparable and I would agree. But with the Obsidian job, there’s nothing else that needs to be done. Since setting the runtime schedule and specifying parameters tend to be fluid, those are not done in code or even in static configuration. Using Obsidian’s UI or REST API, you specify the schedule and parameters for each instance or version of the job that is needed.

Obsidian always provides an execution context that can be standalone or be embedded as a part of an existing execution context.

Quartz never provides an execution context. Unless you are deploying in a servlet container, you always need to initialize the scheduling environment. Even when using a servlet container, you must help Quartz along. That means that with Quartz, you’ve seen only a portion of the code and/or configuration you’ll need.

Initialize the scheduler:

SchedulerFactory sf = new StdSchedulerFactory();
Scheduler scheduler = sf.getScheduler();
scheduler.start();

No administration console and no REST API means code and/or config to schedule and parameterize your job.

JobDetail job = newJob(MyScheduledReportClass.class)
      .withIdentity("joe's report", "group1")
      .usingJobData(MyScheduledReportClass.EMAIL, "joe@****.com")
      .usingJobData(MyScheduledReportClass.REPORT_FORMAT, "PDF")
      .build();

Trigger trigger = newTrigger()
      .withIdentity("trigger1", "group1")
      .startNow()
      .withSchedule(dailyAtHourAndMinute(1, 30))
      .build();

scheduler.scheduleJob(job, trigger);

Now this may not seem too bad, but now imagine that Joe says he wants the report in Excel, not PDF. Are you really going to say that it requires code changes, followed by a build, followed by testing, acceptance, and promoting a new release?

True, some of the above can be moved to configuration files. While that may avoid a build cycle, it does present its own set of issues. You still have to push new configuration files, restart the JVM process and deal with potential mistakes in the new configuration that could derail all scheduling.

This also doesn’t get into the issues surrounding misfires, job concurrency and execution exception handling and recoverability discussed here.

What do you think? Share your experiences using Quartz for scheduling in your Java projects by leaving a comment. We’d like to hear from you.

Feature Comparison of Java Job Schedulers

At Carfey Software, we love our flagship product, Obsidian Scheduler. We believe that Obsidian is the best choice for most scheduling needs. Why? Because Obsidian is carefully designed to meet both simple and complex requirements. We think it stacks up well whether you are struggling with an existing scheduler or investigating if you should once again use one of the de facto scheduler solutions on your new project, or perhaps are just curious about alternatives.

So we decided to compare Obsidian to Quartz, cron4j and Spring. And if the technology you’re considering isn’t listed here, why not use these items as a guide to consider what is important for your upcoming project? For a brief overview, check out our feature comparison.

Real-time Schedule Changes / Real-time Job Configuration

  • Obsidian: Yes
  • Quartz: No native interactivity
  • cron4j: No native interactivity
  • Spring: No

Initially, these may not seem very important, but we’ve all likely dealt with situations where we had to temporarily disable a job or change when it runs due to changes in requirements, unexpected technical problems or simply unanticipated behaviour. Obsidian provides both a UI and a REST API to make these changes and they can be effective at the very next minute. Quartz and cron4j are able to make these changes, but they are done via an API or via configuration, so it’s up to you as the developer to find a way to expose this functionality in real-time.

Ad-hoc Job Submission
  • Obsidian: Yes
  • Quartz: No
  • cron4j: No native interactivity
  • Spring: No

Configurable Job Conflicts
  • Obsidian: Yes
  • Quartz: No
  • cron4j: No
  • Spring: No

As you can see, this means supporting something like ad-hoc job submission is also not easily done with these other technologies, if the library even supports it at all.

When it comes to configurable job conflicts, these too can be configured in real-time. So if two concurrently executing jobs turn out to be colliding with each other in your production environment, you can actually adjust to the circumstance with Obsidian, whereas with other schedulers you may have no recourse but shutting down, changing code or configuration, and then starting up again. With Obsidian’s conflict support, you could even keep the conflict configuration as a medium- or long-term solution.

Code- and XML-Free Job Configuration

  • Obsidian: Yes
  • Quartz: No
  • cron4j: No
  • Spring: No

Obsidian provides you with a rich administration UI exposed via a standard web application. We even support job parameterization that can be validated and enforced via the UI if your job is so designated. Quartz and cron4j are essentially just libraries, so they require code and/or configuration as their means of job configuration.

Since we want to be able to make these types of dynamic changes, Obsidian provides a write access user role which corresponds to scheduler operators who can access the UI and perform the necessary changes. All these changes are audited in Obsidian and these audit logs are searchable from the UI, giving you insight into what changes have been made by your team members.

Job Event Subscription/Notification

  • Obsidian: Yes
  • Quartz: No
  • cron4j: No
  • Spring: No

Quartz and cron4j can handle event notifications via custom listeners. But again, if you want to send out emails on certain events, you have to write that code. If you want to change who receives which notifications, you either expose the mechanism to make those changes, or push new configuration files or possibly even new code. Obsidian chooses not to use custom listeners since it natively provides the things these listeners would typically be used for. Custom listeners would otherwise be needed to handle something like job chaining, but Obsidian supports that natively, even allowing for configuration of conditional chaining decisions. For all events, you can subscribe generally or by specific entity, e.g. to all job failures or just a specific job’s failures.

Custom Listeners
  • Obsidian: No
  • Quartz: Yes
  • cron4j: Yes
  • Spring: No

Job Chaining
  • Obsidian: Yes
  • Quartz: Implement yourself using custom listeners
  • cron4j: Implement yourself using custom listeners
  • Spring: No

Obsidian goes one step further and allows you to subscribe and be notified of a broader set of events. For example, you can be notified when an Obsidian node is shut down, when someone changes a job configuration item, when someone changes a system configuration item, and so on. And all notifications are logged in the system for review.

Monitoring & Management UI

  • Obsidian: Yes
  • Quartz: No
  • cron4j: No
  • Spring: No

Obsidian’s monitoring and management UI is powerful, yet very easy to use. You can even play around with it at our live, functional and interactive demo site to see for yourself. Or download Obsidian and have a local version running against an in-memory DB and bundled servlet container within minutes. Quartz does have an add-on pay product that provides some UI. But Obsidian’s UI is free to use even if you use Obsidian’s free single-node version.

We’ve discussed management already, but monitoring and investigating is another key part of keeping software running smoothly. If a job fails or a job seems to have run with unexpected criteria, having to gain access to log files and then pore over them to try to find the problem is inefficient, unproductive and a frustrating process for support staff and developers alike. Obsidian’s UI can grant read-only access to support and developer staff so they can review the details of job executions (both success and failures). Filtering and custom search criteria can be used to drill down and find the relevant detail all without ever having to share or transfer files around.

Zero Configuration Clustering and Load Sharing

  • Obsidian: Yes
  • Quartz: No
  • cron4j: No
  • Spring: No

If Obsidian is running, it natively has the ability to be clustered, providing you with load sharing, reliability and failover. Every Obsidian Scheduler instance of any type automatically joins the existing pool/cluster or establishes it if it is the first one on the scene. No extra configuration required. No communication between servers necessary. No multicast, no replication of data between servers. This means that you can easily swap out hardware in case of failure or add a new member for load sharing with ease. Of the comparison technologies, only Quartz supports clustering, but it requires special advanced configuration. Also, changing from non-clustered mode to clustered mode requires taking the existing Quartz instance down.

Job Execution Host Affinity
  • Obsidian: Yes
  • Quartz: No
  • cron4j: Not Applicable
  • Spring: Not Applicable

Obsidian in its pooling also supports host specificity so that within a cluster, specific nodes can be designated as the allowable execution nodes for a given job.

Scripting Language Support in Jobs

  • Obsidian: Yes
  • Quartz: No
  • cron4j: No
  • Spring: No

Obsidian allows you to use Groovy, JavaScript, Python and BeanShell as script languages, in addition to standard Java jobs. It’s been implemented such that you can edit the scripts right in Obsidian’s UI console. One of the biggest benefits this scripting support provides that we and our customers have found is the ability to quickly write new jobs without redeploying. For example, operators can react quickly to situations and configure a simple Python script to run in certain job failure conditions.

Scheduling Precision

  • Obsidian: Minute
  • Quartz: Second
  • cron4j: Minute
  • Spring: Millisecond

No Java scheduler can really guarantee with fine precision when a job will fire. Busy hardware could easily lead to pauses or delays in any strategy to fire any activity at an expected time. As such, and due to the performance degradations that would be associated with more aggressive scheduling, we made a decision with Obsidian to support only minute-level precision for job scheduling. If you absolutely require more aggressive and precise scheduling knowing there are no assurances, consider the alternatives above.

Job Scheduling & Management REST API

  • Obsidian: Yes
  • Quartz: No
  • cron4j: No
  • Spring: No

Obsidian introduced a REST API in version 1.5 to ease integration into other applications and software environments, regardless of the technology used. A complete range of job, scheduling and host management features are exposed via the API. This allows you to integrate Obsidian into external monitoring systems or perhaps even write Obsidian jobs that react to specific situations. For example, if a job that runs hourly has been failing continually over a period of many hours, perhaps you would want to automatically disable it. The API can also be used to retrieve the available execution and logging data in Obsidian and could be used for generating reports or informing interested parties of pertinent activity.

Custom Calendar Support

  • Obsidian: No
  • Quartz: Yes
  • cron4j: No
  • Spring: No

Quartz does have a feature to support custom calendars. This allows you to reference custom scheduling options in your job’s configured schedule. For example, perhaps you would want to run a job on every weekday, skipping certain business holidays. You can do so with Quartz, but not so with any of these other schedulers unless you were to put custom code in the job itself.

Conclusion

Obsidian has many additional features that haven’t been detailed here, such as configurable recovery options, resubmission of failed jobs, parameterized job support, job configuration validation, job results storage/retrieval and so on. In practice, many developers and even project managers gravitate toward the de facto solutions, but for too long we in the developer community have been fighting with these scheduling technologies and contending with the inferior results. Try our live, functional and interactive demo site to see for yourself. If you like what you see, download Obsidian and be refreshed with this easy-to-use and feature-rich scheduler.

Null and 3-Dimensional Ordering Helpers in Java

When dealing with data sets retrieved from a database, if we want them ordered, we usually will want to order them right in the SQL, rather than order them after retrieval. Our database will typically be more efficient due to available processing power, potential use of available indexes and overall algorithm efficiency in modern RDBMSes. You also have great flexibility to have complex ordering criteria when you use these ORDER BY clauses. For example, assume you had a query that retrieved employee information including salary and relative steps (position) from the top position. You could easily have a first-level ordering where salaries are grouped into categories (<= $50 000, $50 001 to $100 000, > $100 000), and the next level ordered by relative position. If you could assume that salaries were appropriate for all employees, this might give you a nice idea of where there is too much or too little management in the company – a very rough approach I wouldn’t recommend; this is just a sample usage.

You get some free behaviour from your database when it comes to ordering, whether you realize it or not. When dealing with NULLs, the database has to make a decision about how to order them. Every database I’ve ever worked with, and likely all relational databases, has a default behaviour. In ascending order, MySQL and SQL Server put NULL ahead of real values: they are “less than” a non-NULL value. Oracle and Postgres put NULL after real values: they are “greater than” non-NULL values. Oracle and Postgres nicely give you the NULLS FIRST and NULLS LAST instructions so you can actually override the defaults. Even in MySQL and SQL Server, you can find ways to override the defaults using functions in your ORDER BY clause. In MySQL I use IFNULL. In SQL Server, you could use ISNULL. These both give you the option of replacing NULL with a particular value – just substitute an appropriate value for the type you are sorting.

All sorting supported in these types of queries is two-dimensional. You pick columns and the rows are ordered by those. When you need to sort by additional dimensions of the data, you’re probably getting into areas that are addressed in other related technologies such as data warehousing and OLAP cubes. If that is appropriate and available for your case, by all means use those powerful features.

In many cases though, we either don’t have access to those technologies or we need our operations to be on current data. For example, let’s say you are working on an investment system where investors’ accounts, trades, positions, etc. are all maintained. You need to write a query to help extract trade activity for a given time frame. Our data comes back as a two-dimensional dataset even though we have more dimensions. Our query will return data on account(s) and the trade(s) per account. We need our results to be ordered by those accounts whose effected trades have the highest value, but we need to keep the trades with their accounts. Simply ordering our query by the value of the effected trade would likely break the rows of the same account apart.

We have a choice: we can either order in the database and control our reconstruction of the returning data to maintain the state and order of reconstructed objects, or we can sort after the fact. In most cases, we probably don’t want to write new code each time we come across this problem that deals specifically with reconstituting the data from that query into our object model’s representation. Hopefully our ORM will help, or we have some preexisting, functional and well-tested code that we can reuse to do so.

Another option is to sort in our code. We actually get lots of flexibility by doing this. Perhaps we have some financial functions that are written in our application that we can now use. We also don’t have to do all the sorting ourselves as we can take advantage of JDK features for Comparator and Collection sorting.

First, let’s deal with our null ordering problem. Let’s say our Trade object has some free public constant Comparators. These allow us to use a collection of Trades along with java.util.Collections.sort(List<Trade>, Comparator<Trade>). Trade.TRADE_VALUE_NULL_FIRST is the one we want to use. This Comparator is nothing more than a passthrough to a global Null Comparator helper.

public static final Comparator<Trade> TRADE_VALUE_NULL_FIRST = new Comparator<Trade>(){
  public int compare(Trade o1, Trade o2) {
    return ComparatorUtil.DECIMAL_NULL_FIRST_COMPARATOR.compare(
        o1.getTradeValue(), 
        o2.getTradeValue());
}};

... ComparatorUtil ...

public static NullFirstComparator<BigDecimal> DECIMAL_NULL_FIRST_COMPARATOR = 
  new NullFirstComparator<BigDecimal>();
public static NullLastComparator<BigDecimal> DECIMAL_NULL_LAST_COMPARATOR = 
  new NullLastComparator<BigDecimal>();
...snip...
public static NullLastComparator<String> STRING_NULL_LAST_COMPARATOR = 
  new NullLastComparator<String>();

public static class NullFirstComparator<T extends Comparable<T>> implements Comparator<T> {
  public int compare(T o1, T o2) {
    if (o1 == null && o2 == null) {
	return 0;
    } else if (o1 == null) {
	return -1;
    } else if (o2 == null) {
	return 1;
    } else {
	return o1.compareTo(o2);
    }
  }
}
public static class NullLastComparator<T extends Comparable<T>> implements Comparator<T> {
  public int compare(T o1, T o2) {
    if (o1 == null && o2 == null) {
	return 0;
    } else if (o1 == null) {
	return 1;
    } else if (o2 == null) {
	return -1;
    } else {
	return o1.compareTo(o2);
    }
  }
}

Now we have a simple, reusable solution we can use with any class and any nullable value in JDK sorting, and we can expose whatever ordering constants are appropriate for business usage in our class. Next, let’s deal with the more complex issue of hierarchical value ordering. We don’t want to write new code every time we have to do something like this, so let’s just extend our idea of ordering helpers to hierarchical entities.

public interface Parent<C> {
  public List<C> getChildren();
}
public class ParentChildPropertiesComparator<P extends Parent<C>, C> implements Comparator<P> {
  private List<Comparator<C>> mChildComparators;
  public ParentChildPropertiesComparator(List<Comparator<C>> childComparators) {
    mChildComparators = Collections.unmodifiableList(childComparators);
  }
  public List<Comparator<C>> getChildComparators() {
    return mChildComparators;
  }
  public int compare(P o1, P o2) {
    int compareTo = 0;
    for (int i=0; i < mChildComparators.size() && compareTo == 0; i++) {
	Comparator<C> cc = mChildComparators.get(i);
	List<C> children1 = o1.getChildren();
	List<C> children2 = o2.getChildren();
	Collections.sort(children1, cc);
	Collections.sort(children2, cc);
	int max = Math.min(children1.size(), children2.size());
	for (int j=0; j < max && compareTo == 0; j++) {
	  compareTo = cc.compare(children1.get(j), children2.get(j));
	}
    }
    return compareTo;
  }
}

This is a little more complex, but still simple enough to easily grasp and reuse. We have the idea of a parent. This is not an OO relationship; it is a relationship of composition or aggregation. A parent can exist anywhere in the hierarchy, meaning a parent can also be a child. But in our sample, we have a simple parent/child relationship - Account/Trade. This new class, ParentChildPropertiesComparator, supports the idea of taking in a List of ordered Comparators on the child entities but sorting on the parents. In our scenario, we are only sorting on one child value, but it could easily be several, just as you could sort more than two levels of data.

We are assuming in our case that Account already implements the Parent interface for its Trades. If not, you can always use the Adapter Design Pattern.
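
For illustration, a minimal Account implementing the interface might look like this (the trades collection and its contents are assumed for the example):

public class Account implements Parent<Trade> {
  private final List<Trade> mTrades = new ArrayList<Trade>();
  // ...account fields and business methods...
  public List<Trade> getChildren() {
    return mTrades;
  }
}

Our Account/Trade sorting would now look like this.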

List<Account> accounts = fetchPreviousMonthsTradeActivityByAccount();
List<Comparator<Trade>> comparators = Arrays.asList(Trade.TRADE_VALUE_NULL_FIRST);
ParentChildPropertiesComparator<Account, Trade> childComparator = 
  new ParentChildPropertiesComparator<Account, Trade>(comparators);
Collections.sort(accounts, childComparator);

Really! That's it. Our annoying problem of sorting accounts by those with highest trade values where some of those trade values could be null is solved in just a few lines of code. Our accounts are now sorted as desired. I came up with this approach and it is used successfully as a part of a query builder for a large-volume financial reconciliation system. Introduction of new sortable types and values requires only minimal additions. Take this approach for a whirl and see how incredibly powerful it is for dealing with sorting requirements across complex hierarchies of data. And drop us a line if you need help in implementation or have any comments.

Computing Common and Unique Elements In Multiple Collections – Java

This week, we’ll take a break from higher level problems and technology posts to deal with just a little code problem that a lot of us have probably faced. It’s nothing fancy or too hard, but it may save one of you 15 minutes someday, and occasionally it’s nice to get back to basics.

So let’s get down to it. On occasion, you’ll find you need to determine which elements in one collection exist in another, which are common, and/or which don’t exist in another collection. Apache Commons Collections has some utility methods in CollectionUtils that are useful, notably intersection(), but this post goes a bit beyond that into calculating unique elements in a collection of collections, and it’s always nice to get down to the details. We’ll also make the solution more generic by supporting any number of collections to operate against, rather than just two collections as CollectionUtils does. Plus there’s the fact that not all of us choose to or are able to include libraries just to get a couple of useful utility methods.
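
For the simple two-collection case, the Commons approach is a one-liner (assuming commons-collections is on the classpath; its 3.x API is non-generic):

import java.util.Arrays;
import java.util.Collection;
import org.apache.commons.collections.CollectionUtils;

Collection<?> common = CollectionUtils.intersection(
      Arrays.asList("a", "common"),
      Arrays.asList("b", "common"));
System.out.println(common); // prints [common]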

When dealing with just two collections, it’s not a difficult problem, but not all developers are familiar with all the methods that java.util.Collection defines, so here is some sample code. The key is using the retainAll and removeAll methods together to build up the three sets – common, present in collection A only, and present in B only.

Set<String> a = new HashSet<String>();
a.add("a");
a.add("a2");
a.add("common");

Set<String> b = new HashSet<String>();
b.add("b");
b.add("b2");
b.add("common");

Set<String> inAOnly = new HashSet<String>(a);
inAOnly.removeAll(b);
System.out.println("A Only: " + inAOnly );

Set<String> inBOnly = new HashSet<String>(b);
inBOnly .removeAll(a);
System.out.println("B Only: " + inBOnly );

Set<String> common = new HashSet<String>(a);
common.retainAll(b);
System.out.println("Common: " + common);

Output:

A Only: [a, a2]
B Only: [b, b2]
Common: [common]

Handling Three or More Collections

The problem is a bit more tricky when dealing with more than two collections, but it can be solved in a generic way fairly simply, as shown below:

Computing Common Elements
Computing common elements is easy, and this code will perform consistently even with a large number of collections.

   
public static void main(String[] args) {
   List<String> a = Arrays.asList("a", "b", "c");
   List<String> b = Arrays.asList("a", "b", "c", "d");   
   List<String> c = Arrays.asList("d", "e", "f", "g");
    
   List<List<String>> lists = new ArrayList<List<String>>();
   lists.add(a);
   System.out.println("Common in A: " + getCommonElements(lists));
   
   lists.add(b);
   System.out.println("Common in A & B: " + getCommonElements(lists));
   
   lists.add(c);
   System.out.println("Common in A & B & C: " + getCommonElements(lists));
   
   lists.remove(a);
   System.out.println("Common in B & C: " + getCommonElements(lists));
}
    
    
public static <T> Set<T> getCommonElements(Collection<? extends Collection<T>> collections) {

    Set<T> common = new LinkedHashSet<T>();
    if (!collections.isEmpty()) {
       Iterator<? extends Collection<T>> iterator = collections.iterator();
       common.addAll(iterator.next());
       while (iterator.hasNext()) {
          common.retainAll(iterator.next());
       }
    }
    return common;
}

Output:

Common in A: [a, b, c]
Common in A & B: [a, b, c]
Common in A & B & C: []
Common in B & C: [d]

Computing Unique Elements
Computing unique elements is just about as straightforward as computing common elements. Note that this code’s performance will degrade as you add a large number of collections, though in most practical cases it won’t matter. I presume there are ways this could be optimized, but since I haven’t had the problem, I’ve not bothered trying. As Knuth famously said, “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil”.

   
public static void main(String[] args) {
   List<String> a = Arrays.asList("a", "b", "c");
   List<String> b = Arrays.asList("a", "b", "c", "d");   
   List<String> c = Arrays.asList("d", "e", "f", "g");
    
   List<List<String>> lists = new ArrayList<List<String>>();
   lists.add(a);
   System.out.println("Unique in A: " + getUniqueElements(lists));
   
   lists.add(b);
   System.out.println("Unique in A & B: " + getUniqueElements(lists));
   
   lists.add(c);
   System.out.println("Unique in A & B & C: " + getUniqueElements(lists));
   
   lists.remove(a);
   System.out.println("Unique in B & C: " + getUniqueElements(lists));
}
    
    
public static <T> List<Set<T>> getUniqueElements(Collection<? extends Collection<T>> collections) {
    
    List<Set<T>> allUniqueSets = new ArrayList<Set<T>>();
    for (Collection<T> collection : collections) {
       Set<T> unique = new LinkedHashSet<T>(collection);
       allUniqueSets.add(unique);
       for (Collection<T> otherCollection : collections) {
          if (collection != otherCollection) {
             unique.removeAll(otherCollection);
          }
       }
   }
       
    return allUniqueSets;
}

Output:

Unique in A: [[a, b, c]]
Unique in A & B: [[], [d]]
Unique in A & B & C: [[], [], [e, f, g]]
Unique in B & C: [[a, b, c], [e, f, g]]

That’s all there is to it. Feel free to use this code for whatever you like, and if you have any improvements or additions to suggest, leave a comment. Developers all benefit when we share knowledge and experience.

Problems with ORMs Part 2 – Queries

In my previous post on Object-Relational Mapping tools (ORMs), I discussed various issues that I’ve faced dealing with the common ORMs out there today, including Hibernate. This included issues related to generating a schema from POJOs, real-world performance and maintenance problems that crop up. Essentially, the conclusion is that ORMs get you most of the way there, but a balanced approach is needed, and sometimes you just want to avoid using your ORM’s toolset, so you should be able to bypass it when desired.

One huge flaw in modern ORMs that I see though is that they really want to help you solve all your SQL problems. What do I mean by this, and why would I say this is a fault? Well, I believe that Hibernate et al just try too hard and end up providing features that actually hurt developers more than they help. The main thing I have in mind when I say this is query support. Actual support for complex queries that are easily maintained is seriously lacking in ORMs, and not because they’ve omitted things – it’s just because the tools they provide don’t use SQL, which was designed from the ground up for exactly this purpose.

Experiences in Hibernate

It’s been my experience that when you use features like HQL, frequently you’re thinking about saving yourself a few minutes up front, and there’s nothing wrong with this in itself, but it can cause serious problems. Frequently you end up wanting or needing to replace HQL with something more flexible, either because of a bug fix or enhancement, and this is where the trouble starts.

I consider myself an experienced developer and I pride myself on (usually) not breaking things — to me, that is one of the hallmarks of good developers. When you’re faced with ripping out a piece of code and replacing it wholesale, such as replacing HQL with SQL, you’re basically replacing code that has had a history that includes bug fixes, enhancements and performance tweaks. You are now responsible for duplicating every change to this code that’s ever been made and it’s quite possible you don’t understand the full scope of the changes or the niggling problems that were corrected in the past.

Note that this also applies to all the other query methods that Hibernate provides, including the Query API, and by extension, query support within JPA. The issue is that you don’t want a solution so brittle or limited that it has to be fully replaced later. This means that if you need to revert to SQL to get things done, there’s a good chance you should just do that in the first place. This same concept applies to all areas of software development.

So what do we aim for if the basic querying support in ORMs like Hibernate isn’t good enough?

Criteria for a Solid ORM

Basically, my personal requirements for an ORM come down to the following:

  • Schema first – generate your model from a database, not the other way around. If you have a platform-agnostic way of specifying DDL for the database, great, but it’s not a deal-breaker. Generating a database from some other domain-specific language or format helps nobody and results in a poorly designed schema.
  • SQL only – if you want to help me avoid writing code, then generate/expose key-based, etc. lookups for me. Don’t ask me to use your query API or some new query language. SQL was invented for queries, so let me use the right tool.
  • Give me easy ways to populate my domain objects from queries. This gives me 99% of what I’ll ever need, while giving me flexibility.
  • Allow me to populate arbitrary Java beans with query results – don’t tie me into your registry of known types.
  • Don’t force me into using a typical transaction container like the one Hibernate or Spring provides – they are a disaster and I’ve never seen a practical use for them that made any sense. Let me handle where connections/transactions are acquired and released in my application – typically this only happens in a few places with clear semantics anyway. This can be some abstracted version of JDBC, but let me control it.
  • No clever/magic behaviour in my domain objects – when working with Hibernate, I spend a good deal of time solving the same old proxy and lazy-loading issues. They never end and can’t be solved once and for all, which indicates a serious design issue.

Though these points seem completely reasonable to me, I’ve not encountered any ORMs that really meet my expectations, so at Carfey we’ve rolled our own little ORM, and I have to say that weekend projects and just general development with what we have is far easier and faster than Hibernate or the other ORMs I’ve used. What does it provide?

A Simple Utilitarian ORM

  • Java domain classes are generated from a DB schema. There’s no platform-agnostic DDL yet, but it’s on our TODO list. Beans include support for child collections, FK references, but it’s all lazy and optional – the beans support it, but if you don’t use them, there’s no impact. Use IDs directly if you want, or domain objects themselves. Persistence handles persisting dirty objects only, and saves are only done when requested – no magic flush behaviour.
  • Generated domain classes are for persistence only! Stick your business logic, etc. elsewhere.
  • SQL is used for all lookups, including primary key fetches and foreign key relationships. If you need to enhance a lookup, just steal the generated SQL and build on it. Methods and SQL are generated automatically from any indexed column, so they are all provided for you and are typesafe. This also provides a warning to the developer – if a lookup is not available in your domain class, it likely will perform poorly since no index exists.
  • Any domain class can be populated from a custom query in a typesafe manner – it’s flexible but easy to use.
  • Improved classes hide the standard JDBC types such as Connection and Statement for ease of use, but we don’t force any transaction semantics on you, and you can always fall back to things like direct result set handling.
  • Some basic required features like a connection pool, database metadata, and soon, database slave failover.

We at Carfey don’t believe we’ve created some incredible new ORM that surpasses every other effort out there, and there are many features we’d have to add if this were a public project, but what we have works for us, and I think we have the correct approach. At the very least, hopefully our experience can help you use your preferred ORM wisely and avoid spending too much time serving the tool instead of delivering software.

As a final note, if you have experience with an ORM that meets my list of requirements above and you’ve had good experiences with it, I’d love to hear about it and would consider it for future Carfey projects.

Easy Deep Cloning of Serializable and Non-Serializable Objects in Java

Frequently developers rely on third-party libraries to avoid reinventing the wheel, particularly in the Java world, with projects like Apache and Spring so prevalent. When dealing with these frameworks, we often have little or no control over the behaviour of their classes.
This can sometimes lead to problems. For instance, if you want to deep clone an object that doesn’t provide a suitable clone method, what are your options, short of writing a bunch of code?

Clone through Serialization
The simplest approach is to clone by taking advantage of an object being Serializable. Apache Commons provides a method to do this, but for completeness, code to do it yourself is shown below as well.

import java.io.*;

@SuppressWarnings("unchecked")
public static <T extends Serializable> T cloneThroughSerialize(T t) throws Exception {
   ByteArrayOutputStream bos = new ByteArrayOutputStream();
   serializeToOutputStream(t, bos);
   byte[] bytes = bos.toByteArray();
   ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
   return (T) ois.readObject();
}

private static void serializeToOutputStream(Serializable ser, OutputStream os) 
                                                          throws IOException {
   ObjectOutputStream oos = null;
   try {
      oos = new ObjectOutputStream(os);
      oos.writeObject(ser);
      oos.flush();
   } finally {
      if (oos != null) { // guard against an NPE if construction failed
         oos.close();
      }
   }
}

// using our custom method
Object cloned = cloneThroughSerialize(someObject);

// or with Apache Commons
cloned = org.apache.commons.lang.SerializationUtils.clone(someObject);

But what if the class we want to clone isn’t Serializable and we have no control over the source code or can’t make it Serializable?

Option 1 – Java Deep Cloning Library
There’s a nice little library called cloning, which can deep clone virtually any Java object. It takes advantage of Java’s excellent reflection capabilities to provide optimized deep-cloned versions of objects.

Cloner cloner = new Cloner();
Object cloned = cloner.deepClone(someObject);

As you can see, it’s very simple and effective, and requires minimal code. It has some more advanced abilities beyond this simple example, which you can check out here.
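
For example, the library lets you register classes that should be treated as immutable – such instances are shared between the original and the clone rather than copied. The call below reflects my reading of its API, so verify it against the version you’re using:

Cloner cloner = new Cloner();
// share BigDecimal instances between original and clone instead of copying them
cloner.registerImmutable(java.math.BigDecimal.class);
Object cloned = cloner.deepClone(someObject);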

Option 2 – JSON Cloning
What if we’re not able to introduce a new library into our codebase? Some of us deal with approval processes for introducing new libraries, and it may not be worth the trouble for a simple use case.

Well, as long as we have some way to serialize and restore an object, we can make a deep copy. JSON is commonly used, so it’s a good candidate, since most of us use one JSON library or another.

Most JSON libraries in Java can effectively serialize any POJO without configuration or mapping. This means that if you have a JSON library on hand and cannot or will not introduce new dependencies to provide deep cloning, you can leverage the one you already have to get the same effect. Note that this method will be slower than the others, but for the vast majority of applications it won’t cause any performance problems.

Below is an example using the GSON library.

@SuppressWarnings("unchecked")
public static <T> T cloneThroughJson(T t) {
   Gson gson = new Gson();
   String json = gson.toJson(t);
   return (T) gson.fromJson(json, t.getClass());
}
// ...
Object cloned = cloneThroughJson(someObject);

Note that this is likely to work only if the copied object has a default no-argument constructor. In the case of GSON, you can register an instance creator to get around this, as sketched below. Other frameworks have similar concepts, so you can use those if you hit an issue with an unmodifiable class that lacks a default constructor.
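
As a rough sketch, registering an instance creator with GSON looks something like this – LegacyWidget is a made-up stand-in for a class you can’t modify:

import java.lang.reflect.Type;
import com.google.gson.*;

Gson gson = new GsonBuilder()
   .registerTypeAdapter(LegacyWidget.class, new InstanceCreator<LegacyWidget>() {
      public LegacyWidget createInstance(Type type) {
         // any placeholder arguments will do – GSON populates
         // the fields from the JSON afterwards
         return new LegacyWidget("placeholder");
      }
   })
   .create();
LegacyWidget cloned = gson.fromJson(gson.toJson(original), LegacyWidget.class);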

Conclusion
One thing I do recommend is that for any classes you need to clone, you should add some unit tests to ensure everything behaves as expected. This can prevent changes to the classes (e.g. upgrading library versions) from breaking your application without your knowledge, especially if you have a continuous integration environment set up.
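
As a sketch, a test along these lines (JUnit here, with a hypothetical Widget class and the cloneThroughSerialize method from earlier) catches a clone that silently stops being deep:

import static org.junit.Assert.*;
import org.junit.Test;

public class WidgetCloneTest {
   @Test
   public void cloneIsDeepAndIndependent() throws Exception {
      Widget original = new Widget("gadget");
      Widget clone = cloneThroughSerialize(original);

      // the clone starts out with equal state...
      assertEquals(original.getName(), clone.getName());

      // ...but changes to the clone must not leak back to the original
      clone.setName("changed");
      assertEquals("gadget", original.getName());
   }
}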

I’ve outlined a couple of methods to clone an object outside of the normal cases without writing any custom code. If you’ve used other methods to achieve the same result, please share.

Ignoring Self-Signed Certificates in Java

A problem that I’ve hit a few times in my career is that we sometimes want to allow self-signed certificates for development or testing purposes. A quick Google search shows the trouble that countless Java developers have run into over the years. Depending on the exact certificate issue, you may get an error like one of the following, though I’m almost positive there are other manifestations:

java.security.cert.CertificateException: Untrusted Server Certificate Chain

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Getting around this often requires modifying JDK trust store files, which is painful and error-prone. On top of that, every developer on your team has to do the same thing, and the same issues recur in every new environment you set up.

Fortunately, there is a way to deal with the problem in a generic way that won’t put any burden on your developers. We’re going to focus on vanilla HttpURLConnection connections, since that’s the most general case and should still help you understand the direction to take with other libraries. If you are using Apache HttpClient, see here.

Warning: Know what you are doing!

Be aware of what using this code means: you don’t care at all about host verification and are using SSL purely to encrypt communications. You are not preventing man-in-the-middle attacks or verifying that you are connected to the host you think you are. This generally comes down to a few valid cases:

  1. You are operating in a locked down LAN environment. You are not susceptible to having your requests intercepted by an attacker (or if you are, you have bigger issues).
  2. You are in a test or development environment where securing communication isn’t important.

If this matches your needs, then go ahead and proceed. Otherwise, maybe think twice about what you are trying to accomplish.

Solution: Modifying Trust Managers

Now that we’re past that disclaimer, we can solve the actual problem at hand. Java allows us to control the objects responsible for verifying the host and certificate for an HttpsURLConnection. This can be done globally, but I’m sure those of you with experience will cringe at the thought of making such a sweeping change. Luckily, we can also do it on a per-request basis, and since examples of this are hard to find on the web, I’ve provided the code below. This approach is nice since you don’t need to mess with swapping out SSLSocketFactory implementations globally.

Feel free to grab it and use it in your project.

package com.mycompany.http;

import java.net.*;
import javax.net.ssl.*;
import java.security.*;
import java.security.cert.*;

public class TrustModifier {
   private static final TrustingHostnameVerifier 
      TRUSTING_HOSTNAME_VERIFIER = new TrustingHostnameVerifier();
   private static SSLSocketFactory factory;

   /** Call this with any HttpURLConnection, and it will 
    modify the trust settings if it is an HTTPS connection. */
   public static void relaxHostChecking(HttpURLConnection conn) 
       throws KeyManagementException, NoSuchAlgorithmException, KeyStoreException {

      if (conn instanceof HttpsURLConnection) {
         HttpsURLConnection httpsConnection = (HttpsURLConnection) conn;
         httpsConnection.setSSLSocketFactory(prepFactory());
         httpsConnection.setHostnameVerifier(TRUSTING_HOSTNAME_VERIFIER);
      }
   }

   /** Lazily builds a single socket factory that trusts all certificates. */
   static synchronized SSLSocketFactory prepFactory() 
            throws NoSuchAlgorithmException, KeyStoreException, KeyManagementException {

      if (factory == null) {
         SSLContext ctx = SSLContext.getInstance("TLS");
         // no key managers, our trust-everything manager, default SecureRandom
         ctx.init(null, new TrustManager[]{ new AlwaysTrustManager() }, null);
         factory = ctx.getSocketFactory();
      }
      return factory;
   }
   
   /** A hostname verifier that accepts every host. */
   private static final class TrustingHostnameVerifier implements HostnameVerifier {
      public boolean verify(String hostname, SSLSession session) {
         return true;
      }
   }

   /** A trust manager that accepts every certificate chain without validation. */
   private static class AlwaysTrustManager implements X509TrustManager {
      public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException { }
      public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException { }
      public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
   }
   
}

Usage

To use the above code, just call the relaxHostChecking() method before you open the stream:

URL someUrl = ... // may be HTTPS or HTTP
HttpURLConnection connection = (HttpURLConnection) someUrl.openConnection();
TrustModifier.relaxHostChecking(connection); // here's where the magic happens

// Now do your work! 
// This connection will now live happily with expired or self-signed certificates
connection.setDoOutput(true);
OutputStream out = connection.getOutputStream();
...

There you have it – a complete example of a localized approach to supporting self-signed certificates. This does not affect the rest of your application, which will continue to have strict host checking semantics. This example could be extended to use a configuration setting to determine whether relaxed host checking should be used, and I recommend you do so if this code is primarily a way to facilitate development with self-signed certificates.
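
For instance, a minimal guard keyed off a system property (the property name here is my own invention) keeps the relaxed behaviour out of production unless it’s explicitly enabled:

// Pass -Drelaxed.ssl.checking=true in development or test environments only.
if (Boolean.getBoolean("relaxed.ssl.checking")) {
   TrustModifier.relaxHostChecking(connection);
}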

If you have any questions about the example or about doing this in a specific HTTP library, leave a comment and I’ll do my best to help.