Recipe for Success Pie

This month I finished up my 2012 Year-End Reviews for my Employees. Looking back on the process and my individual conversations with my employees, I came up with a “Recipe for Success” Pie Chart. I’m thinking about using it for my Employees’ Mid-Year Reviews for 2013, but I figured I’d share it on my blog first, as I think it could be of great help in answering one of the most difficult sets of questions Managers or Employers face from their Employees: the questions around “Advancement”.

Specifically, I plan on using this when an Employee has questions around Advancement, whether for Promotions or Compensation. I also want to use it with more junior Employees who I feel have the potential for Advancement.

When you see the Pie Chart below in this post, please don’t pay too much attention to the actual percentage values; they are there simply to give each slice of the pie a portion that reflects what I personally consider the important factors of success. The values should not be considered exact in any way. The actual recipe for success is more of an Art than a Science (which makes even the title of this post technically incorrect), so I think the portions of the pie and how they relate to each other in size are a more accurate representation than the numbers themselves. I know this might seem confusing at first, but I assure you once you see the chart it will make more sense to you.

So what are the Factors of Success? These come from my personal experience, and yes, the factors may vary slightly from person to person if you ask successful people what made them successful, but I feel that in some underlying way, most if not all of these factors have played a role in the success of most people.

Recipe for Success (The Ingredients or Factors):

Note: A lower Order of Importance means more important, with 1 being the most important. There is also a 0, but that’s really undefined; you’ll see what I mean when you read that part.

  • What You Know
    • Order of Importance: 2
    • This factor represents your Skills, Knowledge, Education, and Experience.
    • As an Educator I know pointed out, I should have (or could have) listed “What You Learned” as a separate slice of the pie. And he’s right. Actually, I wanted to write more about this section before I published it, but I hit the Publish button instead of Save by mistake.
    • Education both from established institutions such as Schools as well as Knowledge gained from personal and professional projects is invaluable.
    • Sometimes I value Knowledge gained from projects, both personal and professional, over official classroom-based education and textbooks, especially in the realm of technology and specifically software development. See my blog post on Education versus Experience.
    • So I have to agree with my friend Donald that “What You Learned” should have its own slice of the pie, but I’m being lazy here; if it weren’t so much work on a Saturday afternoon to edit that damn Pie Chart in Excel and PowerPoint and save it as an image again, I would do it! evilgrin
  • Who You Know
    • Order of Importance: 4
    • Everyone will tell you knowing the right people is your ticket to success, in school, jobs, and otherwise. However, I feel the next attribute of my Success Pie is more important. Knowing the Right People is of course important, but the mere fact that you “know” someone is not. I know people who know CEOs of companies, but those same CEOs wouldn’t give them jobs, because they either lack too many of the other Factors of Success, or they are really more of an Acquaintance and the CEOs don’t actually know them, or in general the CEOs just wouldn’t put their name on the line for that person.
    • In my experience, people throw the word “Friend” around too casually. Acquaintance is more appropriate for most of the relationships in our lives; again in my opinion.
    • However you must first know someone before they get to know you and you get to know them. That’s where the next ingredient comes into play.
  • How You Know Them
    • Order of Importance: 3
    • This factor is really the second half of “Who You Know”. As I mostly explained already, the mere fact that you “know” someone is insignificant. It is how you know them, or rather how they know you, that’s important.
    • This includes the depth of information at the professional level, although for some people even certain personal facts are important; just remember TMI (Too Much Info) when getting into the realm of personal facts.
    • A person in a position of power that you want to leverage to help you succeed needs to know that they can trust you. That you will not hurt their own reputation, and that if they give you a task that you will succeed and make both you and them look good. They need to know that you will be a good representative for them.
    • These reasons and others are why “How You Know Them” is more important than “Who You Know.” But again, it’s a complementary ingredient to “Who You Know.” Please remember there’s a big difference, and simply knowing someone doesn’t count for much by itself.
    • Some people will disagree here, but they misunderstand what I mean: if they are “close friends”, etc., that means they trust you, which again relates to “How You Know Them” or “How They Know You.”…
  • What You Do (Or “Deliver”)
    • Order of Importance: 1
    • This is possibly the most important factor of success.
    • Everything up until this point helps you to Deliver. And what you Do or Deliver is the most important thing for your successes in life.
    • You have to Walk the Walk, not just Talk the Talk.
    • When you list experience on a resume, you better make sure that you actually Delivered what your experience says, because good interviewers can see right through the people who never delivered, but were still “part of a project.”
    • Your Skills, Education, Past Experience both Professionally and Personally, Connections, all add up to this moment. This is the moment you take center stage and show to the world you can actually do it; make it happen.
    • What you Deliver is why you get promoted, get more job offers when you aren’t even looking, get the big bonus or that raise, or grow your team and responsibilities.
    • This is when you earn that Pay Check, make it count!
  • Luck (See Below for expansion on this, too many sub-factors…)
    • Order of Importance: 0 (Why ZERO? Because I think it’s hard to quantify how important Luck really is, and it will vary from person to person.)
    • What is Luck? Luck is:
      • Being in the right place at the right time.
      • Saying the right things.
      • Knowing the right people.
      • Doing the right things.
      • Succeeding at a task against the odds instead of failing.
      • Making the right choices in general.
      • Getting the chance to work on the right projects.
      • Getting hired for the right job that will give you the opportunities to gain experience, exposure, etc.
      • Graduating at the right time.
      • Working for the right group or department or company.
      • Going to the right schools.
      • Participating in the right extracurricular activities.
      • Being seen when it counts.
    • The list can go on and on. This is why it’s ZERO on my list of ingredients in terms of Order of Importance. The definition of Luck itself is open-ended and cannot really be pinned down. We can list components of Luck, but really you only need some of them to help you be successful, not all of them.
    • I hear all too often people saying “I just wasn’t lucky.” There may be some truth to this: you might be equally as good as someone else, or maybe even better, but perhaps you missed some of the other attributes of what makes a person successful, or perhaps it really is a missing component that I listed under Luck, for example being in the right place at the right time.
    • I am a Capitalist and I believe in the principles of Capitalism. And my Recipe for Success “Pie” applies only to a Capitalistic Society. I even wear a T-Shirt that says “Capitalist” on the front of it in a baseball-styled font. So I don’t believe that we have to live in a Socialist vision of a fair society. Instead, I believe our Government should support Capitalism and Freedom and simply allow for the chance that someone, anyone, no matter where they come from or who they are, can become successful; but that does not mean that any specific individual will be successful.
    • I don’t want to make this post get too political, so I’ll stop right here. But let’s face it, Luck does play some role in a Capitalistic Society, and that’s ok…

Anyhow, it’s time for Pie…

[Image: Success Factors Pie Chart]

I’m interested in whether any other managers feel points from this post or my Pie Chart are useful for their own Employee Reviews when an employee asks about advancement. Please feel free to contact me on my contact page or leave some comments. I would also like to hear general comments from anyone who agrees or disagrees with any of my points, or feels I should consider adding additional “Ingredients” to my Success Pie.

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
 

Pair Programming

My favorite agile software development method is Pair Programming. It is a technique where two programmers work together at a single computer, on the same project or component. One is known as the Driver, the person who is actually writing the code, and the other is the Observer (or Navigator, if we want to use the car driving or piloting terms), who is looking for bugs, looking for solutions, and just all around throwing out ideas to the Driver.

In my experience using this technique, the Pair of Programmers swap between Driver and Observer throughout the development of the component they are working on.

The process was probably created organically rather than someone deliberately thinking up a new SDLC Method: two programmers who were friendly around the office decided to work on a problem together and sat in the same cube, or otherwise next to each other.

This has been my experience as well. I did not know I was participating in a truly established Software Development Technique until much later in my career, when I started reading more about SDLC models and the like.

A coworker who I was working with on a project and I started to become friendly. At the time I was a junior developer, especially in the realm of Large Scale Web Applications, and he was hired as a Senior Web Applications Developer, actually the first on our small team. I had some personal experience with creating CGI scripts, and it’s one of my God-given gifts to be able to extrapolate things (especially computer things) very quickly, given very little information. I had him help me set up a Servlet container on my local workstation, so I could help develop the Servlets and JSP pages, which at the time I had no experience with. He naturally took the lead, but quickly saw that although I lacked the experience with Web App development, I did have very solid programming and database skills. So we started passing code back and forth through shared drives, emailing ZIPs, etc., and then we realized: why waste time doing that when we sat just one row of cubes away from each other? So we started sitting together at his cube (because his row was emptier than mine, and we could talk louder without bothering other developers around us).

He might have known the term Pair Programming without my knowledge, because he often used the Driver and Navigator terms. I knew what he meant by each of them, because, especially in the beginning, he would say “would you like to Drive?” and stand up to switch seats, etc…

I find Pair Programming very effective for training junior developers as well. At the time of writing this blog entry, I have a pair of more Junior Developers working in this type of format right now. I strategically placed their cubes right next to each other without a partition, so they can roll back and forth on their office chairs and switch very easily between Driver and Navigator. It has been working great, and both of them have come up to speed on helping to develop larger and larger components more quickly than I first anticipated. The Tech Lead who runs the team these two developers report to was quite skeptical at first, but even he has turned around and started giving them larger projects to work on. I’m very happy with their progress, and although they are both extremely intelligent developers, I attribute the speed at which they have been able to work and deliver within a high-stress enterprise-level environment to the Pair Programming model they have been working in.

No developer can possibly know everything, and having another highly skilled developer watching your back can be invaluable, especially when you are racing the clock to deliver products quickly and without bugs.

You can read the Wikipedia article on Pair Programming for the details, but honestly it’s a small article, and there’s really nothing more to the model: you have two programmers who have a good rapport with each other work together at a single workstation (or better yet, two workstations right next to each other, so the Navigator can quickly look up API docs and other tips in the man pages, on the Internet, or in other online reference sources while the Driver continues writing code), working on the same project or component, and each of them plays both the role of the Driver and of the Observer/Navigator. Again, the Driver is the person who is writing the code at a given moment, and the Observer/Navigator is the one looking over the Driver’s shoulder, checking each line of code the Driver writes for bugs (this reduces bugs in real time) while constantly thinking about the next component or function of the current component, so the pair can keep writing code very quickly. The Wikipedia article says the Driver works on the here and now, the tactical aspects of the code, writing the lines in real time, while the Navigator reads the lines looking for bugs while thinking about the strategic aspects of the code.

I have also found that if you set up a large enough cube or work area for the two developers, the Navigator can quickly lean over and start typing on the Driver’s keyboard, while still wearing the Navigator hat, to quickly fix a bug, etc. This is extremely helpful and again produces a better quality of code.

Pair Programming needs to be embraced by management, of course, because it is easy to mistake it for two programmers goofing off, or to assume the Navigator is not really doing his or her job. But this is a big misconception; the Navigator plays a very crucial role in helping to produce quality code with a reduced time to market and shortened QA/SIT cycles. Also, if you as a manager associate a particular person’s cube with the person who is the Driver all the time, you are mistaken; the two programmers may simply find that particular cube more comfortable, or perhaps the programmer who is physically assigned to that cube has a better workstation or better development tools installed.

As I stated in this article, no single developer can possibly know everything, and an even bigger problem is that sometimes, quite often actually, when working on large-scale projects, a developer can easily fall into a rut and have tunnel vision when trying to write a component to solve some problem. This is the programmer’s equivalent of Writer’s Block. A typical technique for solving this problem is to walk away from your desk, maybe get some fresh air, or sleep on it. Pair Programming, however, offers another solution to this problem: for one thing, it takes some weight off your shoulders, and with two people looking at the same issue, often someone will spot a solution that both of them working separately might never have come up with.

I look at it like the two programmers are constantly bouncing ideas off one another. Just like brain teasers get the creative juices flowing by allowing dark neural pathways to light up like a Christmas Tree, I believe a small group of individuals with similar skill sets does the same thing, because the human brain is quite remarkable, and we are social beings; just talking to each other (“Keep Talking” by Pink Floyd quickly comes to mind) seems, at least to me, to have been the key to humans creating an advanced civilization. You never know what small, seemingly insignificant statement, just a couple of words even, that the Navigator mentions to the Driver will light an individual up. It’s like when Data from Star Trek: The Next Generation is processing something really intensive, like accessing the Borg Collective, and his Positronic Neural Net starts blinking in patterns like crazy! evilgrin I have seen it countless times when I have participated in Pair Programming. My partner will mention something that might be only half related to the project we are currently working on, but I’ll scream, “I Got It!”, and we talk out the idea together and quickly write out the code before either of us forgets it.

I highly recommend, if your managers approve of the practice, that younger developers participate in Pair Programming.

And for more Senior Developers, working in a Pair with either another senior developer or even a junior developer will help open your mind to new ideas when developing your critical components. The phrase “Two Minds Are Better Than One” really rings true here!

Pair Programming falls under the scope of Extreme Programming, which is another great topic for discussion in another blog entry I’ll have to write…

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
 

Education versus Experience

As a hiring manager, I am often faced with the question of Education versus Experience. This extends beyond hiring experienced candidates to entry-level candidates as well. A usual rule of thumb for experienced hires is that 5 solid years of experience are equivalent to a Masters Degree, assuming the person already has a Bachelors. And I will go further and say that if a candidate with 5 years of experience worked on at least one hard-core project that was released and supported in a production environment, that experience is worth much more than a Masters.

Convention at most firms dictates that we only hire candidates from the top schools and the top of their classes. For entry-level candidates you will often see firms at Career Fairs and other Campus Recruiting events advertising requirements that state a GPA of 3.5 or above, or something similar.

In my experience, both the GPA cutoff and the school selection limit your ability to find quality candidates.

Specifically for programming and other Information Technology related jobs, we as hiring managers and human resource professionals need to broaden our searches.

A student with a 3.0 GPA (which is still pretty good) who has spent their personal time working on programming projects, perhaps posting them on SourceForge or other open source directories or collaboration sites, is much more worth my time as an interviewer than a student who spent 100% of their time head-deep in the books getting that 3.8 or even a 4.0 GPA.

The fact that a student takes time out of their personal schedule to program for fun, or simply to help an open source project along, demonstrates that they are enthusiastic about programming and that they have experience working outside the safety of the classroom setting. Although challenging in its own way, the classroom, and the programming problems given by the majority of Computer Science programs, have known, achievable solutions. Not all problems in the real world have easy or predictably achievable solutions like they do in the classroom.

I do understand that some students of computer science study and research very abstract problems which may not have a solution for many years or even decades, but those are usually very academic problems. Again, this blog is about Real World Enterprise Programming, so I don’t cover those cases in this post.

The majority of programming jobs out there do not involve these very academic problems; we live in a practical world and practical solutions are usually what Enterprise level development demands.

Having stated my case that I obviously value Experience (both professional and personal) over Education, I still believe you need the solid basis of a computer science curriculum to be successful in Enterprise Programming.

Specifically in the topics of Data Structures, Object Oriented Programming, and to some extent classic Algorithms.

From an Enterprise Programming viewpoint, additional coursework in Databases and Unix Programming is also extremely important. (A great book for the latter is Advanced Unix Programming; this book helped me build my own Unix shell: the PASH Shell.)

Universities, like my own, NYU Polytechnic, usually focus on the theory and not so much on the practical. The professors expect you to sit in the labs, your dorm, your home, and learn the programming languages inside and out on your own time. For example, I had a project involving P-Threads in my Operating Systems course, which we were expected to either already know or learn on our own.

And during any of my interviews, I ensure that I, or whoever I delegate the interview process to, cover Data Structures, Object Oriented Programming and Design, and the other topics I described above. However, we can’t just stop at the theory; my interview process involves writing actual code, and tests whether the candidate can apply the basic topics they learned in school to real-world problems faced by developers on the job in the industry for which they are applying.

This just goes to show that experience wins over education 10 times out of 10. I know my friends and family who work at Universities and other educational institutions are going to hate me for saying this, but it’s true, at least in my business…

This extends to experienced hires as well. I read hundreds of resumes every year, and the candidates who demonstrate an interest in programming outside of the workplace, for example by being part of an open source project, or having their own web site or blog about programming or other Information Technology topics, always have a leg up in my mind.

This post is going to eventually lead into an article I’ll write about “How to Interview High Caliber Developers”…

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
 

Do Nothing Standalone Daemon – A Template for Java Daemon Processes

Once again, as promised, here’s my template for creating Standalone Daemon Processes in Java. You need to combine this with the Unix command nohup to actually have it run as a background daemon process, but it makes use of two separate Java Threads to correctly behave as a background daemon, with a sleep-interval-based timer as the event trigger.

This of course is the Daemon counterpart article to my Batch Process post: Do Nothing Standalone Process – A Template for Batch Jobs and Utility Commands.

I have used code like this for many daemon processes in my personal projects and professional experience, always deployed on either Solaris Unix or Linux (RedHat Enterprise).

For exactly the same reasons as the template for Batch Processes, having a standard template which all your Daemon Processes follow cuts down on maintenance and production support costs.

Summary to Start the Daemon Process:

  1. Remove the previous Stop touch file. (See below for stopping the daemon.)
  2. Start the JVM using the nohup command.
  3. Redirect STDOUT (> [TEXT_FILE_PATH]) and STDERR (2> [TEXT_FILE_PATH]) to log files, or use the > [TEXT_FILE_PATH] 2>&1 syntax to redirect both to the same file.
  4. Run it as a background process using the & syntax in Unix SHell.
  5. Record the PID (Process ID) of nohup’s child process (your Java Process in this case) to a text file for use in monitoring and production support. You can do this by simply echoing the environment variable $! immediately after executing the nohup command; see the start script sketch below.
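
Here is a minimal sketch of a start script that ties the five steps together. The class name and the order of the four command line arguments come from the template’s main() method below; the /app/daemon/... paths, the JAR name, and the sleep values are hypothetical examples, so adjust them for your environment.

#!/bin/sh
# start_daemon.sh - example start script for the daemon template (steps 1-5).
# All paths below are hypothetical examples.

# Step 1: Remove the previous stop touch file so the stop file watcher
# doesn't shut the daemon down immediately.
rm -f /app/daemon/daemon.stop

# Steps 2-4: Start the JVM under nohup, redirect STDOUT and STDERR to a
# single log file, and run it in the background with &.
# Args: [APP_PROPERTIES_FILE] [PROCESS_LOOP_SLEEP_SECONDS] [STOP_FILE_PATH] [STOP_WATCHER_SECONDS]
nohup java -cp /app/daemon/lib/daemon.jar \
  com.roguelogic.util.DoNothingStandaloneDaemon \
  /app/daemon/daemon.properties 30 /app/daemon/daemon.stop 10 \
  > /app/daemon/daemon.log 2>&1 &

# Step 5: Record the PID of nohup's child process (the Java process) to a
# text file for use in monitoring and production support.
echo $! > /app/daemon/daemon.pid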

Stopping the Daemon Process:

My Daemon Template has a built-in "Stop File Watcher" which, put simply, watches the file system for a specific file (whose path you pass as a command line argument to the process). As soon as it finds that this file exists, it will execute the Daemon’s graceful shutdown routine. Given this built-in capability, to stop the Daemon Process you can write a shell script which simply creates a *.stop touch file (an empty text file) using the Unix touch command, as sketched below. Normally, in my production batch, I run a script that creates this stop touch file during our “green zone” hours, which is the time of the week or month when we are scheduled with the business user base to bring down our system for maintenance.
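
Using the same hypothetical paths as the start script above, the stop script is essentially a one-liner:

#!/bin/sh
# stop_daemon.sh - request a graceful shutdown by creating the stop touch file.
# The path must match the [STOP_FILE_PATH] argument the daemon was started with.
touch /app/daemon/daemon.stop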

Monitoring the Daemon Process:

When creating robust, reliable systems, you MUST monitor all components. Daemon Processes are notorious for mysteriously becoming unavailable without anyone on the development or production support team knowing they crashed or otherwise went down unexpectedly. In my professional experience, if there’s not a bug, it’s usually because a System Admin or someone else with ROOT access, or perhaps a production support person with the proper privileges, accidentally kills your daemon process (for some reason usually with kill -9), and either doesn’t know they did, or fails to report it for one reason or another, usually to CYA/CTA. So monitoring the daemon processes you create is essential when rolling out a new Daemon. Normally, I do this by creating a repeating Batch Job that runs every 5 or 10 minutes, which executes a simple script that uses the ps command, combined with the saved PID from the daemon start script and grep, to check if the process is still running. If it is not found, there are two things you can do: 1) If you are using a robust scheduler like Autosys, you can simply fail the job by exiting non-zero, which will send out the normal Autosys alert escalation. Or 2) you can use sendmail to email a development and/or production support mail distribution list. I have used both approaches, and even a combination of the two, in my professional experience. And because this monitor job runs every couple of minutes, you don’t have to worry about someone killing it… A sketch of such a monitor script follows.
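
Here is a minimal sketch of that monitor script, again assuming the hypothetical daemon.pid path from the start script above. Exiting non-zero is what lets a scheduler like Autosys escalate; the sendmail line is just an illustration of the email option, with a made-up distribution list address:

#!/bin/sh
# monitor_daemon.sh - scheduled to run every 5 or 10 minutes.
# Checks the PID recorded by the start script against the process table.
PID=`cat /app/daemon/daemon.pid`

# ps -p prints a line containing the PID only if the process is still alive.
if ps -p "$PID" | grep "$PID" > /dev/null; then
  exit 0   # Daemon is running; the scheduler sees success.
else
  # Option 2: email the development and/or production support distribution list.
  # echo "Daemon process $PID is down!" | sendmail prodsupport@example.com
  exit 1   # Option 1: exit non-zero so the scheduler sends its alert escalation.
fi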

This is pretty much a copy-and-paste statement from my last blog post: I heavily documented the class, which I wrote in Eclipse specifically for this blog post, so I’m actually going to rely on the code and comments themselves to do most of the talking. It works perfectly here as well. Enjoy the overly commented code!

The Code:

Download the Java Code in PDF Format: DoNothingStandaloneDaemon

/*
Copyright 2012 Robert C. Ilardi
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
 */

/**
 * Created Aug 19, 2012
 */
package com.roguelogic.util;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

/**
 * @author Robert C. Ilardi
 * 
 *         This is a Sample Class for a Standalone *Daemon* Process.
 *         Implementations that use this template may be run from a scheduler
 *         such as Cron or Autosys or as Manual Utility Processes using the UNIX
 *         Command NOHUP.
 * 
 *         IMPORTANT: This Java Process is intended to be run with NOHUP.
 * 
 *         I have released this code under the Apache 2.0 Open Source License.
 *         Please feel free to use this as a template for your own Daemons or
 *         Utility Process Implementations.
 * 
 *         Finally, as you will notice I used STDOUT AND STDERR for all logging.
 *         This is for simplicity of the template. You can use Log4J or Java
 *         Logging or any other log library you prefer. In my professional
 *         experience, I also include an Exception or "Throwable" emailer
 *         mechanism so that our development team receives all exceptions from
 *         any process even front-ends in real time.
 * 
 */
public class DoNothingStandaloneDaemon {

  /*
   * I personally like having a single property file for the configuration of
   * all my batch jobs and utilities. In my professional projects, I actually
   * have a more complex method of properties management, where all properties
   * are stored in a database table, and I have something called a Resource
   * Bundle and Resource Helper facility to manage it.
   * 
   * My blog at EnterpriseProgrammer.com has more information on properties and
   * connection management using this concept.
   * 
   * However for demonstration purposes I am using a simple Properties object to
   * manage all configuration data for the Standalone Process Template. Feel
   * free to replace this field with a more advanced configuration management
   * mechanism that meets your needs.
   */
  private Properties appProps;

  /*
   * This flag ensures that the Cleanup method only runs once. This is because I
   * wanted to have a shutdown hook in case the process receives an interrupt
   * signal, while in the main method I explicitly call cleanup() from the
   * finally block. Technically, based on my implementation, the shutdown hook is
   * only a backup, so it will never actually run unless there's a situation like
   * an interrupt signal.
   */
  private boolean ranCleanup = false;

  /*
   * If this variable is set to true, any exception caused in the cleanup
   * routine will cause the entire process to exit non-zero.
   * 
   * However in my professional experience, we usually just want to log these
   * exceptions, perhaps even email them to the team for investigation later,
   * and allow the process to exit ZERO, so that the batch job scheduler can
   * continue on to the next job, especially if the real execution has completed.
   */
  private boolean treatCleanupExceptionsAsFatal = false;

  /*
   * We need a object monitor to control the background thread used to run the
   * execution loop.
   */
  private Object loopControlLock = new Object();

  /*
   * A flag which tells the start and stop methods if the execution loop thread
   * has started or not.
   */
  private boolean loopStarted;

  /*
   * This flag tells the start, stop, and waitWhileExecution methods if the
   * process loop is running. It is also used to STOP the process loop from
   * running.
   */
  private boolean runProcessing = false;

  /*
   * This parameter needs to be set in order for the process loop to sleep a
   * certain number of seconds between each consecutive call to the actual
   * processing logic method.
   */
  private int processLoopSleepSecs;

  /*
   * This field is used as a counter for the number of processing loop
   * iterations. For debugging, logging, and even custom logic implementation
   * purposes, this is a nice piece of information to have.
   */
  private long loopIterationCnt;

  /*
   * This is the file path for the stop file watcher to watch. When the stop
   * file watcher thread finds the stop file at this location, it will
   * gracefully shutdown the daemon process.
   */
  private String stopFilePath;

  /*
   * We don't want to spend too many cycles watching for a stop file especially
   * since a daemon process normally runs for hours, days, or even weeks, so we
   * have a separate sleep seconds variable to control the interval between file
   * system checks.
   */
  private int stopFileSleepSecs;

  /*
   * This flag tells the start, stop file watcher methods if the file watcher
   * loop is running.
   */
  private boolean runStopFileWatcher;

  /*
   * We need a object monitor to control the background thread used to run the
   * stop file watcher loop.
   */
  private Object stopFileWatcherControlLock = new Object();

  /*
   * A flag which tells the start and stop methods if the stop file watcher loop
   * thread has started or not.
   */
  private boolean stopFileWatcherLoopStarted;

  /**
   * I'm not really using the constructor here. I prefer more explicit init
   * methods. It's a good practice especially if you work with a lot of
   * reflection, however feel free to add some base initialization here if you
   * prefer.
   */
  public DoNothingStandaloneDaemon() {}

  // Start public methods that shouldn't be customized by the user
  // ------------------------------------------------------------------->

  /**
   * The init method wraps two user customizable methods: 1. readProperties(); -
   * Use this to add reads from the appProps object. 2. customProcessInit() -
   * Use this to customize your process before the execution logic runs.
   * 
   * As stated previously, do not touch these methods; they are simple wrappers
   * around the methods you should customize instead and provide what in my
   * professional experience are good log messages for batch jobs or utilities
   * to print out, such as the execution timing information. This is especially
   * useful for long running jobs. You can eventually take average over the
   * course of many runs of the batch job, and then you will know when your
   * batch job is behaving badly, when it's taking too long to finish execution.
   */
  public synchronized void init() {
    long start, end, total;

    System.out.println("Initialization at: " + GetTimeStamp());
    start = System.currentTimeMillis();

    readProperties(); // Hook to the user's read properties method.
    customProcessInit(); // Hook to the user's custom process init method!

    end = System.currentTimeMillis();
    total = end - start;

    System.out.println("Initialization Completed at: " + GetTimeStamp());
    System.out.println("Total Init Execution Time: "
        + CompactHumanReadableTimeWithMs(total));
  }

  /**
   * Because we aren't using a more advanced mechanism for properties
   * management, I have included this method to allow the main() method to set
   * the path to the main properties file used by the batch jobs.
   * 
   * In my professional versions of this template, this method is embedded in
   * the init() method which basically will initialize the Resource Helper
   * component and obtain the properties from the configuration tables instead.
   * 
   * Again you shouldn't touch this method's implementation, instead use
   * readProperties() to customize what you do with the properties after the
   * properties load.
   */
  public void loadProperties(String appPropsPath) throws IOException {
    FileInputStream fis = null;

    try {
      fis = new FileInputStream(appPropsPath);
      appProps = new Properties();
      appProps.load(fis);
    } // End try block
    finally {
      if (fis != null) {
        try {
          fis.close();
        }
        catch (Exception e) {}
      }
    }
  }

  /**
   * This method sets the number of seconds the process loop will sleep between
   * each call to the logic processing method.
   * 
   * @param processLoopSleepSecs
   */
  public void setProcessLoopSleepSecond(int processLoopSleepSecs) {
    this.processLoopSleepSecs = processLoopSleepSecs;
  }

  /**
   * This method sets the number of seconds between each stop file check by the
   * stop file watcher.
   * 
   * @param stopFileSleepSecs
   */
  public void setStopFileWatcherSleepSeconds(int stopFileSleepSecs) {
    this.stopFileSleepSecs = stopFileSleepSecs;
  }

  /**
   * This method sets the file for the stop file watcher to look for.
   * 
   * @param stopFilePath
   */
  public void setStopFilePath(String stopFilePath) {
    this.stopFilePath = stopFilePath;
  }

  /**
   * This method performs the cleanup of any JDBC connections, files, sockets,
   * and other resources that your execution process or your initialization
   * process may have opened or created.
   * 
   * Once again do not touch this method directly, instead put your cleanup code
   * in the customProcessCleanup() method.
   * 
   * This method is called automatically in the last finally block of the main
   * method, and if there's an interrupt signal or other fatal issue where
   * somehow the finally block didn't get called the Runtime shutdown hook will
   * invoke this method on System.exit...
   * 
   * @throws Exception
   */
  public synchronized void cleanup() throws Exception {
    long start, end, total;

    // This prevents cleanup from running more than once.
    if (ranCleanup) {
      return;
    }

    try {
      System.out.println("Starting Cleanup at: " + GetTimeStamp());
      start = System.currentTimeMillis();

      stopStopFileWatcher(); // Make sure the stop file watcher is stopped!

      stopProcessingLoop(); // Make sure the processing loop is stopped!

      customProcessCleanup(); // Hook to the users Process Cleanup Method

      end = System.currentTimeMillis();
      total = end - start;

      System.out.println("Cleanup Completed at: " + GetTimeStamp());
      System.out.println("Total Cleanup Execution Time: "
          + CompactHumanReadableTimeWithMs(total));

      ranCleanup = true;
    } // End try block
    catch (Exception e) {
      /*
       * It is in my experience that the Operating System will clean up anything
       * we have "forgotten" to clean up. Therefore I do not want to waste my
       * production support team members' time at 3 AM to handle
       * "why did a database connection not close" It will close eventually,
       * since it is just a socket, and even if it doesn't we'll catch this in
       * other jobs which may fail due to the database running out of
       * connections.
       * 
       * However I usually have these exceptions emailed to our development team
       * for investigation the next day. For demo purposes I did not include my
       * Exception/Stacktrace Emailing utility, however I encourage you to add
       * your own.
       * 
       * If you really need the process to exit non-ZERO because of the cleanup
       * failing, set the treatCleanupExceptionsAsFatal to true.
       */
      e.printStackTrace();

      if (treatCleanupExceptionsAsFatal) {
        throw e;
      }
    }
  }

  public void startStopFileWatcher() throws InterruptedException {
    Thread t;

    synchronized (stopFileWatcherControlLock) {
      if (runStopFileWatcher) {
        return;
      }

      stopFileWatcherLoopStarted = false;
      runStopFileWatcher = true;

      System.out.println("Starting Stop File Watcher at: " + GetTimeStamp());

      t = new Thread(stopFileWatcherRunner);
      t.start();

      while (!stopFileWatcherLoopStarted) {
        stopFileWatcherControlLock.wait();
      }
    }

    System.out.println("Stop File Watcher Thread Started Running at: "
        + GetTimeStamp());
  }

  public void stopStopFileWatcher() throws InterruptedException {
    synchronized (stopFileWatcherControlLock) {
      if (!stopFileWatcherLoopStarted || !runStopFileWatcher) {
        return;
      }

      System.out.println("Requesting Stop File Watcher Stop at: "
          + GetTimeStamp());

      runStopFileWatcher = false;

      while (stopFileWatcherLoopStarted) {
        stopFileWatcherControlLock.wait();
      }

      System.out.println("Stop File Watcher Stop Request Completed at: "
          + GetTimeStamp());
    }
  }

  /**
   * This method is used to start the processing loop's thread.
   * 
   * Again like the other methods in this section of the class, do not modify
   * this method directly.
   * 
   * @throws InterruptedException
   * 
   * @throws Exception
   */
  public void startProcessingLoop() throws InterruptedException {
    Thread t;

    synchronized (loopControlLock) {
      if (runProcessing) {
        return;
      }

      loopStarted = false;
      runProcessing = true;
      ranCleanup = false;

      System.out.println("Starting Processing Loop at: " + GetTimeStamp());

      t = new Thread(executionLoopRunner);
      t.start();

      while (!loopStarted) {
        loopControlLock.wait();
      }
    }

    System.out.println("Execution Processing Loop Thread Started Running at: "
        + GetTimeStamp());
  }

  /**
   * This method is used to stop or actually "request to stop" the processing
   * loop thread.
   * 
   * It waits while the processing loop is running.
   * 
   * @throws InterruptedException
   */
  public void stopProcessingLoop() throws InterruptedException {
    synchronized (loopControlLock) {
      if (!loopStarted || !runProcessing) {
        return;
      }

      System.out
          .println("Requesting Execution Loop Stop at: " + GetTimeStamp());

      runProcessing = false;

      while (loopStarted) {
        loopControlLock.wait();
      }

      System.out.println("Execution Loop Stop Request Completed at: "
          + GetTimeStamp());
    }
  }

  /**
   * This method will wait while the processing loop is running. Yes, I know we
   * can use Thread.join(); however, what if you want to embed this class in
   * some other larger component? Then you might not want to use the join method
   * directly. I personally like this implementation better, it tells me exactly
   * what I'm waiting on.
   * 
   * @throws InterruptedException
   */
  public void waitWhileExecuting() throws InterruptedException {
    synchronized (loopControlLock) {
      while (loopStarted) {
        loopControlLock.wait(1000);
      }
    }
  }

  /**
   * This is the runnable implementation as an anon inner class which contains
   * the actual execution loop of the Daemon. This execution loop is what really
   * separates the Daemon Process from the Standalone Process batch template.
   * While the Standalone Process template was meant for processes which run a
   * task and then exit once completed, this implementation is meant to keep on
   * running for extended periods of time, re-executing the custom processing
   * logic over and over again after some sleep period.
   */
  private Runnable executionLoopRunner = new Runnable() {
    public void run() {
      try {
        synchronized (loopControlLock) {
          loopStarted = true;
          loopControlLock.notifyAll();
        }

        System.out.println("Executing Loop Thread Running!");

        while (runProcessing) {
          // Hook to the User's Custom Execute Processing
          // Method! - Where the magic happens!
          customExecuteProcessing();

          loopIterationCnt++;

          // Sleep between execution cycles
          try {
            for (int i = 1; runProcessing && i <= processLoopSleepSecs; i++) {
              Thread.sleep(1000);
            }
          }
          catch (Exception e) {}
        } // End while runProcessing loop
      } // End try block
      catch (Exception e) {
        e.printStackTrace();
      }
      finally {
        System.out.println("Execution Processing Loop Exit at: "
            + GetTimeStamp());

        synchronized (loopControlLock) {
          runProcessing = false;
          loopStarted = false;
          loopControlLock.notifyAll();
        }
      }
    }
  };

  /**
   * This is the runnable implementation as an anon inner class which contains
   * the Stop File Watcher loop. A Stop File Watcher is simply a standard file
   * watcher, except when it finds the target file, it will execute the daemon
   * shutdown routine. This is a form of inter-process communication via the
   * file system to enable a separate process or even a simple script to control
   * (or at least stop) the daemon process when it's running under NOHUP. You
   * can simply create a script which creates an empty file using the unix TOUCH
   * command.
   */
  private Runnable stopFileWatcherRunner = new Runnable() {
    public void run() {
      File f;

      try {
        synchronized (stopFileWatcherControlLock) {
          stopFileWatcherLoopStarted = true;
          stopFileWatcherControlLock.notifyAll();
        }

        System.out.println("Stop File Watcher Thread Running!");

        f = new File(stopFilePath);

        while (runStopFileWatcher) {
          // If we find the stop file
          // stop the processing loop
          // and exit this thread as well.
          if (f.exists()) {
            System.out.println("Stop File: '" + stopFilePath + "'  Found at: "
                + GetTimeStamp());
            stopProcessingLoop();
            break;
          }

          // Sleep between file existence checks
          try {
            for (int i = 1; runStopFileWatcher && i <= stopFileSleepSecs; i++) {
              Thread.sleep(1000);
            }
          }
          catch (Exception e) {}
        } // End while runStopFileWatcher loop
      } // End try block
      catch (Exception e) {
        e.printStackTrace();
      }
      finally {
        synchronized (stopFileWatcherControlLock) {
          runStopFileWatcher = false;
          stopFileWatcherLoopStarted = false;
          stopFileWatcherControlLock.notifyAll();
        }
      }
    }
  };

  /**
   * This is the method that adds the shutdown hook.
   * 
   * All this method does is properly invoke the
   * Runtime.getRuntime().addShutdownHook(Thread t); method by adding an
   * anonymous class implementation of a thread.
   * 
   * This thread's run method simply calls the Process's cleanup method.
   * 
   * Whenever I create a class like this, I envision it being ran two ways,
   * either directly from the main() method or as part of a larger component,
   * which may wrap this entire class (A HAS_A OOP relationship).
   * 
   * In the case of the wrapper, adding the shutdown hook might be optional
   * since the wrapper may want to handle shutdown on its own.
   * 
   */
  public synchronized void addShutdownHook() {
    Runtime.getRuntime().addShutdownHook(new Thread() {
      public void run() {
        try {
          cleanup();
        }
        catch (Exception e) {
          e.printStackTrace();
        }
      }
    });
  }

  /**
   * This method is only provided in case you are loading properties from an
   * input stream or other non-standard source that is not a File.
   * 
   * It becomes very useful in the wrapper class situation I described in the
   * comments about the addShutdownHook method.
   * 
   * Perhaps the wrapping process reads properties from a Database or a URL?
   * 
   * @param appProps
   */
  public void setAppProperties(Properties appProps) {
    this.appProps = appProps;
  }

  /**
   * Used to detect which mode the cleanup exceptions are handled in.
   * 
   * @return
   */
  public boolean isTreatCleanupExceptionsAsFatal() {
    return treatCleanupExceptionsAsFatal;
  }

  /**
   * Use this method to set if you want to treat cleanup exceptions as fatal.
   * The default, and my personal preference, is not to make these exceptions fatal.
   * But I added the flexibility into the template for your usage.
   * 
   * @param treatCleanupExceptionsAsFatal
   */
  public void setTreatCleanupExceptionsAsFatal(
      boolean treatCleanupExceptionsAsFatal) {
    this.treatCleanupExceptionsAsFatal = treatCleanupExceptionsAsFatal;
  }

  // ------------------------------------------------------------------->
  // Start methods that need to be customized by the user
  // ------------------------------------------------------------------->
  /**
   * In general for performance reasons and for clarity even above performance,
   * I like pre-caching the properties as Strings or parsed Integers, etc,
   * before running any real business logic.
   * 
   * This is why I provide the hook to readProperties which should read
   * properties from the appProps field (member variable).
   * 
   * If you don't want to pre-cache your property values you can leave this
   * method blank. However I believe it's a good practice especially if your
   * batch process is a high speed ETL Loader process where every millisecond
   * counts when loading millions of records.
   */
  private synchronized void readProperties() {
    System.out.println("Add Your Property Reads Here!");
  }

  /**
   * After the properties are read from the readProperties() method this method
   * is called.
   * 
   * It is provided for the user to add custom initialization processing.
   * 
   * Let's say you want to open all JDBC connections at the start of a process,
   * this is probably the right place to do so.
   * 
   * For more complex implementations, this is the best place to create and
   * initialize all your sub-components of your process.
   * 
   * Let's say you have a DbConnectionPool, a Country Code Mapping utility, an
   * Address Fuzzy Logic Matching library.
   * 
   * This is where I would initialize these components.
   * 
   * The idea is to fail-fast in your batch processes: you don't want to wait
   * until you have processed 10,000 records before some logic statement is
   * triggered to lazily instantiate these components, and then, because of a
   * network issue or a configuration mistake, you get a fatal exception, your
   * process exits, your data is only partially loaded, and you or your
   * production support team members have to debug not only the process but also
   * verify that the portion of the data already loaded made it in ok. This is
   * extremely important if your batch process interacts with real-time system
   * components such as message publishers; maybe you started publishing the
   * updated records to downstream consumers?
   * 
   * Fail-Fast my friends... And as soon as the process starts if possible!
   */
  private synchronized void customProcessInit() {
    System.out.println("Add Custom Initialization Logic Here!");
  }

  /**
   * This is where you would add your custom cleanup processing. If you open any
   * connections, files, sockets, etc. and keep references to these
   * objects/resources opened as fields in your class (which is a good idea in
   * some cases, especially long running batch processes), you need a hook to be
   * able to close these resources before the process exits.
   * 
   * This is where that type of logic should be placed.
   * 
   * Now, you can throw any exception you like; however, the cleanup wrapper
   * method will simply log these exceptions. The idea here is that, even though
   * cleanup is extremely important, the next step of the process is a
   * System.exit, and the operating system will most likely reclaim any
   * resources such as files and sockets which have been left opened, after some
   * bit of time.
   * 
   * Now my preference is usually not to wake my production support guys up
   * because a database connection (on the extremely rare occasion) didn't close
   * correctly. The process still ran successfully at this point, so just exit
   * and log it.
   * 
   * However if you really need to make the cleanup be truly fatal to the
   * process you will have to set treatCleanupExceptionsAsFatal to true.
   * 
   * @throws Exception
   */
  private synchronized void customProcessCleanup() throws Exception {
    System.out.println("Add Custom Cleanup Logic Here!");
  }

  private synchronized void customExecuteProcessing() throws Exception {
    System.out.println("Loop Iteration Count = " + loopIterationCnt
        + " - Add Custom Processing Logic Here!");

    // Uncomment for testing if you want to see the behavior...
    if (loopIterationCnt == 5) {
      throw new Exception(
          "Testing what happens if an exception gets thrown here!");
    }
  }

  // ------------------------------------------------------------------->
  /*
   * Start String Utility Methods These are methods I have in my custom
   * "StringUtils.java" class I extracted them and embedded them in this class
   * for demonstration purposes.
   * 
   * I encourage everyone to build up their own set of useful String Utility
   * Functions please feel free to add these to your own set if you need them.
   */
  // ------------------------------------------------------------------->
  /**
   * This will return a string that is a human readable time sentence. It is the
   * "compact" version because instead of having leading ZERO Days, Hours,
   * Minutes, Seconds, it will only start the sentence with the first non-zero
   * time unit.
   * 
   * In my string utils I have a non-compact version as well that prints the
   * leading zero time units.
   * 
   * It all depends on how you need it presented in your logs.
   */
  public static String CompactHumanReadableTimeWithMs(long milliSeconds) {
    long days, hours, inpSecs, leftOverMs;
    int minutes, seconds;
    StringBuffer sb = new StringBuffer();

    inpSecs = milliSeconds / 1000; // Convert Milliseconds into Seconds
    days = inpSecs / 86400;
    hours = (inpSecs - (days * 86400)) / 3600;
    minutes = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) / 60);
    seconds = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) - (minutes * 60));
    leftOverMs = milliSeconds - (inpSecs * 1000);

    if (days > 0) {
      sb.append(days);
      sb.append((days != 1 ? " Days" : " Day"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    if (hours > 0 || sb.length() > 0) {
      sb.append(hours);
      sb.append((hours != 1 ? " Hours" : " Hour"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    if (minutes > 0 || sb.length() > 0) {
      sb.append(minutes);
      sb.append((minutes != 1 ? " Minutes" : " Minute"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    if (seconds > 0 || sb.length() > 0) {
      sb.append(seconds);
      sb.append((seconds != 1 ? " Seconds" : " Second"));
    }

    if (sb.length() > 0) {
      sb.append(", ");
    }

    sb.append(leftOverMs);
    sb.append((leftOverMs != 1 ? " Milliseconds" : " Millisecond"));

    return sb.toString();
  }

  /**
   * NVL = Null Value. In my experience, most times we want to treat empty or
   * whitespace-only strings as NULLs.
   * 
   * So this method is here to avoid a lot of if (s == null || s.trim().length()
   * == 0) all over the place, instead you will find if(IsNVL(s)) instead.
   */
  public static boolean IsNVL(String s) {
    return s == null || s.trim().length() == 0;
  }

  /**
   * Check is "s" is a numeric value We could use Integer.praseInt and just
   * capture the exception if it's not a number, but I think that's a hack...
   * 
   * @param s
   * @return
   */
  public static boolean IsNumeric(String s) {
    boolean numeric = false;
    char c;

    if (!IsNVL(s)) {
      numeric = true;
      s = s.trim();

      for (int i = 0; i < s.length(); i++) {
        c = s.charAt(i);

        if (i == 0 && (c == '-' || c == '+')) {
          // Ignore signs...
          continue;
        }
        else if (c < '0' || c > '9') {
          numeric = false;
          break;
        }
      }
    }

    return numeric;
  }

  /**
   * Simply returns a timestamp as a String.
   * 
   * @return
   */
  public static String GetTimeStamp() {
    return (new java.util.Date()).toString();
  }

  // ------------------------------------------------------------------->
  // Start Main() Helper Static Methods
  // ------------------------------------------------------------------->
  /**
   * This method returns true if the command line arguments are valid, and false
   * otherwise.
   * 
   * Please change this method to meet your implementation's requirements.
   */
  private static boolean CheckCommandLineArguments(String[] args) {
    boolean ok = false;

    ok = args.length == 4 && !IsNVL(args[0]) && IsNumeric(args[1])
        && !IsNVL(args[2]) && IsNumeric(args[3]);

    return ok;
  }

  /**
   * This prints to STDERR (a common practice), the command line usage of the
   * program.
   * 
   * Please change this to meet your implementation's command line arguments.
   */
  private static void PrintUsage() {
    StringBuffer sb = new StringBuffer();
    sb.append("\nUsage: java ");
    sb.append(DoNothingStandaloneDaemon.class.getName());

    /*
     * Modify this append call to have each command line argument name example:
     * sb.append(
     * " [APP_PROPERTIES_FILE] [SOURCE_INPUT_FILE] [WSDL_URL] [TARGET_OUTPUT_FILE]"
     * );
     * 
     * For this daemon template we use the four arguments appended below.
     */
    sb.append(" [APP_PROPERTIES_FILE] [PROCESS_LOOP_SLEEP_SECONDS] [STOP_FILE_PATH] [STOP_WATCHER_SECONDS]");
    sb.append("\n\n");
    System.err.print(sb.toString());
  }

  /**
   * I usually like the Batch and Daemon Processes or Utilities to print a small
   * Banner at the top of their output.
   * 
   * Please change this to suit your needs.
   */
  private static void PrintWelcome() {
    StringBuffer sb = new StringBuffer();
    sb.append("\n*********************************************\n");
    sb.append("*       Do Nothing Standalone Daemon        *\n");
    sb.append("*********************************************\n\n");
    System.out.print(sb.toString());
  }

  /**
   * This method simply prints the process startup time. I found this to be very
   * useful in batch job logs. I probably wouldn't change it, but you can if you
   * really need to.
   */
  private static void PrintStartupTime() {
    StringBuffer sb = new StringBuffer();
    sb.append("Startup Time: ");
    sb.append(GetTimeStamp());
    sb.append("\n\n");
    System.out.print(sb.toString());
  }

  // Start Main() Method
  // ------------------------------------------------------------------->
  /**
   * Here's your standard main() method which allows you to start a Java program
   * from the command line. You can probably use this as is, once you rename the
   * DoNothingStandaloneDaemon class name to a proper name to represent your
   * implementation correctly.
   * 
   * MAKE SURE: To change the data type of the process object reference to the
   * name of your process implementation class. Other than that, you are good to
   * go with this main method!
   */
  public static void main(String[] args) {
    int exitCode;
    DoNothingStandaloneDaemon daemon = null;

    if (!CheckCommandLineArguments(args)) {
      PrintUsage();
      exitCode = 1;
    }
    else {
      try {
        PrintWelcome();
        PrintStartupTime();
        daemon = new DoNothingStandaloneDaemon();

        // I don't believe cleanup exceptions
        // are really fatal, but that's up to you...
        daemon.setTreatCleanupExceptionsAsFatal(false);

        // Load properties using the file way.
        daemon.loadProperties(args[0]);

        // Set process loop sleep seconds
        daemon.setProcessLoopSleepSecond(Integer.parseInt(args[1]));

        // Set the stop file watcher file path
        daemon.setStopFilePath(args[2]);

        // Set the stop file watcher sleep seconds
        daemon.setStopFileWatcherSleepSeconds(Integer.parseInt(args[3]));

        // Perform daemon initialization,
        // again I don't like overuse of the constructor.
        daemon.init();

        daemon.addShutdownHook(); // Just in case we get an interrupt signal...

        // Start the Stop File Watcher!
        // It is not enabled automatically
        // to make this template more flexible
        // in case you want to embed it in a larger component.
        daemon.startStopFileWatcher();

        // Do the actual business logic execution!
        // If we made it to this point without an exception, that means
        // we are successful, the daemon exit code should be ZERO for SUCCESS!
        daemon.startProcessingLoop();

        // Wait while the execution loop is running!
        daemon.waitWhileExecuting();

        exitCode = 0;
      } // End try block
      catch (Exception e) {
        exitCode = 1; // If there was an exception, the daemon exit code should
        // be NON-ZERO for FAILURE!

        e.printStackTrace(); // Log the exception, if you have an Exception
        // email utility like I do, use that instead.
      }
      finally {
        if (daemon != null) {
          try {
            daemon.stopStopFileWatcher(); // Just in case, stop the stop file watcher
          }
          catch (Exception e) {
            e.printStackTrace();
          }

          try {
            daemon.stopProcessingLoop(); // Just in case, stop the processing loop
          }
          catch (Exception e) {
            e.printStackTrace();
          }

          try {
            // Technically we don't need to do this because
            // of the shutdown hook
            // But I like to be explicit here to show when during a
            // normal execution, when the call
            // to cleanup should happen.
            daemon.cleanup();
          }
          catch (Exception e) {
            // We shouldn't receive an exception
            // But in case there is a runtime exception
            // Just print it, but treat it as non-fatal.
            // Technically most if not all resources
            // will be reclaimed by the operating system as an
            // absolute last resort
            // so we did our best attempt at cleaning things up,
            // but we don't want to wake our developers or our
            // production services team
            // up at 3 in the morning because something weird
            // happened during cleanup.
            e.printStackTrace();

            // If we set the daemon to treat cleanup exception as fatal
            // the exit code will be set to 1...
            if (daemon != null && daemon.isTreatCleanupExceptionsAsFatal()) {
              exitCode = 1;
            }
          }
        }
      } // End finally block
    } // End else block

    // Make sure our standard streams are flushed
    // so we don't miss anything in the logs.
    System.out.flush();
    System.err.flush();
    System.out.println("Daemon Exit Code = " + exitCode);
    System.out.flush();

    // Make sure to return the exit code to the parent process
    System.exit(exitCode);
  }
  // ------------------------------------------------------------------->

} 
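
For reference, a sample invocation of this daemon template might look like the following. The property file name, sleep intervals, and stop file path are illustrative values only, and I am assuming the class lives in the same com.roguelogic.util package as the process template below:

java com.roguelogic.util.DoNothingStandaloneDaemon daemon.properties 30 /tmp/daemon.stop 5

This would run the processing loop on a 30 second sleep interval and check for the stop file /tmp/daemon.stop every 5 seconds; the idea being that creating that file from a control script or scheduler signals the daemon to shut down gracefully.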

Thanks to http://www.palfrader.org/code2html/code2html.html for the Java Code to HTML Conversion…

Closing Remarks:

I believe this post is really self-explanatory, but I’m extremely interested in hearing from you on any comments, questions, or enhancements to my code you may have.

Again, this code is released under the Apache 2.0 Open Source License, so please feel free to use it in your own projects.

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
Posted in Development

Do Nothing Standalone Process – A Template for Batch Jobs and Utility Commands

As promised, I’m sharing my template for how I want batch jobs and other standalone processes, such as utility programs, to be structured. This solves the problem I mentioned in my previous post “Helping your developers to maintain other people’s code”.

Having a standard template which all your batch jobs will follow, cuts down on maintenance and production support costs.

I heavily documented the class, which I wrote in Eclipse specifically for this blog post, so I’m actually going to rely on the code and comments themselves to do most of the talking in this post.

The Code:

Download the Java Code in PDF Format: DoNothingStandaloneProcess

/*
 Copyright 2012 Robert C. Ilardi
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 
 http://www.apache.org/licenses/LICENSE-2.0
 
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 */
 
/**
 * Created Aug 3, 2012
 */
package com.roguelogic.util;
 
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
 
/**
 * @author Robert C. Ilardi
 *
 *         This is a Sample Class for a Standalone Process that would run as
 *         part of a Batch. Implementations that use this template may be run
 *         from a scheduler such as Cron or Autosys or as Manual Utility
 *         Processes.
 *
 *         I have released this code under the Apache 2.0 Open Source License.
 *         Please feel free to use this as a template for your own Batch Job or
 *         Utility Process Implementations.
 *
 *         Finally, as you will notice I used STDOUT AND STDERR for all logging.
 *         This is for simplicity of the template. You can use Log4J or Java
 *         Logging or any other log library you prefer. In my professional
 *         experience, I also include an Exception or "Throwable" emailer
 *         mechanism so that our development team receives all exceptions from
 *         any process even front-ends in real time.
 *
 */
 
public class DoNothingStandaloneProcess {
 
  /*
   * I personally like having a single property file for the configuration of
   * all my batch jobs and utilities. In my professional projects, I actually
   * have a more complex method of properties management, where all properties
   * are stored in a database table, and I have something called a Resource
   * Bundle and Resource Helper facility to manage it.
   *
   * My blog at EnterpriseProgrammer.com has more information on properties and
   * connection management using this concept.
   *
   * However for demonstration purposes I am using a simple Properties object to
   * manage all configuration data for the Standalone Process Template. Feel
   * free to replace this field with a more advanced configuration management
   * mechanism that meets your needs.
   */
  private Properties appProps;
 
  /*
   * This flag ensures that the Cleanup method only runs once. This is because I
   * wanted to have a shutdown hook in case the process receives an interrupt
   * signal, and in the main method I explicitly call cleanup() from the finally
   * block. Technically the shutdown hook, based on my implementation, is only a
   * backup, so it actually will never run unless there's a situation like an
   * interrupt signal.
   */
  private boolean ranCleanup = false;
 
  /*
   * If this variable is set to true, any exception caused in the cleanup
   * routine will cause the entire process to exit non-zero.
   *
   * However in my professional experience, we usually just want to log these
   * exceptions, perhaps even email them to the team for investigation later,
   * and allow the process to exit ZERO, so that the batch job scheduler can
   * continue onto the next job, especially if the real execution has completed.
   */
  private boolean treatCleanupExceptionsAsFatal = false;
 
  /**
   * I'm not really using the constructor here. I prefer more explicit init
   * methods. It's a good practice especially if you work with a lot of
   * reflection, however feel free to add some base initialization here if you
   * prefer.
   */
  public DoNothingStandaloneProcess() {}
 
  // Start public methods that shouldn't be customized by the user
  // ------------------------------------------------------------------->
 
  /**
   * The init method wraps two user customizable methods: 1. readProperties(); -
   * Use this to add reads from the appProps object. 2. customProcessInit() -
   * Use this to customize your process before the execution logic runs.
   *
   * As stated previously, do not touch these methods; they are simple wrappers
   * around the methods you should customize instead and provide what in my
   * professional experience are good log messages for batch jobs or utilities
   * to print out, such as the execution timing information. This is especially
   * useful for long running jobs. You can eventually take average over the
   * course of many runs of the batch job, and then you will know when your
   * batch job is behaving badly, when it's taking too long to finish execution.
   */
  public synchronized void init() {
    long start, end, total;
 
    System.out.println("Initialization at: " + GetTimeStamp());
    start = System.currentTimeMillis();
 
    readProperties(); // Hook to the user's read properties method.
 
    customProcessInit(); // Hook to the user's custom process init method!
 
    end = System.currentTimeMillis();
 
    total = end - start;
 
    System.out.println("Initialization Completed at: " + GetTimeStamp());
    System.out.println("Total Init Execution Time: "
        + CompactHumanReadableTimeWithMs(total));
  }
 
  /**
   * Because we aren't using a more advanced mechanism for properties
   * management, I have included this method to allow the main() method to set
   * the path to the main properties file used by the batch jobs.
   *
   * In my professional versions of this template, this method is embedded in
   * the init() method which basically will initialize the Resource Helper
   * component and obtain the properties from the configuration tables instead.
   *
   * Again you shouldn't touch this method's implementation, instead use
   * readProperties() to customize what you do with the properties after the
   * properties load.
   */
  public synchronized void loadProperties(String appPropsPath)
      throws IOException {
    FileInputStream fis = null;
 
    try {
      fis = new FileInputStream(appPropsPath);
      appProps = new Properties();
      appProps.load(fis);
    } // End try block
    finally {
      if (fis != null) {
        try {
          fis.close();
        }
        catch (Exception e) {}
      }
    }
  }
 
  /**
   * This method performs the cleanup of any JDBC connections, files, sockets,
   * and other resources that your execution process or your initialization
   * process may have opened or created.
   *
   * Once again do not touch this method directly, instead put your cleanup code
   * in the customProcessCleanup() method.
   *
   * This method is called automatically in the last finally block of the main
   * method, and if there's an interrupt signal or other fatal issue where
   * somehow the finally block didn't get called the Runtime shutdown hook will
   * invoke this method on System.exit...
   *
   * @throws Exception
   */
  public synchronized void cleanup() throws Exception {
    long start, end, total;
 
    // This prevents cleanup from running more than once.
    if (ranCleanup) {
      return;
    }
 
    try {
      System.out.println("Starting Cleanup at: " + GetTimeStamp());
      start = System.currentTimeMillis();
 
      customProcessCleanup(); // Hook to the users Process Cleanup Method
 
      end = System.currentTimeMillis();
 
      total = end - start;
 
      System.out.println("Cleanup Completed at: " + GetTimeStamp());
      System.out.println("Total Cleanup Execution Time: "
          + CompactHumanReadableTimeWithMs(total));
 
      ranCleanup = true;
    } // End try block
    catch (Exception e) {
      /*
       * It is in my experience that the Operating System will clean up anything
       * we have "forgotten" to clean up. Therefore I do not want to waste my
       * production support team members' time at 3 AM to handle
       * "why did a database connection not close?" It will close eventually,
       * since it is just a socket, and even if it doesn't, we'll catch this in
       * other jobs which may fail due to the database running out of
       * connections.
       *
       * However I usually have these exceptions emailed to our development team
       * for investigation the next day. For demo purposes I did not include my
       * Exception/Stacktrace Emailing utility, however I encourage you to add
       * your own.
       *
       * If you really need the process to exit non-ZERO because of the cleanup
       * failing, set the treatCleanupExceptionsAsFatal to true.
       */
      e.printStackTrace();
 
      if (treatCleanupExceptionsAsFatal) {
        throw e;
      }
    }
  }
 
  /**
   * This method wraps the customExecuteProcessing() method where you should add
   * your customize process execution logic to.
   *
   * Again like the other methods in this section of the class, do not modify
   * this method directly.
   *
   * For demo purposes I made it throw the generic Exception object so that your
   * customExecuteProcessing() method can throw any Exception it likes.
   *
   * @throws Exception
   */
  public synchronized void executeProcessing() throws Exception {
    long start, end, total;
 
    ranCleanup = false;
 
    System.out.println("Start Processing at: " + GetTimeStamp());
    start = System.currentTimeMillis();
 
    customExecuteProcessing(); // Hook to the User's Custom Execute Processing
                               // Method! - Where the magic happens!
 
    end = System.currentTimeMillis();
 
    total = end - start;
 
    System.out.println("Processing Completed at: " + GetTimeStamp());
    System.out.println("Total Processing Execution Time: "
        + CompactHumanReadableTimeWithMs(total));
  }
 
  /**
   * This is the method that adds the shutdown hook.
   *
   * All this method does is properly invoke the
   * Runtime.getRuntime().addShutdownHook(Thread t); method by adding an
   * anonymous class implementation of a thread.
   *
   * This thread's run method simply calls the Process's cleanup method.
   *
   * Whenever I create a class like this, I envision it being ran two ways,
   * either directly from the main() method or as part of a larger component,
   * which may wrap this entire class (A HAS_A OOP relationship).
   *
   * In the case of the wrapper, adding the shutdown hook might be optional
   * since the wrapper may want to handle shutdown on its own.
   *
   */
  public synchronized void addShutdownHook() {
    Runtime.getRuntime().addShutdownHook(new Thread() {
      public void run() {
        try {
          cleanup();
        }
        catch (Exception e) {
          e.printStackTrace();
        }
      }
    });
  }
 
  /**
   * This method is only provided in case you are loading properties from an
   * input stream or other non-standard source that is not a File.
   *
   * It becomes very useful in the wrapper class situation I described in the
   * comments about the addShutdownHook method.
   *
   * Perhaps the wrapping process reads properties from a Database or a URL?
   *
   * @param appProps
   */
  public synchronized void setAppProperties(Properties appProps) {
    this.appProps = appProps;
  }
 
  /**
   * Used to detect which mode the cleanup exceptions are handled in.
   *
   * @return
   */
  public boolean isTreatCleanupExceptionsAsFatal() {
    return treatCleanupExceptionsAsFatal;
  }
 
  /**
   * Use this method to set if you want to treat cleanup exceptions as fatal.
   * The default, and my personal preference, is not to make these exceptions
   * fatal. But I added the flexibility into the template for your usage.
   *
   * @param treatCleanupExceptionsAsFatal
   */
  public void setTreatCleanupExceptionsAsFatal(
      boolean treatCleanupExceptionsAsFatal) {
    this.treatCleanupExceptionsAsFatal = treatCleanupExceptionsAsFatal;
  }
 
  // ------------------------------------------------------------------->
 
  // Start methods that need to be customized by the user
  // ------------------------------------------------------------------->
 
  /**
   * In general for performance reasons and for clarity even above performance,
   * I like pre-caching the properties as Strings or parsed Integers, etc,
   * before running any real business logic.
   *
   * This is why I provide the hook to readProperties which should read
   * properties from the appProps field (member variable).
   *
   * If you don't want to pre-cache your property values you can leave this
   * method blank. However I believe it's a good practice especially if your
   * batch process is a high speed ETL Loader process where every millisecond
   * counts when loading millions of records.
   */
  private synchronized void readProperties() {
    System.out.println("Add Your Property Reads Here!");
  }
 
  /**
   * After the properties are read from the readProperties() method this method
   * is called.
   *
   * It is provided for the user to add custom initialization processing.
   *
   * Let's say you want to open all JDBC connections at the start of a process,
   * this is probably the right place to do so.
   *
   * For more complex implementations, this is the best place to create and
   * initialize all your sub-components of your process.
   *
   * Let's say you have a DbConnectionPool, a Country Code Mapping utility, an
   * Address Fuzzy Logic Matching library.
   *
   * This is where I would initialize these components.
   *
   * The idea is to fail-fast in your batch processes. You don't want to wait
   * until you have processed 10,000 records before some logic statement is
   * triggered to lazily instantiate these components, only for a network issue
   * or a configuration mistake to cause a fatal exception: your process exits,
   * your data is only partially loaded, and you or your production support
   * team members have to debug not only the process but also whether the
   * portion of the data already loaded made it in ok. This is extremely
   * important if your batch process interacts with real-time system components
   * such as message publishers; maybe you already started publishing the
   * updated records to downstream consumers?
   *
   * Fail-Fast my friends... And as soon as the process starts if possible!
   */
  private synchronized void customProcessInit() {
    System.out.println("Add Custom Initialization Logic Here!");
  }
 
  /**
   * This is where you would add your custom cleanup processing. If you open
   * any connections, files, sockets, etc. and keep references to these
   * objects/resources open as fields in your class, which is a good idea in
   * some cases, especially long running batch processes, you need a hook to be
   * able to close these resources before the process exits.
   *
   * This is where that type of logic should be placed.
   *
   * Now you can throw any exception you like; however, the cleanup wrapper
   * method will simply log these exceptions. The idea here is that, even though
   * cleanup is extremely important, the next step of the process is a
   * System.exit, and the operating system will most likely reclaim any
   * resources such as files and sockets which have been left open, after some
   * bit of time.
   *
   * Now my preference is usually not to wake my production support guys up
   * because a database connection (on the extremely rare occasion) didn't close
   * correctly. The process still ran successfully at this point, so just exit
   * and log it.
   *
   * However if you really need to make the cleanup be truly fatal to the
   * process you will have to set treatCleanupExceptionsAsFatal to true.
   *
   * @throws Exception
   */
  private synchronized void customProcessCleanup() throws Exception {
    System.out.println("Add Custom Cleanup Logic Here!");
  }
 
  private synchronized void customExecuteProcessing() throws Exception {
    System.out.println("Add Custom Processing Logic Here!");
  }
 
  // ------------------------------------------------------------------->
 
  /*
   * Start String Utility Methods. These are methods I have in my custom
   * "StringUtils.java" class. I extracted them and embedded them in this class
   * for demonstration purposes.
   *
   * I encourage everyone to build up their own set of useful String Utility
   * Functions please feel free to add these to your own set if you need them.
   */
  // ------------------------------------------------------------------->
 
  /**
   * This will return a string that is a human readable time sentence. It is the
   * "compact" version because instead of having leading ZERO Days, Hours,
   * Minutes, Seconds, it will only start the sentence with the first non-zero
   * time unit.
   *
   * In my string utils I have a non-compact version as well that prints the
   * leading zero time units.
   *
   * It all depends on how you need it presented in your logs.
   */
  public static String CompactHumanReadableTimeWithMs(long milliSeconds) {
    long days, hours, inpSecs, leftOverMs;
    int minutes, seconds;
    StringBuffer sb = new StringBuffer();
 
    inpSecs = milliSeconds / 1000; // Convert Milliseconds into Seconds
    days = inpSecs / 86400;
    hours = (inpSecs - (days * 86400)) / 3600;
    minutes = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) / 60);
    seconds = (int) (((inpSecs - (days * 86400)) - (hours * 3600)) - (minutes * 60));
    leftOverMs = milliSeconds - (inpSecs * 1000);
 
    if (days > 0) {
      sb.append(days);
      sb.append((days != 1 ? " Days" : " Day"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    if (hours > 0 || sb.length() > 0) {
      sb.append(hours);
      sb.append((hours != 1 ? " Hours" : " Hour"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    if (minutes > 0 || sb.length() > 0) {
      sb.append(minutes);
      sb.append((minutes != 1 ? " Minutes" : " Minute"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    if (seconds > 0 || sb.length() > 0) {
      sb.append(seconds);
      sb.append((seconds != 1 ? " Seconds" : " Second"));
    }
 
    if (sb.length() > 0) {
      sb.append(", ");
    }
 
    sb.append(leftOverMs);
    sb.append((leftOverMs != 1 ? " Milliseconds" : " Millisecond"));
 
    return sb.toString();
  }
 
  /**
   * NVL = Null Value. In my experience, most times we want to treat empty or
   * whitespace-only strings as NULLs.
   *
   * So this method is here to avoid a lot of if (s == null || s.trim().length()
   * == 0) checks all over the place; instead you will find if (IsNVL(s)).
   */
  public static boolean IsNVL(String s) {
    return s == null || s.trim().length() == 0;
  }
 
  /**
   * Simply returns a timestamp as a String.
   *
   * @return
   */
  public static String GetTimeStamp() {
    return (new java.util.Date()).toString();
  }
 
  // ------------------------------------------------------------------->
 
  // Start Main() Helper Static Methods
  // ------------------------------------------------------------------->
 
  /**
   * This method returns true if the command line arguments are valid, and false
   * otherwise.
   *
   * Please change this method to meet your implementation's requirements.
   */
  private static boolean CheckCommandLineArguments(String[] args) {
    boolean ok = false;
 
    /*
     * This is configured to make sure we only have one parameter which is the
     * app properties. We could have made it more advanced and actually checked
     * if the file exists, but just checking that the parameter exists is good
     * enough for demo purposes.
     */
    ok = args.length == 1 && !IsNVL(args[0]);
 
    return ok;
  }
 
  /**
   * This prints to STDERR (a common practice) the command line usage of the
   * program.
   *
   * Please change this to meet your implementation's command line arguments.
   */
  private static void PrintUsage() {
    StringBuffer sb = new StringBuffer();
 
    sb.append("\nUsage: java ");
 
    sb.append(DoNothingStandaloneProcess.class.getName());
 
    /*
     * Modify this append call to have each command line argument name example:
     * sb.append(
     * " [APP_PROPERTIES_FILE] [SOURCE_INPUT_FILE] [WSDL_URL] [TARGET_OUTPUT_FILE]"
     * );
     *
     * For demo purposes we will only use [APP_PROPERTIES_FILE]
     */
    sb.append(" [APP_PROPERTIES_FILE]");
 
    sb.append("\n\n");
 
    System.err.print(sb.toString());
  }
 
  /**
   * I usually like the Batch and Daemon Processes or Utilities to print a small
   * Banner at the top of their output.
   *
   * Please change this to suit your needs.
   */
  private static void PrintWelcome() {
    StringBuffer sb = new StringBuffer();
 
    sb.append("\n*********************************************\n");
    sb.append("*       Do Nothing Standalone Process       *\n");
    sb.append("*********************************************\n\n");
 
    System.out.print(sb.toString());
  }
 
  /**
   * This method simply prints the process startup time. I found this to be very
   * useful in batch job logs. I probably wouldn't change it, but you can if you
   * really need to.
   */
  private static void PrintStartupTime() {
    StringBuffer sb = new StringBuffer();
 
    sb.append("Startup Time: ");
    sb.append(GetTimeStamp());
    sb.append("\n\n");
 
    System.out.print(sb.toString());
  }
 
  // Start Main() Method
  // ------------------------------------------------------------------->
 
  /**
   * Here's your standard main() method which allows you to start a Java program
   * from the command line. You can probably use this as is, once you rename the
   * DoNothingStandaloneProcess class name to a proper name to represent your
   * implementation correctly.
   *
   * MAKE SURE: To change the data type of the process object reference to the
   * name of your process implementation class. Other than that, you are good to
   * go with this main method!
   */
  public static void main(String[] args) {
    int exitCode;
    DoNothingStandaloneProcess process = null;
 
    if (!CheckCommandLineArguments(args)) {
      PrintUsage();
      exitCode = 1;
    }
    else {
      try {
        PrintWelcome();
 
        PrintStartupTime();
 
        process = new DoNothingStandaloneProcess();
 
        process.setTreatCleanupExceptionsAsFatal(false); // I don't believe
                                                         // cleanup exceptions
                                                         // are really fatal,
                                                         // but that's up to
                                                         // you...
 
        process.loadProperties(args[0]); // Load properties using the file way.
 
        process.init(); // Perform process initialization, again I don't
                        // like overuse of the constructor.
 
        process.addShutdownHook(); // Just in case we get an interrupt signal...
 
        process.executeProcessing(); // Do the actual business logic
                                     // execution!
 
        // If we made it to this point without an exception, that means
        // we are successful, the process exit code should be ZERO for SUCCESS!
        exitCode = 0;
      } // End try block
      catch (Exception e) {
        exitCode = 1; // If there was an exception, the process exit code should
                      // be NON-ZERO for FAILURE!
        e.printStackTrace(); // Log the exception, if you have an Exception
                             // email utility like I do, use that instead.
      }
      finally {
        if (process != null) {
          try {
            process.cleanup(); // Technically we don't need to do this because
                               // of the shutdown hook
            // But I like to be explicit here to show when during a
            // normal execution, when the call
            // to cleanup should happen.
          }
          catch (Exception e) {
            // We shouldn't receive an exception
            // But in case there is a runtime exception
            // Just print it, but treat it as non-fatal.
            // Technically most if not all resources
            // will be reclaimed by the operating system as an
            // absolute last resort
            // so we did our best attempt at cleaning things up,
            // but we don't want to wake our developers or our
            // production services team
            // up at 3 in the morning because something weird
            // happened during cleanup.
            e.printStackTrace();
 
            // If we set the process to treat cleanup exception as fatal
            // the exit code will be set to 1...
            if (process != null && process.isTreatCleanupExceptionsAsFatal()) {
              exitCode = 1;
            }
          }
        }
      } // End finally block
    } // End else block
 
    // Make sure our standard streams are flushed
    // so we don't miss anything in the logs.
    System.out.flush();
    System.err.flush();
 
    System.out.println("Process Exit Code = " + exitCode);
 
    System.out.flush();
 
    // Make sure to return the exit code to the parent process
    System.exit(exitCode);
  }
 
  // ------------------------------------------------------------------->
 
}
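
For reference, a sample invocation of this template might look like the following (the properties file name is just an example):

java com.roguelogic.util.DoNothingStandaloneProcess app.properties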
 

Closing Remarks:

I hope you can see why such a simple template for Batch Jobs and other Standalone Processes, such as Utility Commands, can really help keep your code base clean and ensure anyone within your organization can debug, enhance, and support most if not all processes based on this template.

I’m very interested in any comments about how you use this template or one like it in your professional and personal programming projects; whether this template has given you any ideas; whether you have made any improvements to it; and in general any other comments you may have.

In my next post, I plan on discussing and sharing my template for Daemon Processes, which I call DoNothingStandaloneDaemon. It is very similar to this template, except that, when combined with the Unix command nohup, it will run as a background process on a Unix/Linux Server. The process itself has some special utility functions to help make it an enterprise caliber daemon process, which can be controlled via a Batch Scheduler or other external Control Processes.

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
Posted in Development

Project Thunderbolt – Robert’s Tesla Coil Project

===================================
Project Name: Thunderbolt
Project Domain: High Voltage Physics
===================================

Goal: To create a full scale Tesla Coil that produces at least 6in sparks, aka “artificial lightning”.

Current Status: Project was a success, with a complete full scale, full power test on Friday, July 15, 2011!

Me and My Tesla Coil:

Tesla Coils and Nikola Tesla: Check out WikiPedia for more information on what a Tesla Coil is and the history about them. Also, please read about their inventor, definitely one of the most important inventors throughout Human History, Nikola Tesla.

My interest in Tesla Coils started when I first visited Liberty Science Center as a child. They had a fully working Tesla Coil on display and would run demos, creating Artificial Lightning at will with the flip of a switch.

Ever since then I wanted to build and possess my very own Tesla Coil.

When I was younger I never attempted it because of the huge voltages involved, as well as a general lack of funding. My 2011 build cost approximately $1000.00 in parts and materials. I could probably build one for less now, having mastered the build, but a lot of my first Tesla Coil build was trial and error, and there were a bunch of failed parts and ideas, which increased the total cost of completing a working Tesla Coil.

Here’s the Part List of my Thunderbolt Tesla Coil:

  1. 800 Feet of 24 AWG Magnet Wire
  2. 12,000V, 30MA Neon Sign Transformer (Purchased used from eBay) non-Fault Tolerant (this is important, a Fault Tolerant Transformer will cause your Tesla Coil not to work).
  3. 48X 0.49nF 20,000V Capacitors (I purchased 200 of these for about a dollar apiece off eBay. They are normally used in High Frequency Pulse Lasers.)
  4. 30 feet of 12 gauge solid copper wire (bare).
  5. 3in diameter, 24in length of PVC Pipe.
  6. 2X matching 3in PVC flange
  7. 2X 24x24in plywood board
  8. 16in 1×4 Wood Plank
  9. 2X 2in wide Steel brackets
  10. 2X 3in screws with matching washers and nuts.
  11. Line Filter to protect the house mains.
  12. 100 feet of 14 gauge insulated copper wire
  13. Erector Set for building metal frames for Capacitor Tank.
  14. Aluminum Dryer Vent Flex Hose.
  15. Various bits of 2×4 Wood for standoffs, etc.
  16. Various screws for mounting everything
  17. Various cuts of thin plywood for primary coil mounting.
  18. Liquid Nails glue for mounting parts that cannot be held together by conductive material such as screws.

Circuit Diagram:

Special thanks to the Tesla Coil Wikipedia Article for supplying the Circuit Diagram, and specifically the creator: Wikipedia User: Omegatron.

Creating the Spark:

I used the two 2in steel brackets and the 3in screws to create a static spark gap. It’s not the most efficient spark gap for Tesla Coils these days, but it’s the easiest to build, and if you add a PVC pipe to cover the gap and connect a high power vacuum like a shop vac/wet-dry vac you can create a so-called “sucker spark gap” which will increase the efficiency of the Spark Gap. However, even without the Vacuum enhancement the static spark gap still works well for creating 6in – 12in arcs and a wireless energy field that will light up an 18in fluorescent tube 5-6 feet away from the Tesla Coil.

Here’s the fully constructed Spark Gap. I did re-align it, as you can see in this picture the screws are not facing each other perfectly square.

Here’s a picture of the complete spark gap in operation (the full Tesla Coil setup is pictured and visible in the background):

Here’s a test of the completed Spark Gap on my dining room table, with the gap connected directly to the Neon Sign Transformer; this is for fine-tuning of the gap. You need to ensure it can arc with the Transformer connected alone to prevent a full short circuit:

Check out this video of the spark gap in operation when connected to the Tesla Coil. What you will notice is that the spark is of much greater size, brightness, and overall power than in the video of the spark gap connected directly to the Neon Sign Transformer, which I tested on my dining room table:

Creating the Capacitor Tank:

From a capacitance standpoint, the value is quite low compared to, say, an LED flasher or something like that; it’s in the nano-farad range. However you need extremely high voltage, and the capacitors need to be able to withstand high frequency charging-discharging cycles.

I used 48 of my 0.49nF Pulse Capacitors in the following layout:

2 capacitors in series, connected in banks of 4 of these pairs in parallel, for a total of 8 capacitors per rack, created out of erector set pieces. The banks are pictured here:

I then connected these banks in parallel with duplicate banks, totaling 6 banks of 8 capacitors, giving a total of 48 capacitors in this matrix configuration. This matrix of series and parallel capacitors gives a total measured capacitance of between 5.95 and 6.11 nano-farads by my multi-meter.
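As a quick sanity check on that measurement: two 0.49nF capacitors in series give 0.49 / 2 = 0.245nF per pair; four pairs in parallel give 4 x 0.245 = 0.98nF per bank; and six banks in parallel give 6 x 0.98 = 5.88nF. That calculated value lines up well with the 5.95 to 6.11 nano-farads I measured, with component tolerances easily accounting for the difference.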

Here’s the completely Assembled Capacitor “Tank”:

This physical layout caused arcs between the banks which were in parallel, plus the wires would arc right through the insulation, so I refactored it into a double-stacked layout:

Assembling the Secondary Coil:

It took me around 4 hours straight to wind the 800 feet of 24 gauge Magnet wire around the 3in PVC. 800 feet wound over the 9.43in circumference of the PVC pipe gives you a little over 1000 winds (800 feet is 9,600 inches, and 9,600 / 9.43 ≈ 1018 turns). This is perfect, because my goal was a primary to secondary coil ratio of 1:100. That is 1 turn of the primary to 100 turns of the secondary.

The secondary itself is the nice upright red magnet wire most people associate with Tesla Coils. It usually has a tubular or spherical terminal at the top where the discharges occur.

Here’s me, winding the secondary coil:

What you need to try to do is not have any overlaps in your coil. A few here and there don’t hurt the result, I found. But take your time; otherwise it’s a waste.

Here’s the completed secondary coil without terminal:

I mentioned at the top of the secondary coil you need a spherical or tubular terminal.

I used 3in diameter Aluminum Dryer Vent Flex Hose to create a circular tube at the top of my secondary coil.

Here’s a picture of the terminal:

As you can see, I used Aluminum foil to close off any gaps in the flex hose, and the shape is more of an oval. I also used Liquid Nails glue to fasten the tube down to a piece of wood, which I then fastened to one of the PVC flanges, so that I can connect it easily to the top end of my secondary coil.

I spent a lot of time thinking about how to make this entire setup modular, so I could transport it from my house to my mom’s and friend’s houses for demonstrations. The flanges worked great for this purpose.

Here’s the completed secondary coil setup with the base board. I used the second PVC flange to connect the secondary coil to the plywood board, again using liquid nails, as I didn’t want any conductive materials where I could have a stray arc hitting the structure.

Creating the Primary Coil:

Although the primary coil only has 10 turns, it was more challenging for me to build than the secondary coil. I went through a few iterations before getting it right.

I used the 12 gauge solid bare copper wire to create the primary coil.

I eventually added one more turn of this wire to create a strike rail, which is connected right to ground to ensure there is no arc from the secondary to the primary, which would destroy the entire coil. The strike rail is simply another turn of copper wire at the very top of the primary coil structure; importantly, it is not a complete circle, it needs to be left open, and it has one connection straight to true ground (the Earth).

Other than this, you just need to connect everything up as shown in the circuit diagram above.

Here’s some photos of the Tesla Coil completed and operating as well as some video:

The Tesla Coil Operating in my backyard. The target object is a steel wrench on a camera tripod:

Here’s a video of me holding an 18in fluorescent bulb, acting as a human ground; it proves my Tesla Coil is capable of Wireless Energy Transmission:

Wireless Electricity works! And yes, my Tesla Coil produces it!

Here’s a video of the Tesla Coil just striking a grounded target, the camera is directly below the coil, so it’s a great view of the lightning:

This is a head on view of the lightning. It’s a pretty interesting angle:

One of the first Full Power Tests:

For more videos please check out the following from the Ilardi.com Tesla Coil Page.

I hope this post was fun and interesting. I would really appreciate your comments!

Just Another Stream of Random Bits…
– Robert C. Ilardi
 
Posted in Randomness

Helping your developers to maintain other people’s code

What is so difficult about maintaining another developers code? How can you as a development manager or architect ensure that all developers on your team can maintain any other team member’s code?

These are the two questions I want to quickly answer in this post. This post will lead into two other posts about creating a model or template for Batch Jobs and another one for Standalone Daemon Processes.

To answer the first question I posed above, ask yourself “When I was a developer, what was the hardest thing about trying to fix a bug in someone else’s code? Where do I start?” You will quickly realize that the answer lies in the question itself: “Where do I start?” This is usually the most difficult problem faced by a developer when they begin looking at someone else’s process.

As a developer, the first thing you want and need to do is run the process on your own machine or in your own development environment so that you can watch what it’s doing and try to observe the bug for yourself. You need to do this on your own machine for a variety of reasons, the first being that it’s obviously not safe, and in most industries or companies not even allowed, to debug in a production environment.

Setting up your development environment to correctly run and then debug someone else’s code is usually the most difficult thing about fixing a bug. The bug itself, while challenging, is usually a secondary issue when you first take over maintaining someone’s code.

As a development manager or architect you can help eliminate this issue altogether by creating a template or model for ALL Batch Jobs in your system, and then again for ALL Daemon Processes in your system.

It sounds simple, and when you think about it, you will probably ask yourself “Aren’t most systems architected in this way?” The answer from my experience is NO. Usually the collection of Batch Jobs and Daemon Processes in large enterprise class systems varies as much as the number of core developers or team leads you have in your development group.

This presents a big problem when it comes to maintenance of these jobs and processes, because you cannot quickly ramp up fixing bugs or making enhancements, especially when the developer who originally wrote the process leaves the company or moves on to another project.

Also, the ability to reverse engineer someone’s process is a special skill that not all developers possess. I have found that usually if I can set up a process to run in a team member’s development environment, they can then fix the bug or make the enhancement; but the actual setup is usually the problem, which I can only rely on a handful of people to cover.

I have touched upon this issue in my other posts on How we Build Software, especially around resource management, where I spoke about standardizing how properties, configuration, and database connections are loaded and managed.

I’m limiting this post to Batch Jobs and Daemon Processes; however, the same issue exists across all types of components of an enterprise system, from Middleware to UIs. I chose to limit our discussion today to Batch Jobs and Daemons because they are the most common and simplest examples where differences in coding style directly affect the team’s ability to turn around fixes and enhancements quickly.

Also, these elements are usually where the most freedom of developer’s coding expression can take form.

By creating a template for ALL Batch Jobs and Daemon Processes within your systems, which you mandate is followed by all your team leads, architects, and developers, you will ensure that the maintenance responsibilities for these processes can be transferred more easily from team member to team member.

Once your developers can run one of your Jobs or Processes, you can be sure they can run ANY of these jobs, and once they have them running in their own development environment, debugging and enhancements follow much more quickly.

If you combine this with my recommendations on resource management, building an Enterprise Commons, and everything else I mentioned in my how we build software post, I’m sure you will have a consistent system which can be maintained for decades.

In my next two posts, I plan on going over my own two templates for Batch Jobs and Standalone Daemon Processes. I will even give you the actual code of the templates.

Just Another Stream of Random Bits…
– Robert C. Ilardi

Posted in Software Management

Data Record Request Framework

In this post we will discuss a framework I have designed and implemented that is used to store and manage “Request Data” separately from the golden source or actual database.

The root concept and benefit of this is that the Request Framework plus the Request Data Model enables a system or application to keep in-flight data, which may or may not be associated with a workflow process, separate from the live production data.

Because of this, we sometimes refer to this as In-Flight or Staging Area data.

Another common use for this framework is in large scale complex web applications such as a Tax Forms application or some other web application with potentially Hundreds of fields.

While a user is working on the Data Record (the sum total of fields that a Request Type, or the set of screens within the application, supports editing), which we refer to as the “Request” itself, the system has the freedom of saving the data at any time without affecting the Live Production data, or what we refer to as the Golden Source Record or Copy of the data.

A common web application scenario where you might want to use this is when, for scalability purposes, you don’t want to store the Request object in the HTTP Session and instead you just store the Request Id, which associates the user with a record in the Request Data Model. Now, say in a “Wizard-like” application, every time the user clicks the Next button to proceed to the next section of the set of forms, the app can save the request updates to the database, again without affecting the Golden Source until the full set of form pages in the Wizard’s guided path is completed.
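To make that scenario concrete, here is a toy sketch of a Wizard’s Next-button handler in Java. Everything here is illustrative: the class and method names are hypothetical, and the in-memory map is just a stand-in for the Request Framework’s Store and Load APIs writing to the Request Data Model tables described later in this post.

import java.util.HashMap;
import java.util.Map;

public class WizardNextSketch {

  // Stand-in for the Request Database: Request Id -> name-value pairs.
  // In the real design these pairs would live in the Data Map table.
  private static final Map<Long, Map<String, String>> REQUEST_STORE = new HashMap<Long, Map<String, String>>();

  /**
   * Called on each Next click. Only the Request Id would be kept in the HTTP
   * Session; the full Request is loaded, updated, and stored here.
   */
  public static void onNext(long requestId, Map<String, String> pageFields) {
    // Load the in-flight request (never the Golden Source record).
    Map<String, String> request = REQUEST_STORE.get(requestId);

    if (request == null) {
      request = new HashMap<String, String>();
    }

    // Merge this page's fields into the request.
    request.putAll(pageFields);

    // Persist the updated request; the Golden Source stays untouched until
    // the entire Wizard path completes.
    REQUEST_STORE.put(requestId, request);
  }

  public static void main(String[] args) {
    Map<String, String> page1 = new HashMap<String, String>();
    page1.put("requestorName", "John Smith");

    onNext(6474721L, page1);

    System.out.println(REQUEST_STORE.get(6474721L));
  }
}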

In a workflow based system, you can use this concept to store the data in a persistent data store such as a database while the Workflow request itself is traveling from step to step or queue to queue in the Process path. Complex business workflows sometimes take days or even weeks to complete a single request in certain circumstances (say if you want to open an account for a client of a bank and you are waiting for them to send you signed documents), and therefore keeping the request data in a persisted state while in-flight is invaluable in a workflow based system.

Some workflow engines support this concept out of the box, that is, storing user defined fields in the workflow database tables themselves; however, I have found this to be inflexible, and if we refer back to my Adapter-Factory model for Vendor Product Integration, you want to minimize the use of “extended” non-core product functions for the sake of portability.

What is the Request Framework exactly?

The Request Framework is a combination of three components.

  1. Request APIs
    1. Store
      1. Stores in a target abstract data store (aka the Request Database) the Name-Value Pair set transformed from the in-memory Request Object Model via the Object Codec. (The store could be a database, a file, or any other persistent data storage mechanism. I have also used the transformed name-value pairs to serialize an object over sockets.)
    2. Load
      1. This is simply the opposite I/O operation of the Store API. It loads the Name-Value Pair form from the data store, and using the Object Codec transforms the data back into an in-memory Request Object Model object.
    3. Archive
      1. I use this API to move Requests that have completed their workflow process to duplicate Request Data Map and Narrows Map tables which I call the archive version of these tables.
      2. This is used to ensure the performance of loading and storing the requests which are still active in the workflow process is maintained over the lifetime of the application. As Request Counts grow, we don’t want completed requests, which will not be loaded often, to slow down the performance of the main tables. The tables, whose form is described below, are very narrow, but become very tall due to the nature of the highly normalized form of name-value pair storage.
      3. I have put a check in my implementations of the load API to detect if a Request is in the Active or Archive tables, and load the request no matter where it is. This is useful when an auditor comes and wants to see a request from N-number of years ago.
    4. Clone
      1. This API again is self-explanatory. Often users want to “copy” a request they already submitted and then just change the few fields they need to create the new request. This is one of the key user activities that benefits from this API. However, some internal system operations can also benefit from it.
      2. It can also be used to clone a request from a production environment to a UAT environment for production support testing and debugging of a production issue with a particular request.
    5. Delete
      1. Depending on the nature of the business, you may need to differentiate between physically removing a request from the database and simply marking it as deleted, often referred to as a Logical Delete.
      2. You can put a flag in your request transfer object to this API so the implementation can support both physical and logical deletes.
      3. Logical deletes are used very often over physical deletes in highly regulated industries, due to auditing requirements.
  2. The Object Codec
    1. The Object Codec implementation that I prefer to use in my own systems will be saved for the next article I post. However for now, all you need to know is that you need a way to “Serialize” a Request Object to some text-based format for fast and easy storage to a Persistent Data Store such as a Database; that’s the Encode Half of the Codec. And the Decode Half of the Codec is the implementation to take the Text-Based form of the Request Object and “De-Serialize” it back to the In-Memory Request Object, once retrieved from the Data Store. The actual Data Store functions are separate from the Object Codec by design, so that many different types of Data Storage implementations can be used without bloating the code of the Object Codec. The only job of the Object Codec should be to Serialize and De-Serialize the Request Object.
  3. The Request Data Model
    1. This is the final piece of the puzzle. The Request Data Model is designed to store and load any single Request extremely quickly (in the case of my systems, sub-second). In my experience we usually test the performance of the Data Model with a Request Object payload of around 500 to 1000 fields per request.
    2. The data model must be designed to accommodate the Serialized Form produced by your Object Codec Implementation.

The Request Framework

The Request Framework is the set of APIs that wrap the calls to the Object Codec and the Data Store Persistence layer to interact with the Request Data Model, in my systems this is usually JDBC. I prefer direct JDBC over ORM Frameworks, for both speed and fine-grain control over the SQL to keep to sub-second store and load times usually required by my application users.
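To summarize the shape of those APIs, here is a minimal sketch of what the framework’s surface could look like in Java. The interface and method signatures below are my illustration of the five APIs listed above, not code from an actual implementation:

/**
 * Illustrative sketch of the Request Framework API surface.
 * R is the application's Java-Bean compliant Request Object type.
 */
public interface RequestApi<R> {

  // Encodes the request via the Object Codec and persists the
  // name-value pairs to the Request Data Model; returns the Request Id.
  long store(R request) throws Exception;

  // Loads the name-value pairs (from the active or archive tables)
  // and decodes them back into an in-memory Request Object.
  R load(long requestId) throws Exception;

  // Moves a completed request to the archive tables.
  void archive(long requestId) throws Exception;

  // Copies an existing request under a new Request Id.
  long clone(long requestId) throws Exception;

  // Removes a request; a logical delete only marks the request as deleted.
  void delete(long requestId, boolean logicalDelete) throws Exception;
}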

Solution Overview:

  • Request Objects Flexibility
    • Developers can design any complex Java-Bean Compliant Object as a request object, without having to take into consideration the database model.
    • Request Objects should encapsulate all fields related to the Golden Source Data Model as Object Model Objects within a root Request object class.
    • If it’s a workflow driven system, the Workflow Process Keys should also be contained within the Request Object.
    • Request Processing, Golden Source Writes, and Workflow Actions can eventually be handled in a layer I refer to as Smart Persistence, which we will discuss in a separate article.
    • If the Golden Source Data Model contains distinct data entities, then there should be one Request Class for each Data Model Entity.
    • Also if required by business requirements, there can be combination Request Types; requests that combine multiple entity types from the Data Model.
      • However in my experience you should always start with a single Request Object for each Data Model Root Entity. (Examples: Account Request, Client Request, Product Request)

Serialized Form:

I prefer to serialize or “transform” an Object in-memory to text based Name-Value Pairs. The Name or Key of the pair is the fully-qualified Variable or Field Name using the “.” (period/dot) object notation and “[ ]” array notation for array elements.

There are only name-value pairs for “scalar” non-user-defined objects. Therefore only built-in types, plus Strings, Dates, Enums, and other basic types can be stored as a name value pair. But since all user defined data types are simple Objects which contain the native or built in types for the actual data elements, user-defined objects are stored as multiple name-value pairs, one pair for each variable within the user-defined type.

Expanding upon this, we can store N-level nested object’s data using the Dot object dereferencing notation to create the fully-qualified names.

Examples of Names:

Note: Root Object Name is: AccountRequest (this will NOT be included in the fully-qualified name).

    • addresses[0].line1
    • addresses[1].type
    • ratings.sAndP.ratingValue
    • requestorName
    • requestId

The values of the name-value pairs are the String representation of the field or variable’s actual value. For a String, this would be the value itself; numbers (int, float, double, long, short) are easily converted to text representations. Other built-in types such as Date objects, which most modern languages support, can be converted either to a parsable Date-Timestamp string which the Decoder/Deserializer can convert back into the data object, or even to a Long integer which is the date’s representation as milliseconds elapsed since some Epoch. The value can be any text representation of the variable’s value which can be efficiently parsed back into the native data type in-memory once the name-value pair is processed by the Deserializer/Decoder of the ObjectCodec.

Examples of name-value pairs:

    • addresses[0].line1 = 123 Main Street
    • addresses[1].type = Mailing Address
    • ratings.sAndP.ratingValue = AAA
    • requestorName = John Smith
    • requestId = 6474721
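
To tie these examples together, here is a minimal, illustrative sketch of the Encoder half of such an ObjectCodec, written with plain reflection. This is a simplified reconstruction of the idea, not my actual ObjectCodec implementation (which I will cover in the upcoming article); a production codec would also handle nulls, maps, arrays, inherited fields, object cycles, and the polymorphism metadata described below:

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Walks a bean's fields reflectively and emits fully-qualified
// name-value pairs using dot and [index] notation.
public final class SimpleEncoder {

  public static Map<String, String> encode(Object root) throws Exception {
    Map<String, String> pairs = new LinkedHashMap<>();
    encodeFields("", root, pairs); // root object name is NOT included
    return pairs;
  }

  private static void encodeFields(String prefix, Object obj, Map<String, String> pairs)
      throws Exception {
    for (Field f : obj.getClass().getDeclaredFields()) {
      if (Modifier.isStatic(f.getModifiers())) continue; // skip constants
      f.setAccessible(true);
      String name = prefix.isEmpty() ? f.getName() : prefix + "." + f.getName();
      encodeValue(name, f.get(obj), pairs);
    }
  }

  private static void encodeValue(String name, Object value, Map<String, String> pairs)
      throws Exception {
    if (value == null) {
      return; // a production codec would record nulls explicitly
    } else if (isScalar(value)) {
      pairs.put(name, toText(value));
    } else if (value instanceof List) {
      List<?> list = (List<?>) value;
      for (int i = 0; i < list.size(); i++) {
        encodeValue(name + "[" + i + "]", list.get(i), pairs); // array notation
      }
    } else {
      encodeFields(name, value, pairs); // user-defined type: recurse into it
    }
  }

  private static boolean isScalar(Object v) {
    return v instanceof String || v instanceof Number || v instanceof Boolean
        || v instanceof Character || v instanceof Date || v instanceof Enum;
  }

  private static String toText(Object v) {
    // Dates stored as milliseconds since the Epoch, so they parse back losslessly.
    return (v instanceof Date) ? Long.toString(((Date) v).getTime()) : String.valueOf(v);
  }
}

Running encode(...) over an AccountRequest populated like the example above would produce exactly the kind of pairs listed here, such as addresses[0].line1 = 123 Main Street.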

The Request Data Model

The Request Data Model can be reduced to a Conceptual Model of only THREE basic entities or tables (the Narrows Map and Workflow States tables described below are optional extensions beyond this core). The diagram below shows these tables and their cardinality.

Conceptual Model:

Logical Model:

The Tables:

  • Request
    • This is the “main” table of the request data model.
    • Contained within it is the basic data about a request, otherwise called the “header”.
    • For each unique Request Id there is one and only one row in this table.
    • Table Structure:

  • Data Map
    • The data map table stores the Name-Value Pairs of the requests.
    • For a single unique Request Id, there may be N-number of rows of Name-Value Pairs within the Data Map table.
    • There is at least ONE row in this table for every primitive/native built-in data type or ObjectCodec-supported Data Type within the Java-Bean compliant Request Object model.
      • The value field is NOT defined as a CLOB/BLOB; instead, for efficiency, it’s defined as a VARCHAR.
        • For elements whose data length is longer than the length of the VARCHAR field defined in the database table, we introduce a sequence number field, and the name-value pair is split across multiple rows.
          • When the Request Data Map is being loaded back from the database, the name-value pairs which have been split into multiple rows will be concatenated back into a single value, using the sequence number to ensure the proper ordering when reassembling the string representation of the variable value.
          • Dividing the LENGTH of the VALUE by the MAX LENGTH of the VARCHAR field defined in the database gives the number of rows the name-value pair needs to be split into; if it doesn’t divide evenly, add 1. You can check the remainder with the modulus operator, or use integer division: multiply the quotient by the VARCHAR length, subtract that from the actual data length, and if the result is greater than ZERO, add 1 row. (A small sketch of this arithmetic follows this list.)
    • Table Structure:

  • Narrows Map
    • This table is only used when a variable or field within the Request Object Model is declared as a base or abstract type (basically, we are using Polymorphism), and the field references some sub-class or concrete type.
    • The concrete data type information, mainly the fully-qualified class name is stored in this table, associated with the object notation path of the field that references it.
    • This is so the ObjectCodec can properly decode complex Request Objects where the original creation code of the Request Object leveraged the properties of the language to use Polymorphism.
    • This is sort of an extended feature; in your own projects, if you want to use this name-value pair design for storing request data, you can leave this part out and just adopt a coding convention that restricts the use of polymorphism within your request object model.

  • Request Xref
    • Xref, of course, is short for Cross Reference, a table commonly defined in many relational database schemas.
    • The Request Cross Reference, in this case, is used to store Unique IDs or Keys, other than the Request ID itself, that are related to the Request.
    • These can be IDs for the workflow engine to use.
    • They can also be application specific IDs, such as a Golden Source primary key, so that we can track which requests have been associated with that Golden Source record for reporting and audit trail purposes. (Although there are many other ways to achieve this, depending on your data model).
    • It can also be used to relate this request to a request within another system, in cases where you have programmatic inter-system integration (an external system can raise a request, or update data on a request, within your system; i.e. Enterprise Application Integration).
    • Table Structure:

  • Workflow States
    • This may be a set of tables, depending on your workflow audit trail requirements.
    • These tables are defined to store Workflow Step Audit information, such as the usernames and actions the user took at each step within a workflow process for a particular Request.
    • Now, the workflow engine itself stores this information; however, in my systems I duplicate it outside of the workflow’s native data store, to stay loosely coupled between my system and the vendor-supplied workflow engines. Again, see my Adapter-Factory Vendor Product Integration Model for more information on this.
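
Returning to the Data Map’s row-splitting rule mentioned above, here is a minimal sketch of the arithmetic in both the integer-division and modulus forms; the class and method names are illustrative:

// Illustrative sketch of the Data Map row-split arithmetic.
// maxVarchar is the defined VARCHAR length in the database table.
public final class RowSplitter {

  // Number of rows a value must be split into (integer-division form).
  public static int rowsNeeded(String value, int maxVarchar) {
    int rows = value.length() / maxVarchar;
    if (value.length() - (rows * maxVarchar) > 0) {
      rows++; // a remainder is left over: one more (partial) row
    }
    return Math.max(rows, 1); // even an empty value occupies one row
  }

  // Equivalent modulus form: add 1 when the length doesn't divide evenly.
  public static int rowsNeededMod(String value, int maxVarchar) {
    int rows = value.length() / maxVarchar + (value.length() % maxVarchar > 0 ? 1 : 0);
    return Math.max(rows, 1);
  }

  // Splits the value into sequence-numbered chunks; the loader concatenates
  // them back in sequence order to reassemble the original string.
  public static String[] split(String value, int maxVarchar) {
    int rows = rowsNeeded(value, maxVarchar);
    String[] chunks = new String[rows];
    for (int seq = 0; seq < rows; seq++) {
      int start = seq * maxVarchar;
      chunks[seq] = value.substring(start, Math.min(start + maxVarchar, value.length()));
    }
    return chunks;
  }
}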

The Request Framework Advantage:

I hope from the above description of my Request Framework and Data Model, you can see real-world applications where this would be extremely useful in your own systems. For me, on both professional projects and personal programming projects, this framework and data model have grown to become the most useful tool in my arsenal for tackling complex Golden Source and In-Flight data separation issues, as well as for delivering on the business requirement of changing the Request Model quickly for short time-to-market releases to production. The framework and data model above definitely deliver in the agile development world. In an upcoming article, I will dive deeper into the Object Codec utility which I use in conjunction with the Request Framework.

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Architecture | Leave a comment

Windows versus Mac versus Linux? What do you use?

So I’m writing this quick little post from my favorite Starbucks, on my MacBook Pro. Do you think I’m a Mac-preferred User?

Well, Mac OS X is BSD (Unix) based; if you are a programmer, you probably already know this, but I wonder how many normal users actually do, or even care?

The first computer I owned was a Commodore 64 in the early 80’s, which is also the first computer I learned to program on, when I was 7 years old (see Commodore BASIC).

I started using MS-DOS in the late 80’s and Windows in the early 90’s. I still occasionally run FreeDOS using VirtualBox for virtualization on Windows.

I first installed Linux, RedHat 5.1, on my home PC in 1998.

I’m actually a late bloomer in the Apple world. I never even touched a Mac until I was interviewed by Apple at their HQ in Cupertino (yes, I have actually been to One Infinite Loop). They asked me to program (I won’t give the interview question away, out of respect to the team), but it was a component that had to implement a pre-defined interface, designed so they could watch me write the code for a Producer-Consumer in real time, which they then ran using a Multi-Thread test driver program they had already prepared.

Here’s a picture I snapped of an entrance to One Infinite Loop:

Bragging Rights: I did it in record time according to the team.

And yes, I did get the job, but I had to turn it down for personal reasons.

Here’s me studying the night before my Full Tech Interview at Apple’s Campus:

Anyway, back to my original point about the Apple interview: I told them it was my first time programming on a Mac. They said that’s ok, we don’t care, but you are going to love it, and just told me that copying and pasting uses the Command key instead of the Control key.

Truth is, today I have multiple Windows boxes, a couple of Linux servers, even embedded Linux single-board computers, and a MacBook Pro at home.

This post is not an argument for or against any one particular Operating System. I now think the common argument over which is better, Windows or Mac, is as irrelevant as the arguments I had in the early 90’s over which Gaming Console was better: Super Nintendo or Sega Genesis.

None of them is better! They are all just different, and each has pluses and minuses. Each has its own cool features, and each has some ways of doing things that suck and make you ask yourself “WHY?”. Each has its benefits, and finally, each has its own vulnerabilities.

If you really believe whatever OS you use does not have any vulnerabilities, you are fooling yourself.

So my take on it is: why not have one of each! If you are a professional software developer, you should be trying out different Operating Systems, and since developers make decent money, why not buy one of each?

In my case, I mainly run three OSes at home: Windows, Mac OS X, and Linux (currently Ubuntu, Fedora, and Angstrom Distros).

So what Operating System does an Enterprise Programmer use? All of them…

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Randomness | Leave a comment

Adapter Factory Design Pattern

It is often the case that Enterprise Applications require one or more Vendor-based products to be integrated into a home-grown system.

While sometimes useful, simply embedding a product directly into your code raises many issues.

In my own past experience, I have integrated everything from Workflow Engines to Unstructured Data Search Indexes.

Some of the common issues that come up when integrating a product or service (it can be commercial or open source or even another home grown framework used within the organization) are:

  • Deployment of new versions of the product.
  • A high-level architectural or firm-wide change in product support or vendor.
  • Trying to integrate multiple products of the same type supplied by multiple vendors seamlessly in your system.
  • Having to transition to a new product or version over an extended period of time or more than one release version of your application.

Many years ago, when I was faced with firm-wide IT political issues around Workflow Engine products (at the time I was using a home-grown patented engine, and the firm’s architecture group decided that all workflow-based applications must use Tibco’s Staffware product), I came up with a strategy for supporting both our own in-house engine and Staffware simultaneously, using two patterns from the Gang of Four (GoF) playbook.

I combined the Adapter Pattern with the Factory Pattern to create what I call the Adapter-Factory Mechanism for Product Integration.

Before we get into the details on how it actually works, I want to share with everyone the diagram…

High Level Design Diagram:

How does it work?

If we take my workflow engine example, I think it will be pretty simple to explain.

Note: Most modern workflow engines are large software products which usually include entire UI builders, and in some instances even their own application servers. In my experience, I only leverage workflow engine packages for their workflow processing; so basically, I use their APIs to interact programmatically with their engines to move requests around a workflow process.

Each Workflow Engine exposes its own set of APIs; in the case of Java, it’s usually a set of JARs, and the APIs can be rather complex, including admin functions and various other things that we might not be interested in.

The first step in the process of creating an Adapter-Factory is to declare a new Interface which every Adapter will implement. This Interface declares certain methods that are required by your application and are somewhat common across the multiple vendors or products you need to integrate with.

In the case of the workflow engine example, these methods are things like GetQueues(), TransitionWorkflow(), etc. A minimal sketch of such an interface follows.
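
The exact method signatures, and the WorkflowQueue proxy type (defined in the next step), are my illustrative assumptions, not a real product's API:

import java.util.List;

// Illustrative sketch of the common Adapter interface; every
// vendor-specific Adapter implements this.
public interface WorkflowEngineAdapter {

  // Returns the workflow queues visible to the application, expressed
  // purely in terms of proxy objects, never vendor types.
  List<WorkflowQueue> getQueues();

  // Moves the given request through the workflow via the named action.
  void transitionWorkflow(String requestId, String action);
}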

One of the fundamental ideas is that you do NOT want anyone outside of the Adapter layer to deal with the native objects used by the vendor- or product-specific APIs. So the second step is creating what I refer to as “Proxy” objects, which may mirror the fields of the Vendor-specific objects, but which can NEVER reference any vendor data types.

Part of the job of the Adapter is to translate to and from these proxy objects and the native vendor objects.
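
A Proxy object is just a plain bean. Here is an illustrative sketch; the fields shown are assumptions:

// Illustrative "Proxy" object. It may mirror the fields of a vendor's native
// queue object, but it references no vendor data types, so nothing above the
// Adapter layer ever depends on a vendor API.
public class WorkflowQueue {
  private String queueName;
  private int pendingItemCount;

  public String getQueueName() { return queueName; }
  public void setQueueName(String queueName) { this.queueName = queueName; }

  public int getPendingItemCount() { return pendingItemCount; }
  public void setPendingItemCount(int count) { this.pendingItemCount = count; }
}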

The next step is to implement one Adapter per Vendor/Product, or even one Adapter per Vendor/Product-Version combination (in the case where you need to support multiple versions of the same Product).

The ability to add a new Adapter at any time mitigates the risk that a Vendor may produce a new version of a product which, for one reason or another (such as support contracts), you need to migrate to in the future. You simply add a new Adapter for the new version of the Product, and keep the old version active in your code base as a fall-back strategy or during a staggered rollout.

Because we “proxy” or mirror every native vendor/product object and never expose those native objects above the Adapter level, this design, besides making it possible to support multiple vendors or versions at the same time, minimizes the changes to the rest of the system when a new vendor or version comes on board.
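
Putting the two previous steps together, here is an illustrative sketch of a single vendor Adapter. VendorClient and VendorQueue are hypothetical placeholders standing in for whatever native API a real product exposes; they are not any actual vendor's classes:

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of one Adapter per Vendor/Product.
public class AcmeWorkflowAdapter implements WorkflowEngineAdapter {

  private final VendorClient client; // the vendor's native API entry point (hypothetical)

  public AcmeWorkflowAdapter(VendorClient client) {
    this.client = client;
  }

  @Override
  public List<WorkflowQueue> getQueues() {
    List<WorkflowQueue> queues = new ArrayList<>();
    for (VendorQueue nativeQueue : client.listQueues()) {
      // Translate the native vendor object into the vendor-neutral proxy.
      WorkflowQueue proxy = new WorkflowQueue();
      proxy.setQueueName(nativeQueue.getName());
      proxy.setPendingItemCount(nativeQueue.getItemCount());
      queues.add(proxy);
    }
    return queues;
  }

  @Override
  public void transitionWorkflow(String requestId, String action) {
    client.fireTransition(requestId, action); // delegate to the native API
  }
}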

Once we have implemented one or more Adapters for the Vendors or Products we need to support in this Adapter-Factory implementation, the next step is to create the Factory itself.

Normally, I make it pretty simple: I have a “Default” Adapter type that will be returned if the caller of the factory does not pass the Name or Flag representing a specific Adapter Type. This default Adapter is usually configured via a property, so I can change the default version without having to change and recompile the factory itself. I usually make Factory objects like these Singletons. Other than these two specifications, the Factory just follows the normal Factory Design Pattern.
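
Here is an illustrative sketch of such a Factory; the adapter names, the property key, and the InHouseWorkflowAdapter class (another implementation of the same interface, not shown) are assumptions:

// Illustrative sketch of the Adapter Factory: a Singleton that returns the
// property-configured "Default" Adapter unless a specific type is requested.
public final class WorkflowAdapterFactory {

  private static final WorkflowAdapterFactory INSTANCE = new WorkflowAdapterFactory();

  // Default adapter name read from a property, so the default can change
  // without recompiling the factory itself (property key is assumed).
  private final String defaultAdapter =
      System.getProperty("workflow.adapter.default", "ACME");

  private WorkflowAdapterFactory() {}

  public static WorkflowAdapterFactory getInstance() {
    return INSTANCE;
  }

  // No name passed: fall back to the configured default Adapter.
  public WorkflowEngineAdapter getAdapter() {
    return getAdapter(defaultAdapter);
  }

  public WorkflowEngineAdapter getAdapter(String name) {
    switch (name) {
      case "ACME":     return new AcmeWorkflowAdapter(new VendorClient());
      case "IN_HOUSE": return new InHouseWorkflowAdapter();
      default:         throw new IllegalArgumentException("Unknown adapter: " + name);
    }
  }
}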

Finally, I always wrap the Factory and the calls to the Adapter implementations in a Facade. This simplifies the client code’s interaction with the Adapter-Factory itself, and makes it very easy to use this design pattern without putting the burden of understanding the pattern itself on the client code developers.
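
An illustrative sketch of the Facade, under the same assumed names:

import java.util.List;

// Illustrative sketch of the Facade: client code calls these static methods
// and never touches the Factory, the Adapters, or any vendor type.
public final class WorkflowService {

  private WorkflowService() {}

  public static List<WorkflowQueue> getQueues() {
    return WorkflowAdapterFactory.getInstance().getAdapter().getQueues();
  }

  public static void transitionWorkflow(String requestId, String action) {
    WorkflowAdapterFactory.getInstance().getAdapter()
        .transitionWorkflow(requestId, action);
  }
}

Client code simply calls WorkflowService.transitionWorkflow(...) and never sees the Factory, the Adapters, or any vendor type.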

This pretty much sums up my Adapter-Factory Design Pattern. I have used it very heavily in Production systems, and working with multiple vendor products that provide the same type of service within a single application has become a lot easier to deal with because of this design. I hope this pattern becomes a useful tool in your toolbox when designing and developing your own systems.

Final Note: If you think about it, JDBC itself is an Adapter-Factory!

Just Another Stream of Random Bits…
– Robert C. Ilardi
Posted in Architecture | 2 Comments